Fake News
Why deepfakes might become the next big weapon in a digital society
By Breanna Mroczek
If you’re one of the 7 million people who have watched the YouTube video “Bill Hader channels Tom Cruise,” then you already have a passing familiarity with deepfakes. In the video, actor Bill Hader seemingly, and seamlessly, transforms into Tom Cruise and Seth Rogen during an interview with David Letterman. The video is remarkably well edited and convincing, but it is, of course, not a recording of something that actually happened.
Likewise, in May 2019 a very convincing, but entirely AI-generated, voice recording of comedian Joe Rogan circulated around the internet and stunned users with what was called the most realistic voice clone to date. The audio deepfake was created by machine learning engineer Joseph Palermo and colleagues at Dessa, an artificial intelligence (AI) startup that helps large enterprises apply AI and machine learning to optimize their operations, especially by predicting customer trends. Palermo and his colleagues decided to challenge their machine learning skills by attempting to emulate the personalities of already well-known people and, as the response to the Rogan clip indicates, they succeeded.
But deepfakes are more than just a tool for entertainment. Unfortunately, they hold some troubling real-world risks and challenges.
What are deepfakes?
Palermo identifies three types of deepfakes: visual (such as face-swapping), voice generation/speech synthesis, and written/text-based.
“In all of these cases, what you need to build a deepfake is a lot of data pertaining to that person,” Palermo says. With hundreds of examples of transcriptions of what someone has said, paired with audio of their voice saying it, an algorithm can be trained to create an audio deepfake of someone saying… practically anything you want them to. “If you have that data, then you can type in text of something that someone has never said before and have an algorithm create some very believable-sounding audio clips.”
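To make those ingredients concrete, here is a minimal, purely illustrative PyTorch sketch of the recipe Palermo describes: a model that learns to map transcript text to matching audio, represented here as spectrogram frames. This is not Dessa’s system, whose code was deliberately never released; the toy architecture, shapes, and random stand-in data below are all assumptions for illustration.

```python
# A minimal sketch (not Dessa's actual system) of training a text-to-speech
# voice clone on pairs of transcript text and matching audio.
import torch
import torch.nn as nn

class ToyTextToSpeech(nn.Module):
    """Maps a sequence of character IDs to mel-spectrogram frames."""
    def __init__(self, vocab_size=128, hidden=256, n_mels=80):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)        # characters -> vectors
        self.encoder = nn.GRU(hidden, hidden, batch_first=True)
        self.to_mel = nn.Linear(hidden, n_mels)              # vectors -> spectrogram frames

    def forward(self, char_ids):
        x = self.embed(char_ids)
        x, _ = self.encoder(x)
        # (batch, time, n_mels); a vocoder would turn these frames into a waveform
        return self.to_mel(x)

model = ToyTextToSpeech()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Stand-in for the "hundreds of examples" of (transcript, audio) pairs:
# random tensors shaped like character IDs and mel-spectrogram targets.
char_ids = torch.randint(0, 128, (8, 50))    # 8 clips, 50 characters each
target_mels = torch.randn(8, 50, 80)         # matching spectrogram frames

for step in range(3):                        # real training runs for many epochs
    pred = model(char_ids)
    loss = loss_fn(pred, target_mels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Real voice-cloning systems add an attention mechanism to align characters with audio frames and a separate vocoder to convert predicted spectrograms back into sound, but the principle is the same: with enough paired examples of a voice, the model can speak new text in it.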
There are, of course, serious societal implications of algorithms that can make it look and sound as if anyone has said or done anything. Palermo notes that one of the earliest malicious applications of deepfakes was face-swapping in pornography: making it appear that someone had performed in a porn video when they hadn’t.
“It doesn’t take much thinking to realize that deepfakes can be a very disruptive and dangerous thing,” Palermo says. “There is so much havoc that you could wreak with this kind of technology. You could try to ruin someone’s reputation. For instance, imagine a call from someone pretending to be a CEO, asking employees in a panicked, urgent voice to immediately transfer a large sum of money somewhere. Or imagine getting a phone call from your friend, and they’re asking you questions about things you probably wouldn’t talk to anyone else about. You don’t suspect it isn’t your friend because, well, it sounds so much like your friend. The list of harmful things you could do goes on and on.” Because deepfakes are a relatively new application, many people are unaware of the risks and unprepared to recognize and contend with them. Palermo suggests a good starting point: get familiar with deepfakes, anticipate the risks to oneself or one’s company, and then prepare for those encounters.
“I want people to be aware that this technology is possible so that, if they get a weird phone call from somebody they know, they at least have this in the back of their mind and maybe recognize what’s happening.”
Mitigating risks
While it’s the norm in the machine learning community to share work and release open-source software, Palermo and his team decided not to release the code they used to create the Joe Rogan deepfake, to keep the technology from falling into the wrong hands.
Moreover, simple awareness that deepfakes exist should make people think twice about what they’re hearing or seeing. Palermo explains that deepfakes pose a risk to the information ecosystem and to the way consumers receive and interpret news and information.
“People are very aware now that the 2016 United States presidential election was highly manipulated by external forces, and that was done even without deepfakes at all. With upcoming elections around the world, it’s something to be aware of.”
While deepfakes have the potential to add to the “fake news” that has proliferated across social media and news outlets, their existence doesn’t change the critical way in which we should already be reading and analyzing information.
“An infinitely malleable medium of text already exists. Anybody can write a blog post or an essay which is highly misleading or just outright full of lies, and we have ways to deal with this. We look at the source. Where is it coming from? Is it coming from The New York Times, or is it coming from some random blogger? What about the internal consistency of what’s being said? Does it make sense? Is it consistent with the other things that we know? For instance, if tomorrow there was a deepfake of Joe Biden saying some egregious things, and it just came from a random YouTube account, I think we would need to be very suspicious. I want people to recognize that we can use some of the same tools for truth that we use in other domains.”
“I don’t want to alarm people to an unnecessary degree,” Palermo says. “Although there are a lot of potential downsides, there are things we can do to prepare, concrete steps that can be taken. There is reason to expect that we will be able to handle the applications that people are most concerned about. I don’t think [deepfakes] will destroy society.”
Palermo will share more insights on deepfakes at TEDxToronto on Saturday, October 26; you can watch the livestream for free at tedxtoronto.com.