Understanding Deepfakes

By: Soniya Shah

Deepfake technology uses artificial intelligence and machine learning models to manipulate videos. A deepfake is a doctored video that shows someone doing or saying something they never did. While the technology can be used to make funny spoofs, it also has much darker, more dangerous implications. Anyone can create doctored videos with widely available online software and applications. Convincing deepfakes are easier to make of public figures, because far more footage of them exists to train on than of private individuals. Donald Trump, for example, is a frequent target because of the sheer number of available clips of him, which makes it easy to create videos of him saying or doing things that never happened.

Every method for making deepfake videos relies on machine learning models to generate the content, typically a pair of models known as a generative adversarial network (GAN). One model, the generator, trains on a data set (which is why a larger data set makes the videos more believable) and produces the doctored footage, while a second model, the discriminator, tries to detect the forgeries. The two models train against each other until the discriminator can no longer tell the forgeries from real footage. While amateur deepfakes are usually easy to spot with the naked eye, professional ones are much harder to suss out. Because the generating models keep getting better over time, especially as the training data improves, relying on digital forensics to detect deepfakes is spotty at best.
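To make that adversarial loop concrete, here is a minimal sketch in PyTorch. Everything in it is a placeholder assumption for illustration: the tiny fully connected networks, the random "real" data, and all of the sizes. A real deepfake pipeline trains large convolutional models on thousands of video frames, but the back-and-forth between the two models is the same.

```python
# A minimal GAN training loop: the generator learns to produce fakes,
# the discriminator learns to flag them, and each step of one pushes
# the other to improve. All sizes and data here are illustrative.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

# Generator: turns random noise into a fake sample (a stand-in for a frame).
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim), nn.Tanh(),
)

# Discriminator: scores how likely a sample is to be real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_batch = torch.rand(32, data_dim) * 2 - 1  # placeholder "real" frames
real_labels = torch.ones(32, 1)
fake_labels = torch.zeros(32, 1)

for step in range(1000):
    # 1. Train the discriminator to tell real frames from generated ones.
    noise = torch.randn(32, latent_dim)
    fake_batch = generator(noise).detach()  # don't update the generator here
    d_loss = loss_fn(discriminator(real_batch), real_labels) + \
             loss_fn(discriminator(fake_batch), fake_labels)
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2. Train the generator to fool the discriminator into labeling
    #    its output "real".
    fake_batch = generator(torch.randn(32, latent_dim))
    g_loss = loss_fn(discriminator(fake_batch), real_labels)
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

The key point for detection is visible in the loop itself: the generator's training signal comes directly from whatever the discriminator can still catch, so any detectable flaw is exactly what gets trained away next.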

Should we be worried? A video that shows someone doing something unforgivable has serious implications. What if such a video were timed right before an election and swayed the results? The other problem is that creating the videos is relatively easy, given how widely accessible deepfake technology is. Deepfakes themselves are not illegal, and they raise First Amendment questions, because there is a good chance they are protected as free speech. At the same time, there are concerns about national security risks. Some states are already taking action against the technology: Virginia, for example, recently made deepfake revenge pornography illegal.

In June 2019, the House Intelligence Committee held a hearing on the risks of deepfakes, listening to experts on AI and digital policy about the threats they pose. The committee said it aimed to “examine the national security threats posed by AI-enabled fake content, what can be done to detect and combat it, and what role the public sector, the private sector, and society as a whole should play to counter a potentially grim, ‘post-truth’ future”. The hearing was timely, given the doctored video of House Speaker Nancy Pelosi that went viral in early summer 2019. That video raised concerns that manipulated footage would become the latest tool for spreading misinformation. It is obviously troubling when viewers can no longer trust a video that appears real, especially one of high-ranking government officials. The experts who spoke at the hearing agreed that social media companies need to work together on consistent industry standards to curb the spread of such videos.
