Detect Deepfake Videos to Prevent Disinformation and Conspiracy Theories
Currently, deepfakes are largely used by amateur hobbyists for morphing celebrity faces into adult videos, and by unsavory political elements for spreading fake news. However, experts are worried about even more dangerous uses of the technology in the not-so-distant future. So here, we list some of the ways you can detect deepfake videos and avoid becoming a victim of malicious disinformation.
What are Deepfake Videos?
A combination of the terms ‘deep learning’ and ‘fake’, deepfakes refer to manipulated media that use artificial intelligence or deep learning techniques to create audio or video that distorts reality. The technology uses artificial neural networks to create hyper-realistic videos that seem to show people saying or doing things they never did in real life. The most virulent examples include videos morphed with the help of machine learning to put words in the mouths of politicians, creating confusion about their policies and influencing elections. Another toxic use of deepfakes is morphing the faces of celebrities into adult videos, a massive violation of privacy and dignity.

Deepfakes have become a serious problem over the past few years, and it’s only getting worse with the availability of faster hardware and more sophisticated software. The technology has gained notoriety for its extensive use in pornographic videos, fake news, and elaborate hoaxes.

However, not all uses of deepfakes are shady, as proven by the following video, created by MIT scholar Alexander Amini to get a laugh out of his students. It seemingly shows former US President Barack Obama inviting students to join Amini’s lecture on deep learning at the university. However, it is a deepfake through and through, as the researcher clearly revealed on his YouTube channel.
How Do Deepfakes Work?
Deepfakes rely on an artificial neural network called an ‘autoencoder’, which learns efficient data codings in an unsupervised manner. Autoencoders are typically used for facial recognition, as well as for finding the semantic meaning of words, among other tasks. In the case of deepfake videos, the technology first uses an encoder to train a neural network on many hours of real video footage of the target individual. Then, a decoder reconstructs a new image using key information about their facial features and body posture. This lets the algorithm superimpose the target’s facial and physical features onto the person in the original video.

A well-known technology in this regard is a specialized class of deep-learning algorithms called generative adversarial networks (GANs), which are often attached to the decoder for more accurate results. A GAN pits the generator (here, the decoder) against a discriminator: the former creates new images from the source material, while the latter determines whether each newly created image can pass for real footage. This pushes the generator to create images that mimic reality extremely well, because the adversarial algorithm catches any flaws. It also makes deepfakes extremely difficult to combat, as they are constantly evolving: any time a defect pops up, it can be corrected automatically through machine learning.

Because this feedback loop requires little human input, GANs have become the go-to choice for most deepfake creators. However, the technique is complicated and takes much more time and data to create realistic compositions. Also, while GANs are good at synthesizing individual images, they have a hard time preserving temporal consistency, which means they need human intervention to keep images aligned from one frame to the next.
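To make the encoder-decoder idea above more concrete, here is a minimal sketch of a face-swap autoencoder in PyTorch. Everything in it (the 64×64 face crops, the toy layer sizes, the random tensors standing in for training data) is an illustrative assumption rather than anything described in the article; a real pipeline would add face detection and alignment, far deeper networks, much more training, and optionally a GAN discriminator as described above.

```python
# Minimal face-swap autoencoder sketch: one shared encoder, one decoder per identity.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: compresses a 64x64 face crop into a low-dimensional code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),    # 64 -> 32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32 -> 16
            nn.Conv2d(128, 256, 4, stride=2, padding=1), nn.LeakyReLU(0.2), # 16 -> 8
            nn.Flatten(),
            nn.Linear(256 * 8 * 8, latent_dim),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs a face image from the shared code."""
    def __init__(self, latent_dim=256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 256 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(256, 128, 4, stride=2, padding=1), nn.ReLU(),  # 8 -> 16
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 256, 8, 8))

# One shared encoder, two decoders: identity A (the performer) and identity B (the target).
encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()
loss_fn = nn.L1Loss()

# Training (sketched): each decoder learns to reconstruct faces of *its own* identity.
faces_a = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person A
faces_b = torch.rand(8, 3, 64, 64)  # stand-in for real face crops of person B
loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
        + loss_fn(decoder_b(encoder(faces_b)), faces_b))

# The "swap": encode person A's expression and pose, but decode with person B's decoder,
# which renders B's face performing A's expression.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a))
print(swapped.shape)  # torch.Size([8, 3, 64, 64])
```

The key design point is that the encoder is shared between both identities, so it is forced to learn a generic representation of expression and pose, while each decoder specializes in rendering one person’s face from that representation.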
What Are Shallow Fake Videos?
A shallow fake video is a tampered version of an existing real video, created to project a distorted reality. This often involves selective editing, manipulating the speed of people’s speech, or even changing the tonality to make it sound like someone is angry, intoxicated, or making fun of a serious issue when none of that is true. One notable recent case is the notorious doctored video of US House Speaker Nancy Pelosi, in which her speech was slowed down to make her sound inebriated. These videos differ from deepfakes insofar as they are real videos manipulated using traditional video-editing tools rather than AI algorithms.
Deepfakes: History and Applications
Photo manipulation techniques date back to the 19th century, and the technology improved steadily throughout the twentieth century before the explosion in AI and machine learning made it a massive problem for netizens worldwide. AI-driven video manipulation techniques have been studied by researchers since the 1990s, and many of the methods have since been adopted by filmmakers worldwide. One of the best-known examples of such techniques in the mainstream entertainment industry was the digital resurrection of the late actor Paul Walker for Fast and Furious 7 in 2015. However, while it took dozens of experts several weeks to produce a believable recreation of Walker, hobbyists with very little coding knowledge can now create new deepfake videos in hours (or sometimes even less) using newer techniques and algorithms. The phenomenon first entered public consciousness in 2017, when a Redditor used the technique to create and post fake porn videos of celebrities.
Dangers of Deepfake Videos
Deepfake videos are a danger to unsuspecting users, who may be bombarded with images of a supposed destabilizing event, like a war or a terrorist attack, that never happened. This can cause resentment and discontent in society, leading to an increase in politically motivated attacks based on people’s racial, religious, and ethnic identities. The technology could also be used to spread FUD (fear, uncertainty, and doubt) about natural disasters, causing widespread panic. Experts also predict that, if left unchecked, such videos could provoke deep political crises and even disrupt international relations. Another major problem that has already assumed epidemic proportions is the threat to unsuspecting women. Often referred to as non-consensual pornography, deepfake adult videos reportedly accounted for more than 90% of all deepfakes on the internet in 2019. While it started with morphed videos depicting celebrities like Gal Gadot and Alexandra Daddario, the practice has since expanded to target regular women as part of fake revenge porn campaigns.
How to Detect Deepfake Videos?
Detecting deepfake videos is a job that even experts often find difficult without the right tools. However, researchers at the Massachusetts Institute of Technology (MIT) have come up with several suggestions that can help regular folks tell the difference between real videos and deepfakes. According to them, you need to pay close attention to the face when trying to check whether a video of a human subject is real or fake. That’s because high-end deepfake manipulations are almost always facial transformations.

The areas that deserve the closest attention are the cheeks and the forehead. Does the skin appear too smooth or too wrinkly? Does the apparent age of the skin match the age of the hair and eyes? “DeepFakes are often incongruent on some dimensions”, say the researchers. Similarly, the eyes and the eyebrows can be tell-tale signs for experienced deepfake spotters, because shadows in deepfake videos do not always appear where you would expect them. “DeepFakes often fail to fully represent the natural physics of a scene”, they say.

Another dead giveaway is facial hair. Deepfakes might add or remove a mustache, sideburns, or a beard, but they often fail to make these transformations look fully natural. The same goes for facial moles, which often don’t look natural enough in deepfakes. The size and color of the lips can also hint at whether a video is genuine. Finally, the rate and speed of blinking can speak volumes: unnaturally frequent or infrequent blinking could indicate that a video is a deepfake.
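The blinking cue lends itself to a rough automated check. Below is a hedged sketch, not the MIT researchers’ method, that estimates the blink rate of a clip using OpenCV, dlib’s 68-point facial landmarks, and the well-known eye-aspect-ratio heuristic. The threshold, the landmark model path, and the file name are illustrative assumptions, and an abnormal blink rate is only a hint that merits closer inspection, never proof of manipulation.

```python
# Rough blink-rate estimator for a video clip (illustrative heuristic only).
import cv2
import dlib
from scipy.spatial import distance

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed to be downloaded separately
EAR_THRESHOLD = 0.21  # eye aspect ratio below this roughly means "eye closed" (heuristic)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(pts):
    """Eye aspect ratio (Soukupova & Cech): drops sharply when the eye closes."""
    a = distance.euclidean(pts[1], pts[5])
    b = distance.euclidean(pts[2], pts[4])
    c = distance.euclidean(pts[0], pts[3])
    return (a + b) / (2.0 * c)

def blink_rate(video_path):
    """Return estimated blinks per minute for the first face found in each frame."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks, closed, frames = 0, False, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            shape = predictor(gray, face)
            pts = [(shape.part(i).x, shape.part(i).y) for i in range(68)]
            # Landmarks 36-41 and 42-47 are the left and right eyes in the 68-point model.
            ear = (eye_aspect_ratio(pts[36:42]) + eye_aspect_ratio(pts[42:48])) / 2.0
            if ear < EAR_THRESHOLD and not closed:
                blinks, closed = blinks + 1, True
            elif ear >= EAR_THRESHOLD:
                closed = False
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# People typically blink roughly 15-20 times a minute; a rate far outside that range
# is a flag for closer inspection, not proof that the clip is a deepfake.
print(blink_rate("suspect_clip.mp4"))  # hypothetical file name
```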
According to MIT researchers, high-quality deepfakes are not easy to detect, but “with practice, people can build intuition for identifying what is fake and what is real”. The researchers have also created a full-fledged webpage where folks can watch a series of clips and try to guess whether each one is real or fake. You can try out your deepfake detection skills on MIT’s Detect Fakes website.
Deepfakes: Prevention and Legislative Action
Various countries around the world are already trying to address the clear and present danger posed by AI-generated deepfakes. China banned deepfake videos back in 2019, and the US state of California introduced similar legislation earlier that same year to make political deepfakes illegal, outlawing the creation or distribution of doctored videos, images, or audio of politicians within 60 days of an election. Since then, other US states, including Texas and Virginia, have also criminalized deepfake porn. In December 2019, President Trump signed the United States’ first federal law addressing deepfakes as part of the National Defense Authorization Act for Fiscal Year 2020.

Meanwhile, in India, there are no specific laws on deepfake media; in fact, laws related to artificial intelligence algorithms are sketchy at best. One of the most remarkable uses of deepfakes in the country was seen during the 2020 Delhi elections, when the BJP’s IT cell released an official campaign video purporting to show its chief ministerial candidate, Manoj Tiwari, appealing to voters in Hindi, Haryanvi, and English. The problem is, only the Hindi video was real; the other two were deepfake clips fabricated from the original to reach a larger cross-section of voters.
Prevent the Spread of Disinformation by Spotting Deepfake Videos
Once the preserve of multi-million dollar Hollywood productions and state-sponsored agencies, deepfakes have become increasingly democratized in recent times, enabling regular netizens to create deepfakes using dedicated apps and websites. With the astronomical increase in the number of deepfakes in recent years, being able to detect them is more important than ever. We hope this article gave you a more holistic idea of the technology, the threats it poses, and the signs to watch out for to better detect deepfake videos going forward. So, have you ever fallen victim to deepfakes from unsavory political action groups or conspiracy theorists? Let us know in the comments down below.