Deep Fakes: A Threat to Truth

Business Insider

In most situations, video footage is assumed to be fact. What is found on a security tape is treated as indisputable evidence, and fiction is best left to cartoons and special effects. Deep fakes, the use of artificial intelligence algorithms to swap one person's face for another's, blur the line between fact and fiction. What was once an easily dismissible fake news headline is now bolstered by video evidence. As the algorithms that create deep fakes become smarter, many are questioning the consequences of potentially slanderous deep fakes and the legislative approaches to mitigating their harm.

In 2020, a State Farm commercial aired during ESPN's The Last Dance. The commercial appeared to showcase a 1998 clip of an ESPN analyst making an accurate prediction about the year 2020. The clip was a deep fake, generated with the help of artificial intelligence software. Viewers likely realized that the clip was fake, but they might not have considered the ethical implications of the video and, by extension, of all deepfakes.

At the beginning of 2019, the number of deepfakes on the internet nearly doubled over the span of three months (Toews 2020). As artificial intelligence technology continues to improve, this growth will continue. While some deep fakes, such as the doctored clip of the analyst, are lighthearted, malicious deep fakes pose a serious threat to society. Politics is one example: deep fakes can be a powerful mechanism for destroying a public figure's credibility by distorting their words, and for spreading false information to the individuals who view them. Because deep fakes can cause harm across so many spheres of society, they are a concern for everyone.

There are steps that tech firms, social media platforms, and the government are taking to alleviate this problem. Facebook teamed up with researchers to create deep fake detection software. Its DeepFace program identifies human faces with 97% accuracy, employing a nine-layer neural network trained on over four million images, and face-recognition technology of this kind underpins efforts to detect deep fakes.
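To make the phrase "nine-layer neural network" concrete, here is a minimal sketch, in PyTorch, of a nine-layer image classifier that scores a photo as real or fake. It is a generic illustration under assumed layer sizes, not the published DeepFace architecture.

```python
# A generic nine-layer image classifier sketch (PyTorch).
# Illustrative layer sizes only; not the actual DeepFace network.
import torch
import torch.nn as nn

nine_layer_net = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1),   # 1: convolution
    nn.ReLU(),                                    # 2: nonlinearity
    nn.MaxPool2d(2),                              # 3: downsampling
    nn.Conv2d(32, 64, kernel_size=3, padding=1),  # 4: convolution
    nn.ReLU(),                                    # 5: nonlinearity
    nn.MaxPool2d(2),                              # 6: downsampling
    nn.Flatten(),                                 # 7: to a feature vector
    nn.Linear(64 * 16 * 16, 128),                 # 8: fully connected
    nn.Linear(128, 2),                            # 9: real vs. fake logits
)

# Toy usage: score one 64x64 RGB image.
image = torch.randn(1, 3, 64, 64)
print(nine_layer_net(image).softmax(dim=-1))  # probabilities over {real, fake}
```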

The United States government has also been addressing deep fakes through legislation. The 2021 NDAA, which recently became law, requires the Department of Homeland Security to issue a report on deep fakes every year for the next five years, detailing how deep fakes can be used to cause harm. The Identifying Outputs of Generative Adversarial Networks Act, signed into law in December 2020, directs the National Science Foundation to research deep fake technology and measures of authenticity (Briscoe 2021).

As technology continues to improve, deep fakes will become more advanced, likely becoming indistinguishable from real video. Their potential for harm needs to be addressed at every level of society: by the governments whose words they distort, the viewers they manipulate, and the social media platforms they use to spread harmful misinformation.

Deepfakes and the Spread of Misinformation

Deepfakes began to garner attention in 2017, and in 2018 a viral video showed Jordan Peele putting words in former President Barack Obama's mouth. In the years since, the disturbing dangers of deepfakes have come to light, affecting people around the globe.

What Are Deepfakes and Voice Cloning?

A deepfake is an image or video in which artificial intelligence replaces one person's face with the likeness of another. Deepfakes can be as harmless as someone humorously impersonating a friend. However, recent trends have shown the use of deepfakes in political spheres, where they can be used to spread misinformation.

Voice cloning is a subset of deepfakes that focuses on replicating someone's voice. It has seen wider use, and various companies have introduced it as a fun novelty tool. Adobe's VoCo is at the forefront: after listening to roughly twenty minutes of someone's speech, the technology can replicate their voice. Adobe is also researching a watermark detection system to prevent forgery.
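To show how such a watermark check could work in principle, here is a toy sketch: the synthesizer adds a keyed, low-amplitude pseudorandom signature to the audio, and a detector correlates against the same signature. Adobe has not published VoCo's actual scheme, so every name, parameter, and threshold below is a hypothetical assumption.

```python
# Toy sketch of watermarking synthesized speech with a keyed signature.
# Illustrative only; not Adobe's actual VoCo watermark.
import numpy as np

KEY = 42          # secret key shared by synthesizer and detector
STRENGTH = 0.01   # signature amplitude, small relative to the speech

def embed_watermark(audio: np.ndarray, key: int = KEY) -> np.ndarray:
    """Add a low-amplitude pseudorandom signature to a mono signal."""
    rng = np.random.default_rng(key)
    return audio + STRENGTH * rng.standard_normal(audio.shape[0])

def detect_watermark(audio: np.ndarray, key: int = KEY) -> bool:
    """Correlate with the keyed signature; a high score means marked."""
    rng = np.random.default_rng(key)
    signature = rng.standard_normal(audio.shape[0])
    score = np.dot(audio, signature) / audio.shape[0]
    return score > STRENGTH / 2  # threshold chosen for this toy setup

# One second of stand-in "speech" at 16 kHz.
speech = 0.1 * np.random.default_rng(0).standard_normal(16_000)
print(detect_watermark(speech))                   # False: unmarked audio
print(detect_watermark(embed_watermark(speech)))  # True: watermarked audio
```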

What are the Potential Consequences?

Though deepfakes have not yet been used to spread misinformation on a global scale, they are already being used against ordinary people, and voice cloning is the most popular form. Large companies, among them Adobe and Respeecher, have been developing beta technology that can replicate voices. In the wrong hands, that technology could be used to impersonate public figures.

Recently, voice cloning was used in a memorial film about the late Anthony Bourdain, supplying the narrator's voice. The project began after he had passed, and his consent was never clearly established. Many people were quick to point out that he never actually said the lines, calling the result "ghoulish." This raised a new question: is it ethical to clone the voice of someone who has died or cannot give consent? In Bourdain's case, many on social media decided it was unethical, while many who were close to him raised no complaints about the use of his voice.

Deepfakes have also been used in other unethical ways. One example is the case of Noelle Martin, an Australian activist whose face was deep-faked into adult films when she was 17. Images from her social media accounts were used to digitally graft her face onto pornographic photos and videos. She contacted various government agencies and the companies themselves, but nothing worked; the person responsible was anonymous, which made them virtually impossible to track down, and so nothing happened.

What is being done?

Researchers at various institutions are using different methods to identify deepfakes. At the University at Albany, Professor Siwei Lyu has worked on detecting deepfakes using "resolution inconsistencies," which occur when a swapped face does not match its surroundings in a photo or video. These inconsistencies give researchers a concrete signal around which to build detection methods.
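As a rough illustration of the resolution-inconsistency idea, the sketch below compares the sharpness of the face region to the rest of the frame using the variance of the Laplacian, a common sharpness proxy. The face box, threshold, and file name are assumptions for illustration; this is not Professor Lyu's published method.

```python
# Rough sketch: flag a frame whose face region is much blurrier than
# its surroundings, one symptom of a pasted (deep-faked) face.
import cv2
import numpy as np

def resolution_mismatch(frame_bgr: np.ndarray,
                        face_box: tuple,
                        ratio_threshold: float = 0.5) -> bool:
    x, y, w, h = face_box
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    lap = cv2.Laplacian(gray, cv2.CV_64F)      # edge response per pixel
    face_mask = np.zeros(gray.shape, dtype=bool)
    face_mask[y:y + h, x:x + w] = True
    face_sharpness = lap[face_mask].var()      # sharpness inside the face
    bg_sharpness = lap[~face_mask].var()       # sharpness everywhere else
    return face_sharpness < ratio_threshold * bg_sharpness

# Usage: the face box would normally come from a face detector
# (e.g., cv2.CascadeClassifier); both it and "frame.png" are stand-ins.
frame = cv2.imread("frame.png")
print(resolution_mismatch(frame, face_box=(100, 80, 120, 120)))
```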

At Purdue University, Professor Edward Delp and former research assistant David Güera are using convolutional neural networks to detect deepfakes in videos. Their network looks for "frame-level inconsistencies," the artifacts created when deepfake technology places one person's face onto another, which shift from frame to frame. They train the neural network on sets of deep-faked videos so that it learns to recognize these artifacts, with the goal of identifying deep-faked videos produced by popular tools.

Researchers at UC Riverside and UC Santa Barbara use two different methods, a CNN and an LSTM, to see how well each identifies deep-faked media. A convolutional neural network (CNN) is, at its most basic, a deep learning algorithm trained to tell images apart by learning the features that distinguish them; for deepfake identification, it can be used to find the inconsistencies described above. LSTM-based (long short-term memory) networks complement the CNN: according to the UC Riverside and Santa Barbara research, they help with classification and localization in the processed media, which helps organize a wide database of media so that results can be found more easily.

The researchers test both methods on how well they identify the inconsistencies present in these videos, and they conclude that both the CNN and the LSTM-based networks are effective at identifying deepfakes. Looking to the future, they would like to see the two methods combined, as sketched below.
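To show what combining the two methods could look like, here is a minimal PyTorch sketch in which a small CNN summarizes each frame and an LSTM reads the resulting sequence to score a whole clip as real or fake. The layer sizes and names are assumptions chosen for readability, not the published Purdue or UC Riverside/Santa Barbara models.

```python
# Minimal CNN + LSTM deepfake-video classifier sketch (PyTorch).
# A small CNN summarizes each frame; an LSTM reads the sequence of
# frame features and a final layer scores the clip real vs. fake.
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    def __init__(self, feature_dim: int = 128, hidden_dim: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(              # per-frame feature extractor
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
            nn.Flatten(),
            nn.Linear(32 * 4 * 4, feature_dim),
        )
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)   # one logit: P(fake)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, frames, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.cnn(clip.reshape(b * t, c, h, w)).reshape(b, t, -1)
        _, (last_hidden, _) = self.lstm(feats)  # reads frame-to-frame changes
        return self.head(last_hidden[-1])       # one score per clip

# Toy usage: a batch of two 8-frame, 64x64 RGB clips.
model = CnnLstmDetector()
clips = torch.randn(2, 8, 3, 64, 64)
print(torch.sigmoid(model(clips)))  # estimated probability each clip is fake
```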

Beyond research, public advocacy in the law is another way to help stop deepfakes. The Malicious Deep Fake Prohibition Act of 2018 would have established a new criminal offense for the distribution of fake online media that appears realistic. Though it was not passed, it would have been a stride in the right direction and would have helped many who have been wrongfully affected by these technologies.

Deepfakes and the Spread of Misinformation

Webroot

"Yellow journalism" is a phrase coined during the Spanish-American War to describe reporting based on dramatic exaggeration, and sometimes flat-out fallacies. With its sensational headlines and taste for the theatrical, yellow journalism, or as it's better known now, "fake news," is particularly dangerous to those who regard it as absolute truth. In the days of Hearst and Pulitzer, it was much easier to know what was fabricated and what was fact. However, in this age of technology, yellow journalism isn't so "yellow" anymore. It's the jarring blue glow emitted from millions of cell phones aimlessly scrolling through social media; it's the rust-colored sand blowing against the Silicon Valley complexes where out-of-touch tech giants reside; it's the uneasy shifting of politicians' black suits as their bodies fail to match up with their words; it's the curling red lips of innocent women, their faces pasted onto figures entwined in unpleasant acts.

It would be redundant to say the internet has allowed misinformation to spread, but it's more necessary than ever to examine the media you're consuming. Deepfakes, artificial images made by overlaying someone's face (usually a famous public figure's) so that they can be manipulated to say anything, have begun to surface more and more. Working in conjunction with deepfakes is artificial intelligence, or AI, in which a machine exhibits human-like intelligence by mimicking our behavior: recognizing faces, making decisions, solving problems, and of course driving a car, as we've seen with the emergence of Teslas. AI has been particularly eye-opening in revealing just how much trust we put in mere machines, and deepfakes are a perfect demonstration of how easily that trust can be shattered.

When you search "deepfakes," some of the first results are websites where you can make your own. That's how easy it is. The accessibility of such technology has long been seen as an asset, but now it's as if Pandora's box has been opened. Once people realize virtually anything is possible, there's no end to the irresponsible uses of the internet.

Legally, however, many deepfake scandals can be considered defamation. A case recently came to light in Bucks County, PA, where a jealous mother created deepfakes of her daughter's teammates, intending to frame them for inappropriate behavior. Police were first informed when one teammate's parents reported that their daughter had been harassed with messages from an unknown number. The messages included "pictures from the girl's social media accounts which had been doctored to look as if she was naked, drinking alcohol, or vaping." The photos were intended to get the girl kicked off the team. Fortunately, police were able to trace the IP address and arrest the perpetrator. She now faces three misdemeanor counts each of cyber harassment of a child and harassment, proving that just because an act is done "anonymously" via the internet doesn't mean you can't get caught. In fact, the internet provides just as many opportunities for conviction as it does for narrow escape. As technology becomes more and more apt to cause damage, cyber harassment is treated as a serious crime: if convicted, the mother faces six months to a year in prison. Pennsylvania has active anti-cyberbullying legislation in place that gives authorities the right to intervene in incidents occurring off school property, and the state makes cyber harassment of a child a third-degree misdemeanor, punishable through a diversionary program.

Women have frequently been the victims of sexual violence via deepfakes; "pornographic deepfakes exist in the realm of other sexually exploitative cybercrimes such as revenge porn and nonconsensual pornography." According to the Fordham Law Review, one journalist described deepfakes as "a way for men to have their full, fantastical way with women's bodies," emphasizing that this is still a sexually abusive act, as it demeans and reduces women to fantastical objects. Additionally, given the uncertainty about how much of this new technology works, it's easy for these videos to be leaked and for a woman to have her reputation ruined over something she never did. Deepfakes have been used to intimidate and invalidate powerful women as well; men who feel threatened by a woman's rise in authority may see them as a means to bring her down.

In 2018, Rana Ayyub, a successful young journalist in Mumbai, fell under scrutiny after a deepfake of her face superimposed on a porn video came into circulation. The stress of the endless harassment sent Ayyub to the hospital, and she withdrew from public life, abandoning her aspirations of working in media. Forty-eight states as well as D.C. have laws against "revenge porn," but there are still limitations on prosecuting the websites that distribute this content. Section 230 of the Communications Decency Act is a federal law that shields websites from prosecution for content posted by third parties. Fortunately, that immunity disappears if the website or webmaster actively takes part in distributing the content. Additionally, most states impose a fine and/or a prison sentence for the distribution of nonconsensual porn by a citizen. Federal legislation to address deepfake pornography, the Malicious Deep Fake Prohibition Act of 2018, was introduced that year, but it did not advance, proving there is still a long way to go in administering justice to victims of this heinous crime.

Most detrimental to American life as a whole, especially given our fiercely divided nation, is the use of deepfakes to spread political misinformation. With former President Trump's social media presence considered a hallmark of his presidency, and the majority of citizens having access to presidential briefings on TV, our elected officials' ideals are more available than ever. However, America has always allowed itself to be swept up in illusions. In the very first televised debate, Nixon versus Kennedy in 1960, Kennedy was widely believed to have been given an automatic edge because of his charisma and good looks. In this day and age, though, it's crucial our country looks more than skin deep. A video of President Biden sticking his tongue out, and another of Biden making a statement that was proven to be fake, were both made from intricately spliced and edited audio clips. The second clip, reposted by one of Bernie Sanders' campaign managers, showed Biden apparently saying "Ryan was right," a reference to former Speaker of the House Paul Ryan's desire to go after Social Security and Medicare. Even within the Democratic Party itself, fake media was being used to drum up support for a particular candidate, creating harmful disunity. However, change is on the horizon: the National Defense Authorization Act for Fiscal Year 2020 included deepfake legislation with three provisions. The first requires a comprehensive report on the foreign weaponization of deepfakes. The second requires the government to notify Congress when foreign deepfake misinformation is used to target U.S. elections. The third establishes a "Deepfake Prize" competition to incentivize the development of deepfake recognition technologies.

In a world where media is so easily manipulated, it's up to citizens to be smart consumers. By reading news from a variety of sources and closely examining the videos you watch, you stand a better chance of not being "faked out" by deepfakes. Telltale signs include unnatural eye or mouth movement, lack of emotion, awkward posture, unnatural coloring, blurring, and inconsistent audio. Many people worry that in a world where anything can be fake, nothing is real. But there will always be journalists committed to reporting the facts and promoting justice rather than perpetuating lies. When the yellowed edges of tabloids crumple to dust and the cell phone screens fade to black, truth, in its shining array of technicolor, will snuff out the dull lies.