
The Dark Side of Artificial Intelligence: The Growing Threat of Deepfakes

We live in a world where anything seems possible with artificial intelligence. While AI brings significant benefits to industries such as healthcare, its dark side has also emerged: it increases the risk of bad actors launching new types of cyberattacks and manipulating audio and video for fraud and virtual kidnapping. Among these malicious uses, deepfakes are becoming increasingly common.

What are deepfakes?

Deepfakes use artificial intelligence and machine learning (AI/ML) technologies to create convincing, realistic videos, images, audio, and text depicting events that never happened. Not every use is malicious: the Malaria Must Die campaign, for example, produced a video in which legendary footballer David Beckham appeared to launch a petition to end malaria in nine different languages.

However, given that people are naturally inclined to believe what they see, deepfakes do not need to be particularly sophisticated or convincing to effectively spread misinformation or disinformation.

According to the U.S. Department of Homeland Security, concerns surrounding “synthetic media” range from “urgent threat” to “don’t panic, be prepared.”

The term “deepfakes” comes from the fact that the technology behind this type of manipulated media, or “fake,” relies on deep learning methods. Deep learning is a branch of machine learning, which in turn is a part of artificial intelligence. Machine learning models use training data to learn how to perform specific tasks, and they improve as the training data becomes more comprehensive. Deep learning models go a step further, automatically identifying features of the data that facilitate its classification or analysis, training at a more profound or “deeper” level.

The data can include images and video, as well as audio and text. AI-generated text is another form of deepfake that poses a growing problem: while researchers have pinpointed multiple vulnerabilities in image, video, and audio deepfakes that aid detection, identifying deepfake text has proven more difficult.

How do deepfakes work?

Some of the earliest deepfakes emerged in 2017, when Hollywood star Gal Gadot’s face was superimposed onto a pornographic video. Motherboard reported at the time that it was allegedly the work of a single person: a Reddit user calling himself “deepfakes.”

Anonymous Reddit users told the online magazine that the software relies on multiple open source libraries, such as Keras with a TensorFlow backend. Sources mentioned that to collect celebrity faces, they used Google image searches, stock photos and YouTube videos. Deep learning involves a network of interconnected nodes that autonomously perform computations on input data. After sufficient “training,” these nodes organize themselves to accomplish specific tasks, such as convincingly manipulating video in real time.

Today, artificial intelligence is used to replace one person’s face with another’s. To achieve this, the process typically uses an autoencoder, a type of deep neural network (DNN). To learn how to swap faces, the system trains a single shared encoder that maps images of two different people (A and B) into the same compressed data representation, along with two separate decoder networks: one that learns to reconstruct A’s face from that representation, and one that learns to reconstruct B’s.

After training these three networks (the shared encoder and the two decoders), replacing A’s face with B’s is straightforward: each frame of A’s video or image is processed by the shared encoder and then reconstructed using B’s decoder network.
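To make this concrete, below is a minimal, illustrative sketch of that shared-encoder/two-decoder setup in Keras with a TensorFlow backend (the stack the Reddit users reportedly relied on). The image size, layer sizes, training loop, and placeholder data are assumptions for demonstration, not the original software.

```python
# Illustrative sketch of the shared-encoder / two-decoder autoencoder
# described above. All sizes and the random placeholder "faces" are
# assumptions for demonstration, not the original software.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

IMG_SHAPE = (64, 64, 3)  # assumed size of aligned face crops
LATENT_DIM = 256         # assumed size of the shared compressed representation

def build_encoder():
    """Shared encoder: compresses any face into the common latent space."""
    inp = keras.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    x = layers.Flatten()(x)
    return keras.Model(inp, layers.Dense(LATENT_DIM)(x), name="shared_encoder")

def build_decoder(name):
    """Per-identity decoder: reconstructs one face from the latent code."""
    z = keras.Input(shape=(LATENT_DIM,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(z)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same", activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same", activation="sigmoid")(x)
    return keras.Model(z, out, name=name)

encoder = build_encoder()
decoder_a, decoder_b = build_decoder("decoder_A"), build_decoder("decoder_B")

# Two autoencoders that share the same encoder weights.
ae_a = keras.Model(encoder.input, decoder_a(encoder.output))
ae_b = keras.Model(encoder.input, decoder_b(encoder.output))
ae_a.compile(optimizer="adam", loss="mae")
ae_b.compile(optimizer="adam", loss="mae")

# Placeholder data standing in for aligned face crops of persons A and B
# (in practice scraped from image searches, stock photos, and video stills).
faces_a = np.random.rand(32, *IMG_SHAPE).astype("float32")
faces_b = np.random.rand(32, *IMG_SHAPE).astype("float32")
ae_a.fit(faces_a, faces_a, epochs=1, verbose=0)  # learn to reconstruct A
ae_b.fit(faces_b, faces_b, epochs=1, verbose=0)  # learn to reconstruct B

# The swap: encode a frame of A, then reconstruct it with B's decoder.
frame_of_a = faces_a[:1]
swapped = decoder_b(encoder(frame_of_a))  # B's face, A's pose and expression
```

Because both autoencoders share a single encoder, the compressed representation ends up capturing pose, expression, and lighting features common to both identities; feeding A’s encoding into B’s decoder therefore renders B’s face with A’s pose and expression.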

Now, apps like FaceShifter, FaceSwap, DeepFaceLab, and Reface allow users to swap faces easily. Snapchat and TikTok, in particular, make it easy for users to create a variety of real-time effects while requiring less computing power and technical knowledge.

A recent study by Phototutorial estimated that there are 136 billion images on Google Images, and projected that the figure will reach 382 billion by 2030. This means criminals have more opportunities than ever to steal someone’s likeness.

Are deepfakes illegal?

Unfortunately, a plethora of sexually explicit deepfake images of celebrities have emerged, and more and more people, from Scarlett Johansson to Taylor Swift, are being targeted. In January 2024, deepfake images of Swift were reportedly viewed millions of times on X before being removed.

“This is just the most high-profile example of something that has been hurting a lot of people, mostly women, for a long time,” said Woodrow Hartzog, a professor at Boston University School of Law who specializes in privacy and technology law.

It’s a “toxic cocktail,” Hartzog said in an interview with Billboard: an existing problem mixed together with new generative AI tools and a broader backsliding in the industry’s commitment to trust and safety.

In the UK, the Online Safety Act made it illegal, from January 31, 2024, to share intimate images generated by artificial intelligence without consent. The act also introduces further provisions banning the non-consensual sharing of intimate images, as well as threats to share them.

In the United States, however, there is currently no federal law prohibiting the creation or sharing of deepfake images, though there is a growing push to change that. Earlier this year, representatives introduced the No Artificial Intelligence Fake Replicas And Unauthorized Duplications Act (No AI FRAUD Act).

The bill would create a federal framework to protect individuals from AI-generated fakes and forgeries, making it a crime to create a “digital depiction” of any person, living or dead, without consent. The ban also covers the unauthorized use of their likeness and voice.

The threat of deepfakes is so serious that Kent Walker, Google’s president of global affairs, said earlier this year: “We’ve learned a lot over the past decade, and we take the risk of misinformation or disinformation very seriously.

“For elections like we’re seeing around the world, we have set up 24/7 war rooms to identify potential misinformation.”

Featured Image: DALL-E/Canva

