Real-time deepfake detection: How Intel Labs uses AI to fight misinformation

A few years ago, deepfakes seemed like a novel technology whose makers relied on serious computing power. Today, deepfakes are ubiquitous and have the potential to be misused for misinformation, hacking, and other nefarious purposes. 

Intel Labs has developed real-time deepfake detection technology to counteract this growing problem. Ilke Demir, a senior research scientist at Intel, explains the technology behind deepfakes, Intel’s detection methods, and the ethical considerations involved in developing and implementing such tools.

Deepfakes are videos, speech, or images where the actor or action is not real but created by artificial intelligence (AI). Deepfakes use complex deep-learning architectures, such as generative adversarial networks, variational auto-encoders, and other AI models, to create highly realistic and believable content. These models can generate synthetic personalities, lip-sync videos, and even text-to-image conversions, making it challenging to distinguish between real and fake content.
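
To make the idea of a generative adversarial network concrete, here is a minimal sketch of the adversarial setup, assuming PyTorch. The layer sizes, names, and shapes are illustrative placeholders only; real face-synthesis models are far larger and more elaborate.

```python
# Minimal, illustrative GAN setup (PyTorch). Layer sizes are arbitrary
# placeholders; production face-synthesis models are much more complex.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim=100, img_dim=64 * 64 * 3):
        super().__init__()
        # Maps a random latent vector to a flattened synthetic image.
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, img_dim=64 * 64 * 3):
        super().__init__()
        # Scores how likely an image is to be real rather than generated.
        self.net = nn.Sequential(
            nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1), nn.Sigmoid(),
        )

    def forward(self, img):
        return self.net(img)

# The two networks are trained against each other: the generator learns to
# fool the discriminator, and the discriminator learns to tell real from fake.
G, D = Generator(), Discriminator()
fake = G(torch.randn(8, 100))   # batch of 8 synthetic "images"
realness = D(fake)              # discriminator's real/fake scores
```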

The term deepfake is sometimes applied to authentic content that has been altered, such as the 2019 video of former House Speaker Nancy Pelosi, which was doctored to make her appear inebriated.

Demir’s team examines computational deepfakes, which are synthetic forms of content generated by machines. “The reason that it is called deepfake is that there is this complicated deep-learning architecture in generative AI creating all that content,” Demir says.

Cybercriminals and other bad actors often misuse deepfake technology. Common examples include political misinformation, non-consensual adult content featuring celebrities or private individuals, market manipulation, and impersonation for monetary gain. These harms underscore the need for effective deepfake detection methods.

Intel Labs has developed one of the world’s first real-time deepfake detection platforms. Instead of hunting for artifacts of fakery, the technology focuses on detecting what is real, such as a heart rate. Using a technique called photoplethysmography, the detection system analyzes subtle color changes in facial blood vessels caused by blood flow and oxygen content. Those changes are invisible to the eye but computationally visible, allowing the technology to determine whether the person on screen is a real human or a synthetic one.

“We are trying to look at what is real and authentic. Heart rate is one of [the signals],” said Demir. “So when your heart pumps blood, it goes to your veins, and the veins change color because of the oxygen content. It is not visible to our eye; I cannot just look at this video and see your heart rate. But that color change is computationally visible.”
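
To illustrate the kind of signal photoplethysmography picks up, here is a rough sketch in Python: it averages the green channel over a supplied face region in each video frame and estimates the dominant pulse frequency. This is not Intel’s detection pipeline, just the underlying signal idea, and the frame array, frame rate, and frequency band are assumptions for illustration.

```python
# Rough sketch of remote photoplethysmography (rPPG): average the green
# channel over a face region in each frame, then find the dominant pulse
# frequency. Illustrative only; not Intel's detection pipeline.
import numpy as np

def estimate_heart_rate(face_frames, fps=30.0):
    """face_frames: array of shape (num_frames, height, width, 3), RGB."""
    # Blood volume changes modulate skin color slightly; the green channel
    # carries most of that variation.
    signal = face_frames[:, :, :, 1].mean(axis=(1, 2))
    signal = signal - signal.mean()          # remove the DC component

    # Find the strongest frequency in a plausible heart-rate band.
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # roughly 42-240 beats per minute
    peak_freq = freqs[band][np.argmax(spectrum[band])]
    return peak_freq * 60.0                  # beats per minute
```

A detector can then check whether this pulse signal is present and consistent across the face, as it is in genuine video but typically is not in synthetic footage.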

Intel’s deepfake detection technology is being implemented across various sectors and platforms, including social media tools, news agencies, broadcasters, content creation tools, startups, and nonprofits. By integrating the technology into their workflows, these organizations can better identify and mitigate the spread of deepfakes and misinformation.

Despite the potential for misuse, deepfake technology has legitimate applications. One of the early uses was the creation of avatars to better represent individuals in digital environments. Demir refers to a specific use case called “MyFace, MyChoice,” which leverages deepfakes to enhance privacy on online platforms. 

In simple terms, this approach allows individuals to control their appearance in online photos, replacing their face with a “quantifiably dissimilar deepfake” if they want to avoid being recognized. It gives people greater privacy and control over their identity, helping to counteract automatic face-recognition algorithms.
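
One way to make “quantifiably dissimilar” concrete is to compare face-recognition embeddings of the original and replacement faces and require a minimum distance between them. The embedding source and threshold below are placeholder assumptions for illustration, not the published MyFace, MyChoice method.

```python
# Illustrative check that a replacement face is "quantifiably dissimilar"
# from the original: compare face-recognition embeddings and require a
# minimum distance. The threshold is a placeholder assumption.
import numpy as np

def is_dissimilar_enough(original_embedding, replacement_embedding, threshold=1.0):
    """Embeddings are vectors from any face-recognition model (assumed given)."""
    distance = np.linalg.norm(original_embedding - replacement_embedding)
    return distance >= threshold

# If the swapped-in face is still too close to the original identity,
# generate a new candidate until the recognizer can no longer match it.
```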

Ensuring ethical development and implementation of AI technologies is crucial. Intel’s Trusted Media team collaborates with anthropologists, social scientists, and user researchers to evaluate and refine the technology. The company also has a Responsible AI Council, which reviews AI systems against responsible and ethical principles, examining potential biases, limitations, and possible harmful use cases. This multidisciplinary approach helps ensure that AI technologies, like deepfake detection, serve to benefit people rather than cause harm.

“We have legal people, we have social scientists, we have psychologists, and all of them are coming together to pinpoint the limitations to find if there’s bias — algorithmic bias, systematic bias, data bias, any type of bias,” says Demir. The team scans the code to find “any possible use cases of a technology that can harm people.”

As deepfakes become more prevalent and sophisticated, developing and implementing detection technologies to combat misinformation and other harmful consequences is increasingly important. Intel Labs’ real-time deepfake detection technology offers a scalable and effective solution to this growing problem. 

By incorporating ethical considerations and collaborating with experts across various disciplines, Intel is working towards a future where AI technologies are used responsibly and for the betterment of society.
