Deepfake-Proofing the President: What Is Cryptographic Verification?

Deepfake audio robocalls impersonating President Joe Biden raised alarm among government officials last month, with an AI version of the head of state instructing voters in New Hampshire not to vote in the presidential primary.

With the rise of deceptive AI deepfakes ahead of the 2024 presidential election, the White House’s AI advisor has signaled efforts to authenticate official government statements using cryptographic methods.

Cryptographic verification is the method White House Special Advisor for AI Ben Buchanan said officials are “trying to explore” to combat deepfakes of the president and other White House officials, as stated in a recent interview with Business Insider.

Buchanan told Business Insider the White House is looking into “essentially cryptographically verifying our own communication so that when people see a video of the President on whitehouse.gov, they know this is a real video of the president and there’s some signature in there that does that.”

The government and voters are not the only targets of malicious deepfakes; AI imitations have been used for years in financial fraud, in schemes that have only grown in volume and sophistication since the generative AI boom.

Data from iProov published last week showed a 704% jump in “face swap” deepfake fraud attempts against identity verification systems, with threat actors stepping up their game using virtual cameras, emulators and free or low-cost deepfake tools.

A finance worker from the Hong Kong branch of a multinational company was also tricked into sending the equivalent of $25 million to fraudsters this year after attending a conference call with multiple deepfakes of his colleagues.

With the growing need for stronger methods to authenticate video, images and audio amidst a sea of AI imitations, could cryptography be the key to deepfake-proofing media?

“The ability to establish and attest to the provenance of media tied to a real-world identity via cryptographic certainty is paramount,” Tim Brown, global identity officer at Prove, told SC Media.

Brown highlighted the Content Authenticity Initiative as another example of the efforts being made to push back against deepfake deception.

“The ability to connect the dots of content creation, coupled with a strongly bound identity, will go a long way to thwarting the flood of deep fake media that we will undoubtedly see over the next 12 months,” Brown said.

What is cryptographic verification and how could it be used to combat deepfakes?

In cryptographic verification, the creator of a piece of content uses a private key to generate a digital signature over that content, and publishes the corresponding public key so that anyone can verify the signature. If the content is altered in any way by a third party, the signature no longer matches: verification against the known public key fails, revealing that the content is not what the creator originally signed.

The use of cryptography for verifying the authenticity and source of digital content is already commonplace in many applications. For example, it is widely used in email security and software distribution, where cryptographic signatures help recipients identify trusted sources and detect content that has been tampered with in transit.

Cryptographic verification has also been used to meet more novel security needs, such as the verification of QR codes used by “vaccine passport” apps during the COVID-19 pandemic.

Although Buchanan provided few details on the White House's plans to use cryptography to stem the flow of AI-driven misinformation, his statement suggests a digital signature, generated with a private key held by the White House, could be embedded in future official video addresses by the president.

This wouldn’t prevent deepfakes from being created, but it could make them easier to detect and debunk: a video lacking a valid signature, or failing verification against the White House’s published public key, could be flagged as inauthentic.

Buchanan noted that this cryptographic solution would be a separate and “longer process” than efforts to encourage providers of generative AI to implement watermarking of AI-generated content. AI watermarking would involve the addition of difficult-to-remove embedded data into AI-generated content, making it more readily identifiable as an AI creation.

SC Media reached out to a White House press contact for more information about the use of cryptography against deepfakes and did not receive a response.

While public/private key cryptography can be a solution for public figures and content creators to verify their media, organizations will also see a need for cryptographic authentication of identity, as deepfake threat actors target businesses with imitations of employees.

“These types of ‘cryptographic engines’ or methods can be used in solutions or processes that can build assurance and trust in the interaction to prove ‘who you are,’” said Mary Ann Miller, fraud and cybercrime executive advisor and VP of client experience at Prove, in an email to SC Media.

“Additionally, the topic of mutual authentication is gaining attention with the advancement of Deep Fakes; companies that create solutions to provide both sides of an interaction confidence will be critical in the future,” Miller concluded.  
