Uncle Sam warns deepfakes are coming for your brand and bank account

Deepfakes are coming for your brand, bank accounts, and corporate IP, according to a warning from US law enforcement and cyber agencies.

In a report published on Tuesday, the NSA, the FBI, and the government’s Cybersecurity and Infrastructure Security Agency (CISA) warned that “synthetic media” poses a growing threat.

That is to say, criminals and spies are expected to use AI-generated material to gain access to systems by impersonating staff, or to hoodwink customers. Think: someone using machine-learning tools to pretend to be a CFO to transfer money out of a business account, a CTO to make an IT support worker grant an intruder or rogue user admin privileges, or a CEO to tell customers to dump their products.

The Feds note targets for these kinds of shenanigans specifically include military personnel, government employees, first responders using national security systems, defense industrial base firms, and critical infrastructure owners and operators.

And “synthetic media” is just what it sounds like — fake information and communications spanning text, video, audio, and images. 

As technology improves, it’s getting more difficult to tell the real deal from deepfake media that uses artificial intelligence and machine learning to produce highly realistic, believable messages and content. 

“The most substantial threats from the abuse of synthetic media include techniques that threaten an organization’s brand, impersonate leaders and financial officers, and use fraudulent communications to enable access to an organization’s networks, communications, and sensitive information,” Uncle Sam warned in a Cybersecurity Information Sheet [PDF].

While the Feds say there’s only “limited indication” that state-sponsored criminals are using deepfakes, they caution that the increasing availability of free deep-learning tools makes it easier and cheaper to mass-produce fake media.

To this point, the government agencies cite the Eurasia Group’s list of top risks for 2023, which puts generative AI in the No. 3 spot. It’s a chilling read: “Resulting technological advances in artificial intelligence (AI) will erode social trust, empower demagogues and authoritarians, and disrupt businesses and markets.”

The US government’s concerns about synthetic media also include disinformation operations designed to spread false information about political, social, military, and economic issues, sowing unrest and uncertainty.

We’ve seen examples of this already this year, both in America and abroad. 

In May, a fake image of an explosion near the Pentagon went viral after being shared by multiple verified Twitter accounts. In addition to causing general confusion, the AI-generated photo also prompted a brief dip in the stock market.

A month later, several Russian TV channels and radio stations were compromised and aired a deepfake video of Russian President Vladimir Putin declaring martial law. Of course, phony images and social media posts are also favored by Putin’s goons.

Criminals are also increasingly using fake media in attempts to defraud organizations for financial gain, according to the alert. As noted above, these schemes typically combine social engineering with manipulated audio, video, or text to trick employees into transferring funds to attacker-controlled bank accounts.

Beware of CEOs asking for money

The FBI and friends cite two examples from May. In one, miscreants used synthetic audio and video to impersonate a company’s CEO, calling a product line manager over WhatsApp.

“The voice sounded like the CEO and the image and background used likely matched an existing image from several years before and the home background belonging to the CEO,” the deepfake threat report says.

In another example, also from May, criminals used a combo of fake audio, video and text messages to impersonate a company exec, first over WhatsApp and then moving to a Teams meeting that appeared to show the executive in their office. “The connection was very poor, so the actor recommended switching to text and proceeded to urge the target to wire them money,” the Feds wrote. “The target became very suspicious and terminated the communication at this point.”

The Cybersecurity Information Sheet also includes several recommendations for spotting deepfakes and avoiding falling victim to these schemes. Safeguards include using deepfake detection and real-time verification technologies, and taking preventative measures such as making a copy of a piece of media and hashing both the original and the copy, so the copy’s integrity can be verified later.
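For the hashing step, here’s a minimal sketch in Python using SHA-256 from the standard library. The file names are hypothetical, and in practice the original’s digest would need to be stored somewhere tamper-evident at capture time:

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical file names. Hash the original when it is created and store
# the digest somewhere tamper-evident; re-hash any copy before trusting it.
original_digest = sha256_of("ceo_statement_original.mp4")
copy_digest = sha256_of("ceo_statement_copy.mp4")

if copy_digest == original_digest:
    print("Copy matches the original recording")
else:
    print("Mismatch: this copy has been altered or is a different file")
```

The point of the exercise is simple: if a suspicious video of your CEO starts circulating, a stored hash of the genuine recording gives you a fast, cryptographic way to show what the real thing looks like.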

As always, verify the source and make sure that the message or media is coming from a reputable — and real — organization or person.

It’s also a good idea to have a plan in place to respond to and minimize potential damage caused by deepfakes. Create an incident response plan that details how security and other teams should respond to a variety of these techniques, and then run tabletop exercises to rehearse the plan. ®
