Attackers Turn To AI-Generated YouTube Videos To Spread Info Stealers
Looking to take advantage of the belief that people generally trust human faces, threat actors are integrating AI-generated personas into YouTube videos to deliver stealer malware and launch phishing campaigns.
In a Monday blog post, CloudSEK reported observing a 200% to 300% month-on-month increase in the number of YouTube videos spreading stealer malware such as Vidar, RedLine and Raccoon since November 2022. The threat actors target the online video-sharing platform because of YouTube’s roughly 2.5 billion active monthly users.
Researchers for the contextual AI company said the videos lure users by pretending to be tutorials on how to download cracked versions of paid software such as Photoshop, Premiere Pro, Autodesk 3ds Max and AutoCAD. The threat actors use previous data leaks, phishing techniques and stealer logs to take over existing YouTube accounts.
The threat actors tend to target popular accounts with more than 100,000 subscribers so they can reach a large audience in a short time. Subscribers to popular accounts are usually notified about a new upload, a tactic that lends the videos an air of legitimacy. Many YouTubers report the unusual activity to YouTube and regain access to their accounts within a few hours. The bad news: in those few hours, hundreds of users could have fallen prey.
Matthew Fulmer, manager of cyber intelligence engineering at Deep Instinct, compared the campaign to an advanced phishing operation that is very broad in scope.
“The use of AI to include digitally generated humans is an interesting touch, especially if they are generating them based on the generally accepted symmetry which makes people find them ‘attractive,’ and thus ‘soothing’ or ‘trusting,’” said Fulmer. “Threat actors can leverage this heavily and the tools they are now starting to leverage should worry security teams as the threat actors are advancing far more rapidly than most of the security teams.”
Timothy Morris, chief security advisor at Tanium, added that AI’s ability to create believable content via video, audio and text has upped the malware game, giving attackers better lures to attract and hook victims.
Morris said the videos with AI are just another info-stealer technique used to deceive and manipulate people by exploiting their appetite for a deal, or to get something for nothing.
“Remember that nothing is free,” said Morris. “If you’re being asked to download software, an extension/plug-in, or share data in hopes of receiving something of value, it’s you that has become the product and your data is the commodity.
“Sharing too much information exposes oneself to spearphishing and ransomware,” Morris continued. “And with the blurred lines between work and home, some info stealers are even geared towards siphoning Wi-Fi router data.”