Black Hat USA Keynote: In AI Do Not Trust
LAS VEGAS — Tired of the hype around artificial intelligence? Even keynote speakers here at Black Hat USA gave a nod to AI-topic fatigue. But they also warned the hype is warranted, given that today’s “AI language models are like troubled teenagers” that need supervision.
The observations came from Maria Markstedter, founder of Azeria Labs. The noted reverse engineering and exploitation expert kicked off Black Hat with an opening keynote on the promise and perils of AI. Top of her list was what she described as a likely emerging army of autonomous AI bots, based on next-generation AI in rapid development today.
“We are involved in a corporate [AI] arms race, not driven by security and safety,” Markstedter said. “The technology world follows the credo ‘move fast break $%it.’ That’s been the model.” AI will follow the same trajectory as other first-generation technologies such as the first iPhone model — insecure and with an emergent patching process.
“Do you remember the first version of the iPhone? It was so insecure — everything was running as root. It was riddled with critical bugs. It lacked exploit mitigations or sandboxing,” she said. “That didn’t stop us from pushing out the functionality and for businesses to become part of that ecosystem.”
Markstedter said that today’s AI — dominated by the likes of OpenAI — is roughly equivalent to that first-generation tech. She called today’s generative AI unimodal, in that its output (or answers) is derived solely from text-based language models.
“AI is limited in its capability because it can only analyze one input at a time. It’s still pretty useful. But the concept of multimodal AI has been around for two years, and this year it is starting to take off,” she said.
A multimodal AI model pulls data from a bevy of sources, including text, audio and visual inputs. She warned that the more data sources an AI system pulls from, the greater the risk that one of those sources can be corrupted to skew the outcome toward a malicious actor’s intent. If that weren’t problematic enough, Markstedter warned that a cottage industry has sprung up offering machine learning as a service.
“Companies want control over their own data. And they want to integrate these model capabilities into their own products and services and deploy them within their infrastructure,” she said.
The ultimate goal, and where Markstedter said she envisioned the next big AI push, is autonomous AI agents that are able to process multimodal data inputs to generate consequential outcomes.
“Experimental AI agents are starting to pop up. We can already see people fumbling around with these agents. It’s all fun and games, but these AI agents are turning into real business-use cases,” she said.
Where is this headed? AI agents will live in two worlds: one, a business’s proprietary, well-curated language model; the second, the public internet, where AI agents will need to pull real-time or third-party data to execute nondeterministic business outcomes.
“How do we know if these outcomes can be trusted?” she asked.
She challenged the cybersecurity community to develop decompiling and reverse-engineering tools for AI similar to Ghidra, IDA and Hex-Rays.
“We’re entering a world where we need to threat model an autonomous system having access to our business data apps with the authorization to perform nondeterministic actions,” she said. “We could argue that these agents should and will stay subservient to humans. But the truth is that the usefulness of the systems will be bound to the ability to be autonomous.”
Wrestling with the security challenges tied to this future will force a wholesale reexamination of access management and data security.
“So where do we go from here? What I want you to take away from this is that AI systems and their use cases and capabilities are becoming more powerful. Second, we need to take the possibility of autonomous AI agents becoming a reality within our enterprise seriously. We need to rethink our concepts of identity access management in a world of truly autonomous systems having access to our apps.”
She closed out her keynote by reframing the adage “change is the only constant in life.”
“It’s been a few years since a technology has disrupted our current state of security as much as this one. So, we need to learn about the very technology that’s changing our systems and our threat models in order to address these emerging problems. Technological changes are new, but to us tech is always evolving. That part of security isn’t new.”
As part of the Black Hat USA opening keynote Perri Adams, program manager for DARPA’s Information Innovation Office, made a surprise announcement of an AI Cyber Challenge (AIxCC), a two-year competition that aims to drive AI innovation and create a new generation of AI-based cybersecurity tools. See our previous Black Hat coverage of the DARPA news.