Security Pressures Mount Around AI’s Promises & Peril
BLACK HAT USA – Las Vegas – Friday, Aug. 11 – Artificial intelligence (AI) is not a newcomer to the tech world, but as ChatGPT and similar offerings push it beyond lab environments and use cases like Siri, Maria ‘Azeria’ Markstedter, founder and CEO of Azeria Labs, said security practitioners need to be on alert for how its evolution will affect their daily realities.
She joked that AI is “now in safe hands of big technology companies racing against time to compete to be safe from elimination,” in the wake of OpenAI releasing its ChatGPT model while other companies held back. “With the rise of ChatGPT, Google’s peacetime approach was over and everyone jumped in,” she said, speaking from the keynote stage at Black Hat USA this week.
Seeing Where the Money Is Going
Companies are investing millions of dollars into AI, but whenever the world shifts toward a new type of technology, Markstedter noted, “corporate arms races are not driven by concern for safety or security, as security slows down progress.”
She said the use cases for integrating AI are evolving, and it is starting to make a lot of money, especially for those who dominate the market. However, there is a need for “creators to break it, and fix it, and ultimately prevent the technology in its upcoming use cases to blow up in our faces.”
She added that companies may be experiencing a bit of irrational exuberance. “Every business wants to be an AI business sample machine right now and the way that our businesses are going to leverage these tools to integrate AI will have significant impact on our threat model,” she said. However, the rapid adoption of AI means that its effect on the entire cyber-threat model remains an unknown.
Rise of ChatGPT Threats
Acknowledging that ChatGPT was “pretty hard to escape over the last nine months,” Markstedter said the skyrocketing increase in users led some companies to limit access to it. Enterprises were skeptical, she said, as OpenAI is a black box, and anything fed to ChatGPT becomes part of the OpenAI data set.
She said: “Companies don’t want to leak their sensitive data to an external provider, so they started banning employees from using ChatGPT for work, but every business still wants to, and is even pressured to, augment their workforce, products, and services with AI; they just don’t trust sensitive data to … external providers that can make part of the data set.”
However, the intense focus on AI and the fast pace at which OpenAI’s models are being developed and integrated will force security practitioners to evolve quickly.
“So, the way our organizations are going to use these things is changing pretty quickly: from something you check with for the browser, to something businesses integrate into their own infrastructure, to something that will soon be native to our operating system and mobile device,” she said.
The Opportunity for Industry
Markstedter said the biggest problem for AI and cybersecurity is that we don’t have enough people with the skills and knowledge to assess these systems and create the guardrails that we need. “So there are already new job flavors emerging out of these little challenges,” she said.
Concluding, Markstedter highlighted four takeaways: first, that AI systems and their use cases and capabilities are evolving; second, that we need to take seriously the possibility of autonomous AI agents becoming a reality within our enterprises; third, that we need to rethink our concepts around identity and apps; and fourth, that we need to rethink our concepts around data security.
“So we need to learn about the very technology that’s changing our systems and our threat model in order to address these emerging problems, and technological changes aren’t new to us,” she said. “We have no manuals to tell us how to fix our previous problems. We are all self-taught in one way or another, and now our industry attracts creative minds with an entire mindset. So we know how to study new systems and find creative ways to break them.”
She closed by saying that this is our chance to reinvent ourselves, our security posture, and our defenses. “For the next danger of security challenges, we need to come together as a community and foster research into these areas,” she said.