Artificial Intelligence Takes RSA Conference By Storm
Artificial intelligence is poised to fuel cybersecurity advances, radically reduce threat response times, tackle identity risks and augment security teams with so-called “SOC helpers” that identify malicious activity at superhuman speed, according to speakers opening RSA Conference 2023.
Speaking to attendees at the Moscone Center during his opening remarks, Rohit Ghai, chief executive officer of RSA Security, said “every new technology wave is bigger, faster and more disruptive than all previous ones. This time is no different.”
For Ghai, AI is a hardening element for existing identity management technologies such as zero trust and credential management, and something that will increasingly augment existing automation technologies with advanced capabilities.
“Without good AI, zero trust has zero chance,” Ghai said.
He pointed to the emergence of a next-generation identity access management platform powered by AI. This identity fabric, as Gartner is already calling it, will soon be able to pool capabilities from multiple vendors, creating one seamless solution, according to Ghai.
“The identity and access management platform is outdated,” Ghai said. “Access management and identity management are table stakes features. … Today, the core purpose of an identity platform is security. In the AI era, we need an identity security fabric.”
AI, he argued, will underpin the vast amount of heavy lifting needed to weave together the identity fabric to combat a growing number of AI-powered threats facing businesses.
AI invades RSAC 2023
Ghai said AI is here and the cybersecurity community must harness its power. This week, he said, nearly a dozen major vendors, including RSA Security, along with 50-plus startups, have announced AI-powered cybersecurity products. One of those products was Google Cloud Security AI Workbench, introduced Monday.
Ghai added that the industry can expect further disruption tied to AI as it permeates cybersecurity products and solutions.
“Over time, we must expect that many jobs will disappear, many will change and some will be created,” he said.
The silver lining to that job loss, Ghai said, is that there isn’t enough human talent in cybersecurity in the first place. One of the tasks AI could potentially tackle is the often menial and time-consuming work of investigating false positives.
Much to the pleasure of the crowd of cybersecurity professionals, Ghai continued, “[human cybersecurity professionals] will still have an important role training, supervising, regulating ethics and monitoring AI for [tasks such as] air traffic control and designing flight plans. AI learns from the questions we ask. We will train AI by asking well-specified, thoughtful questions. We will invent new AI models and algorithms.”
Jeetu Patel, executive vice president and general manager of security and collaboration at Cisco, envisioned AI as a conductor of sorts, weaving different cybersecurity technologies into one harmonious symphony.
“We can’t just rely on isolated sets of defenses when what we need to have is a coordinated set of defenses,” Patel said. Citing the MITRE ATT&CK framework as an example, he noted that attacks don’t rely on a single weak link in the attack surface, but rather on an exploit chain of weaknesses spanning email, malicious websites and malware downloads.
“[We have] extremely capable vendors in the market that do a great job at [protecting] each one of these domains. But what they don’t do is create harmony,” he said. “It’s just uncoordinated telemetry that creates more noise in the market. So, the question we have to ask ourselves is, what does the world need?”
He argued that the industry needs a set of security defenses that are completely coordinated and synchronized. And what might help turn the cacophony into harmony? You guessed it: AI.
Is AI coming for your cybersecurity job?
Joining Patel on stage was Tom Gillis, senior vice president and general manager of Cisco’s security business group. He introduced the concept of an AI SOC helper to augment the SOC analyst’s job.
“The security operation center detects and responds to threats in real time. When an incident begins, time is critical, and SOC analysts are often swamped with rapid-fire information and tasks. To help with this, let’s imagine that our SOC analysts can rely on an AI security system which we’ll call AMES,” Gillis said.
“The beauty about this concept is that when you use AI to augment humans in the job, there’s amazing possibilities,” he said. For starters, AI can shorten the investigation of false positives, identify malicious emails and find holes in a network attack surface. He said a demo in which artificial intelligence was layered on top of both a SIEM and an XDR platform represents a not-so-distant reality.
Next, Gillis presented a video of an AI SOC bot culling through a tsunami of potentially malicious network telemetry and responding to queries in a computer voice eerily similar to HAL 9000. The AI handled natural language queries and responses based on real-time analysis of network activity.
Patel said that generative AI models are already being trained on specific domains to spot anomalies, alert security teams and autonomously mitigate attacks or vulnerabilities using AI rules.
He pointed out that the key to this AI revolution is a friendly user interface, attributing the success of OpenAI’s ChatGPT to how easy it is to use and derive value from. OpenAI’s user base ballooned to 100 million users in 60 days, dwarfing early growth at Facebook and Instagram, which each took years to reach that milestone.
The flip side of what RSA’s Ghai called “Good AI” bots is the existence of “Bad AI” bots.
“Cyberthreat actors will use AI tools in sophisticated phishing campaigns and create malicious GPTs to compromise identity. Cybersecurity professionals will need to leverage AI to neutralize these threats,” Ghai said.
“Without Good AI, Bad AI will be used by the same cyberthreat actors that have been using automation for years,” he said. “AI will be used to launch very sophisticated social engineering campaigns. Their phishing campaigns are already becoming more emotionally manipulative and compelling, using seductive language without any grammatical errors. They are defeating MFA. We need Good AI on our side to sniff out these increasingly sophisticated campaigns launched by Bad AI.”