AI Pulse: Sticker Shock, Rise of the Agents, Rogue AI
Fakeouts at the checkout
Fraud remains front and center among the threat trends we're tracking for AI Pulse. Even without technologies like agentic AI, scammers' capabilities are advancing by leaps and bounds. While it's hard to say exactly where AI is being used without looking over an attacker's shoulder, it's clearly playing a role. The volume and sophistication of phishing and business email compromise (BEC) attacks have both increased, suggesting some kind of capacity boost. Trend Micro's sensor protection network is picking up artifacts that appear to have been created by generative AI, and fake domains and websites also seem to draw on the natural-language and multimodal content creation capabilities of LLMs.
Just a couple of weeks ago, ConsumerAffairs posted an article, complete with a "spot the fake Amazon page" quiz, on how millions of shoppers are being duped into phony transactions on websites that look legitimate. The piece notes the availability of dirt-cheap "phish kits" that criminals can use to launch ready-made scam sites with ease. It also cites a Memcyco study that found four fake Amazon sites in a single scan, adding that in 2023 Amazon spent more than $1.2 billion and had a team of 15,000 people focused on fraud prevention.
10,000+ athletes. 200+ countries. 140+ cyberattacks.
As predicted in our June AI Pulse, this summer's Olympic Games in Paris did indeed see a wave of cyberattacks: more than 140, all told. According to ANSSI, the French cybersecurity agency, the Games themselves weren't affected. Government organizations and sporting, transportation, and telecommunications infrastructure were the main targets; a third of the attacks caused downtime, and half of those were attributed to denial-of-service attacks.
The most notable incident was a ransomware attack on the Grand Palais (which did play a role in hosting the Games) and dozens of other French museums, though the agency said Olympics-related information systems weren't affected.
Swimming in a sea of AI slop and slime
We wrote a fair bit about deepfakes in our first issue of AI Pulse, but they're not the only kind of potentially harmful AI-generated content. The Washington Post ran a mid-August story about the proliferation of images on X created with the AI image generator Grok. While there's nothing inherently malicious about Grok, the article raised concerns about its lack of guardrails, citing posts featuring Nazi imagery.
AI is also responsible for growing volumes of nuisance content known colloquially as "slop": material meant to look human-made that blurs the line between legitimate, valuable content and misleading, time-wasting junk. Worse still is the outright slime of misinformation, which a spring 2024 study found AI chatbots regurgitate far too often and too easily. According to NewsGuard, the 10 leading chatbots it tested repeated false Russian information about a third of the time, raising concerns about sources of truth for perspective-seekers in a high-stakes election year.
What’s next with agentic AI
Please, not another @*#&%*!! chatbot!
Claims that GenAI is failing to deliver on its promise are shortsighted, to say the least, predicated on the mistaken notion that LLMs are somehow the be-all and end-all of artificial intelligence. They're not. If interest has plateaued (as the big players' second-quarter results seem to show), all it proves is that the world doesn't need another chatbot. The real demand is for what's coming next: adaptive problem solving.
That capability won’t just come from building bigger LLMs. It requires a deeper solution engineering approach and the development of compound or composite systems.
Divide and conquer
As the name suggests, composite systems have multiple components that work together to perform higher-order tasks than a single statistical model can manage on its own. The hierarchical version of agentic AI captures some of this concept by engaging and coordinating multiple agents to do different things in pursuit of a common goal.
According to the Berkeley Artificial Intelligence Research (BAIR) team, composite systems are easier to improve, more dynamic than single LLMs, and more flexible in meeting user-specific performance (and cost) goals. They are also arguably more controllable and trustworthy: instead of relying on one source of truth (a lone LLM), composite outputs can be filtered and vetted by other components.
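To make the idea concrete, here is a minimal, hypothetical Python sketch of that pattern: an orchestrator farms sub-tasks out to worker agents, and a separate verifier vets every output before it is accepted. The agent roles, the call_model() stub, and the blocklist are all our own illustrative assumptions, not any real framework or the BAIR team's design.

from dataclasses import dataclass

def call_model(role: str, prompt: str) -> str:
    # Stand-in for a real LLM API call; returns a canned string for illustration.
    return f"[{role}] response to: {prompt}"

@dataclass
class Agent:
    role: str
    def run(self, task: str) -> str:
        return call_model(self.role, task)

class Verifier:
    # A separate component vets outputs rather than trusting a lone LLM.
    BLOCKLIST = ("password", "wire transfer")
    def approve(self, output: str) -> bool:
        return not any(term in output.lower() for term in self.BLOCKLIST)

class Orchestrator:
    # Hierarchical coordinator: splits a goal into sub-tasks, dispatches
    # them to worker agents, and filters each result through the verifier.
    def __init__(self, workers, verifier):
        self.workers = workers
        self.verifier = verifier

    def pursue(self, goal: str):
        subtasks = [f"{goal} (part {i + 1})" for i in range(len(self.workers))]
        accepted = []
        for worker, subtask in zip(self.workers, subtasks):
            output = worker.run(subtask)
            if self.verifier.approve(output):  # vetted, not blindly trusted
                accepted.append(output)
        return accepted

pipeline = Orchestrator(
    workers=[Agent("researcher"), Agent("summarizer")],
    verifier=Verifier(),
)
print(pipeline.pursue("profile current phishing lures"))

The toy logic matters less than the shape: no single model's output is taken at face value on its way to the user.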
The ultimate version of a composite system would be an AI mesh of agents that both interact inside the system itself and "talk" to other agents across organizational boundaries.
Swinging the cloud pendulum
InfoWorld points out that forms of agentic AI already exist in everything from phone-based personal assistants to cars and household environmental controls. As organizations adopt these kinds of technologies, many are adapting their approach to infrastructure, blending on-premises, on-device, and in-cloud AI for flexibility and performance. Agents can be expected to pop up everywhere from wearable devices to laptops and data centers. Creating enclaves of safety and trust in this interconnected ecosystem will demand attention as the AI mesh builds out.
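As a rough illustration of that blended approach (our own hypothetical example, not InfoWorld's), a routing layer might decide per request whether an on-device model or a cloud endpoint handles it, keeping sensitive data on local infrastructure. The function names and policy thresholds below are assumptions made purely for the sketch.

def run_local(prompt: str) -> str:
    # Placeholder for an on-device or on-premises model.
    return f"[local model] handled: {prompt[:40]}"

def run_cloud(prompt: str) -> str:
    # Placeholder for a hosted cloud endpoint.
    return f"[cloud model] handled: {prompt[:40]}"

def route_request(prompt: str, contains_pii: bool, needs_heavy_model: bool) -> str:
    # Keep sensitive data on local infrastructure by default.
    if contains_pii:
        return run_local(prompt)
    # Send only the heavyweight jobs to the cloud.
    if needs_heavy_model or len(prompt) > 2000:
        return run_cloud(prompt)
    return run_local(prompt)

print(route_request("Summarize this patient record ...", contains_pii=True,
                    needs_heavy_model=False))
print(route_request("Draft a 30-page market analysis ...", contains_pii=False,
                    needs_heavy_model=True))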
Caging the bear
Moving to a composite framework in which LLMs and agents communicate and work together will lead to better AI outcomes. But as noted above, keeping this kind of AI safe and secure requires a ‘bigger boat’ or, in the words of AI guru Yoshua Bengio, a ‘better cage for the AI bear’. Basically, this comes down to a question of alignment: is the AI system fulfilling its desired objectives or pursuing undesired ones? (This is complicated by a further question: whose objectives, and which ones are most desirable?)
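One small, deliberately naive piece of such a cage is gating an agent's proposed actions against the objectives we actually gave it. The hard-coded allow-list and made-up action names below are ours alone, and real alignment is an open research problem rather than a lookup table, but the sketch shows where an explicit check can sit.

ALLOWED_ACTIONS = {"read_docs", "draft_report", "send_summary"}

def gate(proposed_action: str) -> bool:
    # Refuse anything outside the objectives the system was actually given.
    return proposed_action in ALLOWED_ACTIONS

for action in ("draft_report", "delete_audit_logs"):
    if gate(action):
        print(f"executing: {action}")
    else:
        print(f"blocked off-objective action: {action}")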
Today there seems to be more push toward the next level of AI reasoning than toward building security into AI. That needs to change, or we will end up with rogue AI models that pursue goals we didn't give them and that aren't in our best interest.