AI Pulse: The Good from AI and the Promise of Agentic

What Good is AI?

As a cybersecurity company, we at Trend Micro spend a lot of time in the ‘risk zone’: anticipating threats and figuring out how to neutralize them. But we appreciate it’s also important to take a step back and consider the tremendous power of AI to bring positive change to humanity.

This issue of AI Pulse looks at some of the collective action being taken to make AI safe and beneficial, including by strengthening cybersecurity through AI-enhanced vulnerability hunting and threat detection. And we consider how agentic AI will usher in whole new possibilities for making people’s lives better as our civilization continues down the road toward artificial general intelligence (AGI).

What’s New: AI as a Force for Good

In France, you can’t spell peace—“paix”—without AI
The Paris Peace Forum sees peace as more than just the absence of war: it’s a cooperative approach to tackling challenges such as climate change, migration, and cyber threats—big issues that affect all countries and have no regard for national borders. This year’s Forum on November 11–12 made it clear that cyber threats (and opportunities) increasingly involve AI. Sessions across the two days covered everything from responsible AI governance and bridging the AI divide to disinformation and women in AI.

Trend was there and has announced plans to partner with the Forum on developing guidance for secure AI adoption and implementation. Trend will also take part in the upcoming February AI Action Summit led by French President Emmanuel Macron, sharing insights into AI threats and how to meet them with advanced cybersecurity tools and practices.

Is agentic AI good for your health?
According to the MIT Technology Review, it very well could be. The publication recently ran a story about the potential of agent-driven AI to unlock breakthroughs in biology. The piece was notable for a couple of reasons. First, it asserted that AI agents’ unstructured, ‘generalist’ problem-solving approach stands to penetrate the complexity that has been a longtime barrier for human researchers. Second, the piece itself came from Owkin, which brands itself as “the world’s first end-to-end AI biotech.”

The dual ability of AI to drive discovery and contribute to high-quality scientific communication has many observers excited. Of course, there are risks to manage: The Verge recently reported on inappropriate hallucinated passages cropping up in AI-generated medical transcripts; and Physics World cited a study that found certain computer-aided diagnoses were less accurate for some patients than others due to biased datasets. But in the balance, the promise of agentic AI to improve lives and boost general scientific literacy seems like a net positive and a good use of transformative technology.

Gossip on the AI grapevine
Every leap forward amplifies AI’s capacity to do good, and the rumor mill is churning right now with hints of new releases from three major players. OpenAI CEO Sam Altman has denied it, but word on the street is that the company will launch a platform—possibly GPT-5, currently codenamed ‘Orion’—as soon as December. An exec fueling the rumors has teased that the new platform could be 100 times more powerful than GPT-4. According to The Verge, OpenAI’s goal is to get closer to artificial general intelligence (AGI) by combining its various LLMs (though some sources say the company has already achieved AGI internally).

Anthropic is working on the next iteration of its platform, Claude 3.5 Opus. CEO Dario Amodei is being similarly coy about a release date, but observers expect to see something before the turn of the year. No specific enhancements have been announced, but “faster, better, smarter” seems a fair expectation, along with further proof of Anthropic’s commitment to ethical AI development.

Google is also said to be preparing the launch of Gemini 2.0 before the end of the year. Again, details are scant, though TechRadar suggests “smarter answers, faster processing, support for longer inputs, and more reliable reasoning and coding” are likely. A little farther on the horizon is the next release of Meta’s Llama platform. A company spokesman has said Llama 4 can be expected early in the new year with enhancements that approach autonomous machine intelligence (AMI)—“perception, reasoning, and planning” via techniques such as chain-of-thought. It’s clear from these rumors that competition remains fierce, with AGI the prize everyone is ultimately after.

To be open or not open, that is the question
As the AI majors labor away on their new models, debate continues to swirl around the pros and cons of open-source AI. The con side worries that open-source AI could easily be misused—a concern China’s co-opting of Llama 2 for military use suggests is genuine. Those in the pro camp cite the success of open-source software in domains such as internet infrastructure and music streaming, arguing that openness tends to produce stronger, more trustworthy solutions. A recent Economist opinion piece applauded the at least partial openness of today’s major AI models and urged governments to encourage open-source development through regulation and IP laws.

Trends in AI-enabled Cybersecurity
