AI Pulse: Top AI Trends from 2024 – A Look Back

Putting people first: New measures in the EU
In March, the EU approved the Artificial Intelligence Act, which aims to protect safety and fundamental human rights while still leaving room for AI innovation. The Act bans specific applications that threaten those rights, such as using biometrics to “categorize” people, building facial recognition databases by scraping the internet or CCTV footage, and using AI for social scoring, predictive policing, or human manipulation.

That was followed in December by the EU’s Cyber Resilience Act, which requires digital product manufacturers, software developers, importers, distributors, and resellers to build in cybersecurity features such as incident management, data protection, and support for updates and patches. Product makers must also remediate any vulnerabilities as soon as they’re identified. Violations can result in substantial fines and other sanctions.

Also in December, the EU updated its Product Liability Directive (PLD) to include software—unlike other jurisdictions such as the U.S. that don’t see software as a ‘product’. This makes software companies liable for damages if their solutions are found to contain defects that cause harm, including, by implication, AI models.

Born in the USA: AI regulation stateside
The back half of the year was busy at the federal level in the U.S., with the White House issuing its very first National Security Memorandum on AI in October. The memo called for “concrete and impactful steps” to:

  1. Ensure U.S. leadership in the development of safe, trustworthy AI
  2. Advance U.S. national security with AI
  3. Drive international agreements on AI use and governance

In November, the National Institute of Standards and Technology (NIST) formed the Testing Risks of AI for National Security (TRAINS) Taskforce to address AI’s national security and public safety implications. TRAINS members represent the Departments of Defense, Energy, and Homeland Security as well as the National Institutes of Health, and will facilitate coordinated assessment and testing of AI models in areas of national security concern such as radiological, nuclear, chemical, and biological security, cybersecurity, and more.

Also in November, the Departments of Commerce and State co-convened the International Network of AI Safety Institutes for the first time, focusing on synthetic content risks, foundation model testing, and advanced AI risk assessment.

Across the equator: AI regs in Latin America
Most Latin American countries have taken steps to deal with AI risks while embracing its potential. According to White & Case, Brazil and Chile are among those with the most detailed proposals, while others, such as Argentina and Mexico, have approached the issue more broadly. Some are focused on mitigating risks, either through prohibitions or regulatory constraints, while others see opportunity in taking a freer approach that invites innovation and international investment.

Know Thy Enemy: AI and Cyber Risk

To regulate AI, it’s important to know what the risks actually are. In 2024, OWASP, MIT, and others dedicated themselves to the task of identifying and detailing AI vulnerabilities.

OWASP’s LLM chart-toppers
The Open Worldwide Application Security Project (OWASP) unveiled its Top 10 for LLM Applications 2025 list. Back again are old chestnuts like prompt injection risks, supply chain risks, and improper output handling. New additions include vector and embedding weaknesses, misinformation, and unbounded consumption (an expansion of the previous denial-of-service risk category).
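
To make the “improper output handling” category concrete, the sketch below (a minimal, hypothetical Python example, not taken from OWASP) treats model output as untrusted input: it is escaped before being rendered as HTML and validated against an assumed schema before being handed to downstream code.

```python
import html
import json

def render_llm_reply(llm_output: str) -> str:
    """Escape model output before embedding it in HTML.

    Without escaping, a prompt-injected reply containing
    "<script>...</script>" could execute in the user's browser (XSS).
    """
    return f"<div class='reply'>{html.escape(llm_output)}</div>"

def parse_llm_json(llm_output: str) -> dict:
    """Validate structured output instead of passing it blindly downstream."""
    data = json.loads(llm_output)              # raises ValueError on malformed output
    allowed_keys = {"summary", "sentiment"}    # hypothetical schema for this sketch
    unexpected = set(data) - allowed_keys
    if unexpected:
        raise ValueError(f"unexpected fields in model output: {unexpected}")
    return data

if __name__ == "__main__":
    # A reply that tries to smuggle in markup is rendered inert.
    print(render_llm_reply('Thanks! <script>alert("pwned")</script>'))
```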

OWASP expanded its concerns about ‘excessive agency’ largely because of the rise in semi-autonomous agentic architectures. As OWASP puts it, “With LLMs acting as agents or in plug-in settings, unchecked permissions can lead to unintended or risky actions, making this entry more critical than ever.”
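
One way to picture the “excessive agency” concern is a permission gate in front of an agent’s tool calls. The sketch below is a hypothetical illustration (the tool names and policy fields are assumptions, not an OWASP or vendor API): a model-proposed action outside the agent’s explicit allow-list is refused rather than executed.

```python
from dataclasses import dataclass, field

@dataclass
class AgentPolicy:
    """Explicit, minimal permissions granted to an LLM agent (hypothetical example)."""
    allowed_tools: set[str] = field(default_factory=set)
    require_approval: set[str] = field(default_factory=set)  # tools needing human sign-off

def execute_tool_call(policy: AgentPolicy, tool: str, args: dict,
                      human_approved: bool = False) -> str:
    """Run a model-proposed tool call only if the policy permits it."""
    if tool not in policy.allowed_tools:
        raise PermissionError(f"agent is not permitted to call '{tool}'")
    if tool in policy.require_approval and not human_approved:
        raise PermissionError(f"'{tool}' requires human approval before execution")
    # Dispatch table of hypothetical tools; a real agent would invoke actual functions here.
    tools = {
        "search_docs": lambda a: f"searched for {a.get('query')!r}",
        "send_email":  lambda a: f"sent email to {a.get('to')!r}",
    }
    return tools[tool](args)

if __name__ == "__main__":
    policy = AgentPolicy(allowed_tools={"search_docs", "send_email"},
                         require_approval={"send_email"})
    print(execute_tool_call(policy, "search_docs", {"query": "CRA deadlines"}))
    try:
        execute_tool_call(policy, "send_email", {"to": "ceo@example.com"})
    except PermissionError as err:
        print("blocked:", err)
```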
