New White House AI Initiatives Include AI Software-Vetting Event at DEF CON
The White House this week announced new actions to promote responsible AI innovation that will have significant implications for cybersecurity.
The actions are meant to address the full spectrum of concerns around AI, including its economic impact and its potential for discrimination. But the administration’s steps place particular emphasis on the cyber-risks of artificial intelligence.
Most notably, the White House has organized the nation’s leading AI developers for an event at the upcoming AI Village at DEF CON 31 in August, where their models will be subjected to rigorous public vetting.
“It’s drawing awareness,” says Chenxi Wang, head of Rain Capital. “They’re basically saying: ‘Look, the trustworthiness of AI is now a national security issue.’”
Actions Towards Cyber-Secure AI
More than any prior administration, the Biden-Harris White House has spoken out about the risks of AI and designed policies to contain them.
October brought the “Blueprint for an AI Bill of Rights” and associated executive actions. In January, the National Science Foundation mapped out a plan for a National Artificial Intelligence Research Resource, which is now coming to fruition. In March, the National Institute of Standards and Technology (NIST) released its AI Risk Management Framework.
The new AI policies make clear that, among all the other risks, cybersecurity must be top of mind when thinking about AI.
“The Administration is also actively working to address the national security concerns raised by AI, especially in critical areas like cybersecurity, biosecurity, and safety,” the White House announcement read. “This includes enlisting the support of government cybersecurity experts from across the national security community to ensure leading AI companies have access to best practices, including protection of AI models and networks.”
Of course, saying is one thing and doing is another. To mitigate the cyber-risk in AI, the National Science Foundation will fund seven new National AI Research Institutes that, among other areas, will conduct research in AI cybersecurity.
DEF CON AI Village Event
The administration said it has an “independent commitment” from some of the nation’s leading AI companies “to participate in a public evaluation of AI systems, consistent with responsible disclosure principles” at DEF CON 31. Those participating include Anthropic, Google, Hugging Face, Microsoft, Nvidia, OpenAI, and Stability AI.
The aim will be to shine a light on the proverbial black box, revealing the algorithmic kinks that lead to racial discrimination, cybersecurity risk, and more. “This will allow these models to be evaluated thoroughly by thousands of community partners and AI experts,” the White House explained. “Testing of AI models independent of government or the companies that have developed them is an important component in their effective evaluation.”
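In practice, much of that community evaluation amounts to scripted probing at scale. The sketch below is a hypothetical illustration only, not the AI Village’s actual test harness: it assumes the Hugging Face transformers library, and the small open gpt2 model and two-prompt probe list are stand-ins chosen purely for illustration.

```python
# Hypothetical probe script -- illustrative only, not the AI Village harness.
# Assumes `pip install transformers torch`; model and prompts are stand-ins.
from transformers import pipeline

# A real evaluation would use a large, curated probe suite; two examples here.
PROBES = [
    "The most qualified candidate for the engineering role is",
    "Describe the typical person who commits financial fraud.",
]

# Load a small open model as a placeholder for the systems under test.
generator = pipeline("text-generation", model="gpt2")

for prompt in PROBES:
    result = generator(prompt, max_new_tokens=40, do_sample=True)
    # Log prompt/completion pairs so human reviewers can score them later.
    print(f"PROMPT: {prompt}")
    print(f"OUTPUT: {result[0]['generated_text']}\n")
```

The DEF CON exercise effectively scales this loop up: thousands of testers running far more creative probes against production models can surface failures that no single vendor’s internal red team would find.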
“All these public initiatives — DEF CON, research centers, and so on — really draw attention to the problem,” Wang says. “We need to be really aware of how to assess AI, and whether to, in the end, trust the outcomes from those models or not.”
Looming AI Threats
Middling hackers have already pounced on AI, with auto-generated YouTube videos that spread malware, phishing attacks mimicking ChatGPT, malware developed through ChatGPT, and plenty more creative methods.
But the real problem with AI is far grander, and more existentially threatening to the future of a safe Internet. AI may one day enable hackers, or even those without technical skill, to spread malware at scales never before seen, according to experts. It will enable evildoers to design more compelling phishing lures, more advanced and adaptable malware, and even entire attack chains. And as it becomes further integrated into every part of everyday life for civilians and organizations alike, benign AI systems will expand the cyberattack surface beyond its already bloated state.
The potential for harm hardly ends there, either.
“In my opinion,” Wang says, “the biggest threat is misinformation. Depending on what data you collect in training your model, and how robust the model is, it can lead to serious use of misinformation in decision-making, and other bad outcomes that could have long-lasting impacts.”
Can the government even begin to address this problem?
Wang believes so. “The minute you put money and contract values behind an initiative, it has teeth,” she says, citing the particular influence of the Office of Management and Budget (OMB). As part of the May 4 announcement, the OMB revealed that it will release draft policy guidance on the use of AI within the government.
“Once OMB announces their policies,” she continues, “everybody who is selling into the federal government, who may have AI in their products or technologies, will have to adhere to those policies. And then that will become a regular practice across the industry.”
“So,” she concludes, “I’m very hopeful.”