UN sets up advisory team to coordinate ‘inclusive’ AI governance
The United Nations (UN) has set up an advisory team to look at how artificial intelligence (AI) should be governed to mitigate potential risks, with a pledge to adopt a "globally inclusive" approach. The move comes amid new research suggesting consumers trust businesses neither to adopt generative AI responsibly nor to abide by regulations governing its use.
The new AI advisory body is multidisciplinary and will address issues regarding the international governance of AI, said UN Secretary-General António Guterres.
The body currently comprises 39 members drawn from government agencies, private organizations, and academia, including the Singapore government's chief AI officer, Spain's secretary of state for digitalization and AI, Sony Group's CTO, OpenAI's CTO, the international policy director at Stanford University's Cyber Policy Center, and a professor at the China University of Political Science and Law's Institute of Data Law.
With the emergence of applications such as chatbots, voice cloning, and image generators during the past year, AI has demonstrated its ability to create significant possibilities, as well as potential dangers, Guterres noted.
“From predicting and addressing crises, to rolling out public health programs and education services, AI could scale up and amplify the work of governments, civil society, and the UN across the board. For developing economies, AI offers the possibility of leapfrogging outdated technologies and bringing services directly to people who need them most,” he said.
He added that AI also could help drive climate action and efforts to achieve the UN's 17 Sustainable Development Goals by 2030.
“But, all this depends on AI technologies being harnessed responsibly and made accessible to all, including the developing countries that need them most,” he said. “As things stand, AI expertise is concentrated in a handful of companies and countries. This could deepen global inequalities and turn digital divides into chasms.”
Pointing to concerns over misinformation and disinformation, Guterres said AI potentially could further entrench bias and discrimination, and enable surveillance, privacy invasion, fraud, and other violations of human rights.
The new UN advisory body, therefore, is needed to drive discussions on AI governance and how the associated risks can be contained. It will also assess how the various AI governance initiatives already underway can be integrated, he said, adding that the body will be guided by the values outlined in the UN Charter and a commitment to inclusivity.
Preliminary recommendations will be ready by year-end in three areas, he noted: the international governance of AI, a shared understanding of its risks and challenges, and ways to harness AI to accelerate the delivery of the sustainability goals.
Consumers lack trust in business AI adoption
There are questions, however, about whether rules governing the use of AI will be observed, even if they are deemed necessary.
Some 56% of consumers do not trust businesses to follow generative AI regulations, according to survey findings from tech consultancy Thoughtworks. The study polled 10,000 respondents across 10 markets, including Australia, Singapore, India, the UK, the US, and Germany. Each market had 1,000 respondents, all of whom were aware of generative AI.
Consumers' lack of trust in business compliance is apparent even though 90% believe government regulations are necessary to hold organizations accountable for how they apply AI.
Some 93% of consumers are worried about the ethical use of generative AI, with 71% expressing concerns that businesses will use their data without consent. Another 67% are anxious about risks related to misinformation.
Asked if they would buy from companies that used generative AI, 42% of consumers said they would be more likely to do so, while 18% said they would be less inclined.
Among those more likely to purchase from generative AI adopters, 59% believe businesses can tap the technology to drive greater innovation, and 51% expect a better customer experience, including faster support, from companies that do so.
Some 64% of consumers point to the lack of human touch as a reason they are less likely to purchase from businesses that use generative AI, while 48% cite data privacy concerns.
Across the board, 91% of consumers express concerns about data privacy, in particular, around how their information is used, accessed, and shared.
"Consumers are savvy enough to recognize the potential for misuse of the technology, which could include privacy infringements, intellectual property infringements, job losses, or deteriorating customer experiences," said Mike Mason, Thoughtworks' chief AI officer.
“At the heart of those fears is a concern enterprises won’t be transparent about their use of generative AI technology,” Mason said.
"For some consumers, government regulation is seen as the best means of mitigating against unscrupulous use of generative AI, but government regulation has inherent problems: too often we've seen regulators struggle to keep pace with technology."
Rather than depend on regulations, Mason urged businesses to lead the way and embrace generative AI in "a responsible manner" to capitalize on consumers' enthusiasm for the technology.