At this week’s annual high-level United Nations meeting, world leaders and diplomats will address several significant and intricate global issues, including artificial intelligence.
ChatGPT’s introduction almost three years ago marked the beginning of the AI boom, and since then, the world has been in awe of the technology’s astounding potential.
Even as scientists warn of the perils of artificial intelligence (AI), including existential threats such as engineered pandemics, widespread disinformation, and AI systems escaping human control, tech corporations have rushed to develop ever more powerful AI systems.
The most recent and significant attempt to control AI is the U.N.’s adoption of a new governance framework.
Prior international initiatives, such as the three AI summits hosted by South Korea, France, and the United Kingdom, have only yielded non-binding commitments.
In a historic step to guide global governance efforts for the technology, the General Assembly last month passed a resolution to establish two key entities related to AI: an independent scientific panel of experts and a global forum.
An open discussion on the matter will be held at a U.N. Security Council meeting on Wednesday.
Among the issues to be discussed is how the Council can promote peace processes and conflict avoidance while ensuring the proper use of AI in accordance with international law.
Additionally, U.N. Secretary-General António Guterres will host a meeting to introduce the forum, known as the Global Dialogue on AI Governance, on Thursday as part of the body’s annual gathering.
It serves as a forum for governments and “stakeholders” to exchange ideas and solutions and to discuss international collaboration. Formal meetings are planned for Geneva next year and New York in 2027.
In the interim, recruitment is expected to begin to select 40 experts for the scientific panel, including two co-chairs: one from a developed country and one from a developing country.
The panel has been compared to the U.N.’s climate change panel, and the forum to its premier annual COP summit.
The new bodies represent “a symbolic triumph,” Isabella Wilkinson, a research fellow at the London-based think tank Chatham House, wrote in a blog post, calling them “by far the world’s most globally inclusive approach to governing AI.”
“However, it appears that the new mechanisms will be largely ineffective in practice,” she continued.
Whether the U.N.’s sluggish administration can govern a rapidly evolving technology like artificial intelligence is one of the potential problems.
Ahead of the summit, a group of specialists demanded that nations agree on so-called red lines for AI, to be implemented by the end of next year.
They stated that the technology requires “minimum guardrails” intended to prevent the “most urgent and unacceptable risks.”
The group, which includes top executives from Google’s AI research lab DeepMind; OpenAI, the maker of ChatGPT; and the chatbot maker Anthropic, wants states to sign an internationally enforceable AI accord.
They point out that the world has already agreed upon treaties prohibiting biological weapons and nuclear testing and protecting the high seas.
One of the supporters, Stuart Russell, a professor of computer science and the director of the Center for Human Compatible AI at the University of California, Berkeley, described the concept as “very simple.”
“We can demand that developers demonstrate safety as a requirement of market access, just like we do with medications and nuclear power plants.”
Russell proposed that the International Civil Aviation Organization, another U.N.-affiliated body that works with safety regulators in various nations to keep them aligned, could serve as a model for U.N. governance.
Additionally, he suggested that instead of establishing a set of unchangeable regulations, diplomats could create a “framework convention” that is adaptable enough to be modified to account for the most recent developments in AI.