Managing the “enormous” promise and risks posed by artificial intelligence will require new commitments from the companies leading its development, including Amazon, Google, Meta, and Microsoft, President Joe Biden said as he announced a set of AI safeguards negotiated by his White House.
Biden said his administration has secured voluntary commitments from seven U.S. companies to ensure their AI products are safe before they are released. The commitments call for third-party oversight of the next generation of AI systems, though several of them do not specify who will audit the technology or hold the companies accountable.
“We must be clear-eyed and vigilant about the threats that emerging technologies can pose,” Biden said, adding that the companies have a “fundamental obligation” to ensure their products are safe.
Biden continued: “Social media has shown us the damage that powerful technology can cause without the proper controls in place. We still have a lot of work to do together, but these commitments are a good first step.”
A surge of commercial investment in generative AI tools that can produce convincingly human-like text, novel images, and other media has fascinated the public, but it has also raised concerns about the technology's capacity to deceive people and spread misinformation.
The four tech giants, along with ChatGPT maker OpenAI and the startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as those to biosecurity and cybersecurity, the White House said in a statement.
That testing will also examine the potential for societal harms such as bias and discrimination, along with more theoretical dangers posed by advanced AI systems that could take control of physical systems or “self-replicate” by making copies of themselves.
The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish authentic images and audio from AI-generated deepfakes.
Biden and other officials met privately with executives from the seven companies on Friday to secure their commitment to the standards.
In an interview conducted following the White House event, Inflection CEO Mustafa Suleyman noted that the president “was very firm and clear” in his desire for the businesses to keep innovating while also “feeling that this needed a lot of attention.”
“It’s a big deal to bring all the labs together, all the companies,” said Suleyman, whose Palo Alto, California-based startup is the newest and smallest of the firms. “We wouldn’t cooperate in this situation otherwise, since it’s so competitive.”
Under the agreement, the companies will also publicly report flaws and risks in their technology, including effects on fairness and bias.
The voluntary commitments are meant as an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology. Some advocates of AI regulation said Biden's move is a start, but more must be done to hold the companies and their products accountable.
Amba Kak, executive director of the AI Now Institute, said a closed-door deliberation with corporate actors that results in voluntary safeguards isn't enough. “We need a much broader public discussion, and that will raise concerns that businesses almost certainly won’t agree to, since doing so would produce materially different outcomes with a more immediate impact on their business models.”
Suleyman said that agreeing to participate in “red team” exercises that probe their AI systems is not an easy pledge to make.
“The commitment we’ve made to have red-teamers basically try to break our models, identify weaknesses, and then share those methods with the other large language model developers is a pretty significant commitment,” said Suleyman.
Senate Majority Leader Chuck Schumer, D-N.Y., who has said he will introduce legislation to regulate AI, is working closely with the Biden administration “and our bipartisan colleagues” to build on Friday's commitments.
Technology executives have themselves called for regulation, and several attended a White House gathering in May.
In a blog post published on Friday, Microsoft President Brad Smith said his company is making commitments that go beyond the White House pledge, including support for legislation that would create a “licensing regime for highly capable models.”
Some experts and upstart competitors worry that the type of regulation being floated could benefit deep-pocketed first movers led by OpenAI, Google, and Microsoft, as smaller players are pushed out of the market by the high cost of making their AI systems meet regulatory requirements.
A number of countries have been weighing how to govern AI, with European Union lawmakers drafting sweeping AI rules for the 27-nation bloc that could restrict the uses considered highest risk.
The United Nations is “the ideal place” to create international norms, according to U.N. Secretary-General Antonio Guterres, who has appointed a group to make recommendations on possibilities for global AI governance by the end of the year.
Additionally, Guterres stated that he supported requests from certain nations to establish a new U.N. organization to help international efforts to regulate AI, drawing inspiration from organizations like the International Atomic Energy Agency and the Intergovernmental Panel on Climate Change.
The White House announced on Friday that it has held consultations with many nations over the voluntary commitments.
The pledge focuses primarily on safety risks but does not address other concerns about the latest AI technology, including its effect on jobs and market competition, the environmental resources required to build the models, and copyright questions about using human writing, art, and other creations to train AI systems to produce human-like content.