Elon Musk and a group of artificial intelligence experts and industry executives are calling for a six-month pause in the development of systems more powerful than OpenAI’s newly released GPT-4, in an open letter citing potential risks to society and humanity.
Microsoft-backed OpenAI unveiled the fourth version of its GPT (Generative Pre-trained Transformer) AI program earlier this month. It has wowed users with its wide range of applications, from engaging them in human-like conversation to composing songs and summarizing documents.
The letter, issued by the nonprofit Future of Life Institute and signed by more than 1,000 people including Musk, urged a pause on advanced AI development until shared safety protocols for such systems were developed, implemented, and audited by independent experts.
The letter said that powerful AI systems should be developed only once their developers are confident that the effects will be positive and the risks manageable.
OpenAI did not immediately respond to a request for comment.
The letter detailed the risks that human-competitive AI systems pose to society and civilization, including the potential for economic and political disruption, and called on developers to work with policymakers and regulators on governance and oversight frameworks.
Co-signatories included Stability AI CEO Emad Mostaque, researchers at DeepMind, Yoshua Bengio, often referred to as one of the “godfathers of AI,” and Stuart Russell, a pioneer in the field of AI research.
The Future of Life Institute is primarily funded by the Musk Foundation, along with the Silicon Valley Community Foundation and the London-based effective altruism group Founders Pledge, according to the European Union’s transparency register.
The concerns come as the EU police agency Europol on Monday joined a chorus of ethical and legal warnings over advanced AI such as ChatGPT, cautioning that the system could be misused in phishing attempts, disinformation campaigns, and other crimes.
Meanwhile, the UK government announced proposals for an “adaptable” regulatory framework for artificial intelligence.
Rather than establishing a new body dedicated to the technology, the government’s approach, outlined in a policy paper published on Wednesday, would split responsibility for regulating artificial intelligence (AI) among its existing regulators for human rights, health and safety, and competition.
Transparency
Musk, whose carmaker Tesla uses AI in its Autopilot driver-assistance system, has been outspoken about his concerns over AI.
Since its debut in late 2022, OpenAI’s ChatGPT has prompted rivals to accelerate the development of similar large language models and spurred companies to integrate generative AI models into their products.
Last week, OpenAI announced that it had partnered with about a dozen businesses to build their services into its chatbot, enabling ChatGPT users to order groceries through Instacart or book travel through Expedia.
Sam Altman, CEO of OpenAI, has not signed the letter, according to a spokesperson for Future of Life.
Gary Marcus, a professor at New York University who signed the letter, said that while it wasn’t flawless, “the letter’s ethos is correct: we need to slow down until we better grasp the repercussions.” The major players are becoming increasingly secretive, he said, which makes it harder for society to defend itself against potential harms.

Critics countered that claims about the technology’s current capabilities had been greatly exaggerated, and accused the letter’s signatories of promoting “AI hype.”
“These kinds of statements are meant to create excitement. They are intended to make people anxious,” said Johanna Bjorklund, an AI researcher and associate professor at Umeå University. “I don’t believe the handbrake needs to be pulled.” Rather than pausing development, she suggested imposing greater transparency requirements on AI researchers. “You should be very clear about how you conduct AI research,” she said.