Artificial intelligence may become dangerous due to “societal misalignments,” OpenAI CEO warns.

The CEO of OpenAI, the company that makes ChatGPT, said on Tuesday that the AI risks that keep him up at night are the “very subtle societal misalignments” that could let the systems wreak havoc.

In a video address to the World Governments Summit in Dubai, Sam Altman repeated his call for the creation of a body analogous to the International Atomic Energy Agency to oversee artificial intelligence, a technology he suggested is advancing faster than most people realize.

“There are certain situations where it’s easy to see how things go wrong. And I’m not all that interested in killer robots strolling down the street as the way things go wrong,” Altman remarked. “I’m much more interested in the very subtle societal misalignments where things just go wrong in society with these systems in place and no malice intended.”

But Altman emphasized that the AI industry, including OpenAI, shouldn’t be the one in charge of writing the rules that govern it.

“We’re still having a lot of conversations. Conferences are being held throughout the world, and it’s fine that everyone has ideas and policy papers,” Altman said. “I believe that debate is still necessary and healthy at this point, but eventually I believe we need to move towards an action plan that has genuine support on a global scale.”

The San Francisco-based startup OpenAI is one of the pioneers in the artificial intelligence space, and Microsoft has invested billions of dollars in the company. The New York Times, meanwhile, has sued both OpenAI and Microsoft, alleging they used its content without consent to train their chatbots.

Because of OpenAI’s success, Altman has become the public face of generative AI’s rapid commercialization and of the anxieties around its potential applications.

There are signs of those concerns in the UAE, an authoritarian federation of seven sheikhdoms headed by hereditary rulers, where speech remains strictly regulated. Those restrictions affect the flow of reliable information, which is exactly what machine learning systems and AI programs like ChatGPT depend on to provide answers to users.

The Emirates is also home to G42, an Abu Dhabi company overseen by the country’s influential national security adviser. According to specialists, G42 possesses the world’s most advanced Arabic-language artificial intelligence model. The company has faced spying accusations over its ties to a mobile app classified as spyware, as well as allegations that it secretly gathered genetic material from Americans for the Chinese government.

In response to concerns from the United States, G42 has said it would sever its connections with Chinese suppliers. None of these regional issues, however, came up in the conversation with Altman, which was moderated by Omar al-Olama, the UAE’s Minister of State for Artificial Intelligence.

For his part, Altman expressed satisfaction with the way schools are embracing AI as essential to the future, unlike in the past when educators worried that pupils would use it to write papers for them. He noted, however, that AI is still in its infancy.

“I believe the reason is that the technology we have now is comparable to that first black-and-white cellphone,” Altman stated. “So give us some time. Still, I believe things will improve significantly in the coming years compared to now, and in ten years it ought to be quite remarkable.”
