Ilya Sutskever, one of OpenAI’s co-founders, who took part in a failed attempt to remove CEO Sam Altman, says he is launching a safety-focused artificial intelligence startup.
Sutskever, a respected AI researcher who left OpenAI last month, announced on social media Wednesday that he and two co-founders had started Safe Superintelligence Inc.
The company’s main objective is to safely build “superintelligence,” a term for AI systems smarter than humans.
In a prepared statement, Sutskever and his co-founders, Daniel Gross and Daniel Levy, pledged that work on safety and security would be “insulated from short-term commercial pressures” and that the company would not be sidetracked by “management overhead or product cycles.”
The three said Safe Superintelligence is an American company with offices in Palo Alto, California, and Tel Aviv, where it has “deep roots and the ability to recruit top technical talent.”
Sutskever was a member of the OpenAI board that tried and failed to oust Altman last year.
Sutskever later said he regretted the boardroom upheaval, which also set off internal turmoil at OpenAI over whether its executives were putting profit ahead of AI safety.
At OpenAI, Sutskever co-led a team focused on safely developing AI smarter than humans, known as artificial general intelligence, or AGI. When he left, he said he was working on a “very personally meaningful” project but gave no details. He also said the decision to leave OpenAI was his own.
Days after his departure, the team’s other co-leader, Jan Leike, also resigned and criticized OpenAI for letting safety “take a backseat to shiny products.”
OpenAI has since announced a safety and security committee, though it is staffed mostly by company insiders.