Brad Smith, the president of Microsoft, stated on Thursday that deep fakes—realistic-appearing but fake content—were his main concern regarding artificial intelligence.
In a speech in Washington on how to regulate AI, an issue that has moved from wonky to mainstream with the arrival of OpenAI's ChatGPT, Smith called for measures to ensure that people can tell when a photo or video is real and when it has been generated by AI, potentially for malicious purposes.
"We need to address the problems with deep fakes. The types of actions already being carried out by the Russian government, the Chinese government, and the Iranian government are what we worry about most when it comes to foreign cyber influence operations, and this is something we'll have to specifically address," he said.
He said safeguards are needed to prevent AI from being used to alter legitimate content with the intent to deceive or defraud people.
Smith also urged licensing for the most critical forms of AI, with "obligations to protect the security, physical security, cybersecurity, and national security."
"We will need a new generation of export controls, at least the evolution of the export controls we currently have," he added, to ensure these models are not stolen or used in ways that violate the nation's export control rules.
For weeks, lawmakers in Washington have been debating what rules to pass to govern AI, even as companies large and small race to bring increasingly versatile AI to market.
Last week, in his first appearance before Congress, Sam Altman, CEO of ChatGPT maker OpenAI, said that the use of AI to interfere with election integrity is a "significant area of concern" and needs to be regulated.
Microsoft is a major backer of OpenAI, which has called for international cooperation on AI and financial incentives for safety compliance.
Smith also argued, in the speech and in a blog post published Thursday, that people must be held accountable for problems caused by AI. He urged lawmakers to require safety brakes on AI used to manage critical infrastructure such as the electric grid and the water supply, so that humans remain in control.
He also advocated a "Know Your Customer"-style system for developers of powerful AI models, to keep tabs on how their technology is used and to inform the public about what content is AI-generated so that people can identify fake videos.
Some legislation under consideration on Capitol Hill would focus on AI that could endanger people's lives or livelihoods, such as in medicine and finance. Others are pushing for rules to ensure that AI is not used to discriminate or to violate human rights.