The Face-recognition system on Facebook will be turned off and data will be deleted.

In response to rising worries about the technology and its misuse by governments, police, and others, Facebook said that it will shut down its face-recognition system and wipe the faceprints of more than 1 billion individuals.

In a blog post on Tuesday, Jerome Pesenti, vice president of artificial intelligence for Facebook’s new parent company, Meta, said, “This transition will constitute one of the largest shifts in facial recognition usage in the technology’s history.”

He said the company was weighing the technology’s positive applications “against mounting societal concerns, especially since regulators have yet to issue clear restrictions.” “More than a billion people’s individual facial recognition templates” will be deleted in the coming weeks, he said.

The about-face by Facebook comes after a hectic few weeks. It unveiled its new name, Meta, for the corporation, but not the social network, on Thursday. The move, it claims, would allow it to focus on developing technology for the “metaverse,” which it envisions as the next incarnation of the internet.

The corporation is also dealing with what may be its worst public relations crisis to date after records disclosed by whistleblower Frances Haugen revealed that it was aware of the problems caused by its products but did little or nothing to ameliorate them.

More than a third of Facebook’s daily active users, roughly 640 million people, have agreed to have their faces recognized by the platform. Facebook first deployed face recognition more than a decade ago, but as it drew criticism from courts and regulators, it progressively made it easier to opt out of the tool.

In 2019, Facebook stopped automatically recognizing people in photographs and suggesting that they be “tagged,” instead letting users choose whether to use its facial recognition tool at all.

According to Kristen Martin, a professor of technology ethics at the University of Notre Dame, Facebook’s choice to shut down its system “is a solid example of attempting to make product decisions that are good for the customer and the corporation.” She went on to say that the change underscores the power of public and regulatory pressure, given that the facial recognition system has been criticized for more than a decade.

Facebook’s parent firm, Meta Platforms Inc., appears to be exploring new methods of identifying people. According to Pesenti, the announcement on Tuesday is part of a “company-wide shift away from broad identification and toward specific kinds of personal authentication.”

“Facial recognition can be very helpful when the technology functions discreetly on a person’s own devices,” he added. “Today, the systems used to unlock smartphones most typically use this kind of on-device facial recognition, which requires no transmission of face data to an external server.”

Face ID, Apple’s technique for unlocking iPhones, is powered by this technology.

Researchers and privacy advocates have questioned the internet industry’s use of face-scanning software for years, citing studies that revealed it worked unevenly across racial, gender, and age lines. One issue is that the technology could misidentify those with a darker complexion.

Another issue with face recognition is that it requires companies to create unique faceprints of large numbers of people – often without their consent and in ways that can be used to fuel tracking systems, according to Nathan Wessler of the American Civil Liberties Union, which has fought Facebook and other companies over their use of the technology.

“This is a huge step forward in acknowledging that this technology is fundamentally harmful,” he said.

Last year, Facebook found itself on the other side of the dispute when it demanded that facial recognition startup Clearview AI, which works with law enforcement, stop mining Facebook and Instagram users’ photographs to identify the people in them.

Concerns have also grown as more people become aware of the Chinese government’s vast video monitoring system, which has been deployed in a region with a strong Muslim ethnic minority population.

Facebook’s massive database of user-posted photographs has helped advance computer vision, a branch of artificial intelligence. Many of those research teams have now been redirected toward Meta’s augmented reality ambitions, in which the company envisions future consumers wearing glasses to experience a blend of virtual and real worlds. Those technologies, in turn, may raise additional questions about how biometric data is collected and tracked.

When asked how consumers could verify that their image data was erased and what the business will do with its underlying face-recognition technology, Facebook gave vague replies.

On the first issue, company spokesperson Jason Grosse wrote in an email that if users’ face-recognition settings are turned on, their templates will be “tagged for deletion,” and that the deletion process will be completed and validated in the “coming weeks.” On the second point, Grosse said Facebook will “switch off” components of the system tied to the face-recognition settings.

Other U.S. tech companies such as Amazon, Microsoft, and IBM decided last year to cease or pause their sales of facial recognition software to police, citing worries about false identifications and amid a broader awakening in the United States about policing and racial inequality.

Concerns about civil rights violations, racial bias, and invasion of privacy have led at least seven states and almost two dozen localities in the United States to restrict government use of the technology.

In October, President Joe Biden’s science and technology office launched a fact-finding mission to investigate facial recognition and other biometric capabilities used to identify people or assess their emotional, mental, or character states. European legislators and regulators have also made steps to prevent law enforcement from scanning people’s faces in public places.

Face-scanning practices contributed to the $5 billion fine and privacy restrictions the Federal Trade Commission imposed on Facebook in 2019. Facebook agreed to a settlement with the FTC that includes a requirement to provide “clear and prominent” notice before using face recognition technology on people’s images and videos.

In addition, the corporation agreed to pay $650 million earlier this year to resolve a 2015 lawsuit alleging that it violated an Illinois privacy statute by using photo-tagging without consumers’ authorization.

“It’s a significant issue, it’s a big movement,” said John Davisson, senior counsel at the Electronic Privacy Information Center. “But it’s also far, far too late.” EPIC filed its first complaint with the FTC in 2011, a year after Facebook’s facial recognition feature was launched.
