CONTROVERSIAL FACIAL RECOGNITION TOOL THAT IDENTIFIES EMOTIONS IS RETIRED BY MICROSOFT


Thu 23 June 2022:

Amid the debate over controversial facial recognition technology, Microsoft has announced that it will restrict public access to several of its AI-powered facial analysis tools, including retiring capabilities that purport to infer emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup.

The tech giant said it will not provide open-ended API (application programming interface) access to technology that scans people’s faces and purports to infer their emotional states from their facial expressions or movements.

The decision has been taken as part of Microsoft’s ‘Responsible AI Standard’, a framework to guide how it builds AI systems.

“AI is becoming more and more a part of our lives, and yet, our laws are lagging behind. They have not caught up with AI’s unique risks or society’s needs,” said Natasha Crampton, Chief Responsible AI Officer at Microsoft.

“While we see signs that government action on AI is expanding, we also recognise our responsibility to act. We believe that we need to work towards ensuring AI systems are responsible by design,” she said in a statement.

Microsoft also introduced similar restrictions to its Custom Neural Voice feature, which lets users create AI voices based on recordings of real people.

Building upon what Microsoft learned from Custom Neural Voice, it will apply similar controls to its facial recognition services.

“After a transition period for existing customers, we are limiting access to these services to managed customers and partners, narrowing the use cases to pre-defined acceptable ones, and leveraging technical controls engineered into the services,” the company announced.

Microsoft said it will stop offering these features to new customers from June 21, 2022, while existing customers will have their access revoked on June 30, 2023.

“As part of our work to align our ‘Azure Face’ service to the requirements of the Responsible AI Standard, we are also retiring capabilities that infer emotional states and identity attributes such as gender, age, smile, facial hair, hair and makeup,” the company added.
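The attributes named above map onto the optional face attributes that the Azure Face detection API could return before this change. As a rough illustration only, the sketch below shows what such a request looked like; it assumes the Face v1.0 REST "detect" endpoint and attribute names as documented around the time of the announcement, and the resource endpoint and key shown are hypothetical placeholders.

```python
# Illustrative sketch only: an Azure Face "detect" call requesting the attribute
# types being retired (emotion, gender, age, smile, facial hair, hair, makeup).
# The endpoint and key below are hypothetical placeholders, not real credentials.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"  # hypothetical resource
KEY = "<subscription-key>"  # hypothetical key

def detect_face_attributes(image_url: str) -> list:
    """Ask the Face v1.0 detect endpoint for the now-retired face attributes."""
    response = requests.post(
        f"{ENDPOINT}/face/v1.0/detect",
        params={
            # Attribute names follow the FaceAttributeType values documented
            # for the service before the 2022 retirement announcement.
            "returnFaceAttributes": "emotion,gender,age,smile,facialHair,hair,makeup",
            "returnFaceId": "false",
        },
        headers={
            "Ocp-Apim-Subscription-Key": KEY,
            "Content-Type": "application/json",
        },
        json={"url": image_url},
        timeout=10,
    )
    response.raise_for_status()
    # Each detected face carries a "faceAttributes" object with the requested fields.
    return response.json()

if __name__ == "__main__":
    for face in detect_face_attributes("https://example.com/portrait.jpg"):
        print(face.get("faceAttributes", {}))
```

Under the new policy, requests of this kind move behind Microsoft's managed-access process rather than being available through open-ended API keys.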

The company collaborated with internal and external researchers to understand the limitations and potential benefits of this technology and navigate the tradeoffs.

“In the case of emotion classification specifically, these efforts raised important questions about privacy,” said Sarah Bird, Principal Group Product Manager, Azure AI.

SOURCE: INDEPENDENT PRESS AND NEWS AGENCIES
