UN URGES MORATORIUM ON USE OF AI THAT IMPERILS HUMAN RIGHTS


Wed 15 September 2021:

Michelle Bachelet, the UN High Commissioner for Human Rights, has called for a moratorium on the sale and use of artificial intelligence (AI) systems that endanger human rights, her office said.

“UN High Commissioner for Human Rights Michelle Bachelet on Wednesday stressed the urgent need for a moratorium on the sale and use of artificial intelligence (AI) systems that pose a serious risk to human rights until adequate safeguards are put in place.

“She also called for AI applications that cannot be used in compliance with international human rights law to be banned,” the statement said.

A report published by Bachelet’s office on Wednesday found that AI now reaches into almost every corner of people’s physical and mental lives and has a profound impact on the course of those lives.

“AI systems are used to determine who gets public services, decide who has a chance to be recruited for a job, and of course they affect what information people see and can share online,” she said.

Her comments came with a new UN report that examines how countries and businesses have rushed into applying AI systems that affect people’s lives and livelihoods without setting up proper safeguards to prevent discrimination and other harms.

She didn’t call for an outright ban of facial recognition technology, but said governments should halt the scanning of people’s features in real time until they can show the technology is accurate, does not discriminate, and meets certain privacy and data protection standards.

While the report did not mention any countries by name, China has been among those that have rolled out facial recognition technology, particularly as part of surveillance in the western region of Xinjiang, where many of its minority Uyghurs live.

The report also voices wariness about tools that try to deduce people’s emotional and mental states by analyzing their facial expressions or body movements, saying such technology is susceptible to bias and misinterpretation and lacks a scientific basis.

“The use of emotion recognition systems by public authorities, for instance for singling out individuals for police stops or arrests or to assess the veracity of statements during interrogations, risks undermining human rights, such as the rights to privacy, to liberty and to a fair trial,” the report says.

The report’s recommendations echo the thinking of many political leaders in Western democracies, who hope to tap into AI’s economic and societal potential while addressing growing concerns about the reliability of tools that can track and profile individuals and make recommendations about who gets access to jobs, loans and educational opportunities.

