AI COULD CAUSE ‘NUCLEAR-LEVEL’ CATASTROPHE, THIRD OF EXPERTS SAY


Sat 15 Apr 2023

More than one-third of researchers believe artificial intelligence (AI) could lead to a “nuclear-level catastrophe”, according to a Stanford University survey, underscoring concerns in the sector about the risks posed by the rapidly advancing technology.

The survey is among the findings highlighted in the 2023 AI Index Report, released by the Stanford Institute for Human-Centered Artificial Intelligence, which explores the latest developments, risks and opportunities in the burgeoning field of AI.

“These systems demonstrate capabilities in question answering, and the generation of text, image, and code unimagined a decade ago, and they outperform the state of the art on many benchmarks, old and new,” the report’s authors say.


“However, they are prone to hallucination, routinely biased, and can be tricked into serving nefarious aims, highlighting the complicated ethical challenges associated with their deployment.”

The report, which was released earlier this month, comes amid growing calls for regulation of AI following controversies ranging from a chatbot-linked suicide to deepfake videos of Ukrainian President Volodymyr Zelenskyy appearing to surrender to invading Russian forces.

Last month, Elon Musk and Apple co-founder Steve Wozniak were among 1,300 signatories of an open letter calling for a six-month pause on training AI systems more powerful than OpenAI’s chatbot GPT-4, arguing that “powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable”.


In the survey highlighted in the 2023 AI Index Report, 36 percent of researchers said AI-made decisions could lead to a nuclear-level catastrophe, while 73 percent said they could soon lead to “revolutionary societal change”.

The survey polled 327 experts in natural language processing, a branch of computer science key to the development of chatbots like GPT-4, between May and June last year, before the release of OpenAI’s ChatGPT in November took the tech world by storm.

In an Ipsos poll of the general public, also highlighted in the index, Americans appeared especially wary of AI: only 35 percent agreed that “products and services using AI had more benefits than drawbacks”, compared with 78 percent of Chinese respondents, 76 percent of Saudi Arabian respondents, and 71 percent of Indian respondents.

The Stanford report also noted that the number of “incidents and controversies” associated with AI had increased 26-fold over the past decade.

Government moves to regulate and control AI are gaining ground.


China’s Cyberspace Administration this week announced draft regulations for generative AI, the technology behind GPT-4 and domestic rivals like Alibaba’s Tongyi Qianwen and Baidu’s ERNIE, to ensure the technology adheres to the “core value of socialism” and does not undermine the government.

The European Union has proposed the “Artificial Intelligence Act” to govern which kinds of AI are acceptable for use and which should be banned.

US public wariness about AI has yet to translate into federal regulations, but the Biden administration this week announced the launch of public consultations on how to ensure that “AI systems are legal, effective, ethical, safe, and otherwise trustworthy”.
