ARIA Chief Warns of Rapidly Advancing AI Systems

Matt Clifford, chief of the government’s Advanced Research and Invention Agency (ARIA), emphasized in an interview with a local U.K. media outlet that current systems are becoming “more and more capable at an ever-increasing rate.”

In two years, he continued, the systems will become “very powerful” if officials do not consider safety and regulations now.

Clifford cautioned that there are “many different types of risks” associated with AI, both in the short and long term, which he described as “pretty scary.”

Clifford, an adviser to the U.K. government’s AI task force, stated that these threats could be “hazardous” and could “kill many humans, but not all humans, based on where we expect models to be in two years.”

“We’ve got two years to get in place a framework that makes both controlling and regulating these very large models much more possible than it is today.”

The interview followed the recent publication of an open letter by the Center for AI Safety, signed by 350 AI experts, including OpenAI CEO Sam Altman, stating that AI should be regarded as an existential threat comparable to nuclear weapons and pandemics.

“They’re talking about what happens once we effectively create a new species, sort of an intelligence that’s greater than humans.”

According to Clifford, the primary focus of regulators and developers should be comprehending how to control models and then implementing global regulations.

For the time being, he stated that his most significant concern is the lack of comprehension regarding why AI models behave as they do.

“The people who are building the most capable systems freely admit that they don’t understand exactly how [AI systems] exhibit the behaviors that they do.”

Clifford emphasized that many executives of organizations developing AI concur that powerful AI models must be subjected to an audit and evaluation procedure before deployment.

Regulators worldwide are scrambling to comprehend the technology and its implications while attempting to create regulations that safeguard users and permit innovation.

European Union officials suggested on June 5 that all content generated by artificial intelligence be labeled as such to prevent disinformation.

In the United Kingdom, a front-bench Labour Party member echoed the sentiments expressed in the Center for AI Safety’s letter, stating that the technology should be regulated similarly to medicine and nuclear power.
