Vastly intelligent artificial intelligence (AI) systems may have the capability to "kill humans" within the next two years, UK Prime Minister Rishi Sunak's adviser on AI has warned. Setting the doomsday clock ticking, Matt Clifford, who is currently assembling a government AI taskforce, said policymakers from across the globe need to work together to control the technology, which could otherwise have devastating consequences.
"You can have really very dangerous threats to humans that could kill many humans, not all humans, simply from where we'd expect models to be in two years' time," said Clifford.
He said humans should be prepared for threats ranging from cyberattacks to the creation of bioweapons if AI is allowed to develop unchecked.
“The kind of existential risk that I think the letter writers were talking about is…what happens once we effectively create a new species, you know an intelligence that is greater than humans.”
“If we try and create artificial intelligence that is more intelligent than humans and we don’t know how to control it, then that’s going to create a potential for all sorts of risks now and in the future…it’s right that it should be very high on the policymakers’ agendas.”
Quizzed on what percentage chance he gave to the hypothesis that humanity could be wiped out by AI, Clifford said: "I think it is not zero."
The letter calling for an AI pause
Earlier in March, an open letter by the Future of Life Institute, a think tank, signed by Elon Musk and Steve Wozniak among others, took a very cautious approach to the next generation of AI. The letter cited 12 pieces of research from experts, including former employees of OpenAI, Google and DeepMind.
“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable,” the letter stated.
“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” it added.
AI could eviscerate humanity
Musk, who co-founded OpenAI, which is now backed by Microsoft, has been one of the most vocal voices demanding that AI research be regulated. In an interview, Musk stated that AI has the potential to destroy an entire civilisation.
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production. In the sense that it has the potential, however small one may regard that probability, but it is non-trivial – it has the potential of civilisation destruction.”
Last week, 350 AI experts, including OpenAI CEO Sam Altman, conceded that there was a longer-term risk that the technology could lead to the extinction of humanity.
(With inputs from agencies)