U.K. Prime Minister Rishi Sunak claimed that the risks posed by AI are as dangerous as a nuclear war. His warning came as he called on other world leaders to work together to regulate the rising AI industry.
As of writing, he is urging the United States, Canada, the European Union, and Japan to secure an agreement that will help them combat the risks of AI.
UK PM Rishi Sunak Says AI Risks as Dangerous as Nuclear War
According to The Guardian's latest report, Rishi Sunak shared his warning on Thursday, Nov. 2.
He said that he's hoping to secure an agreement with other world leaders on how they can work together to test AI tools before they are released.
He explained that some people in the tech industry, including those working on artificial intelligence, are well aware of the technology's risks and have themselves raised alarms.
"There's debate about this topic. People in the industry themselves don't agree and we can't be certain," he explained.
But the U.K. prime minister said there is one possibility that world leaders should all take seriously.
"But there is a case to believe that it may pose a risk on a scale like pandemics and nuclear war, and that's why, as leaders, we have a responsibility to act to take the steps to protect people, and that's exactly what we're doing," said Sunak.
Sunak Says AI Firms Should Not Do the Checking Themselves
BBC News reported that Rishi Sunak told world leaders that AI companies should not be left to "mark their own homework."
This means that the British prime minister believes these tech firms should not be allowed to vet their own products.
Sunak admitted that artificial intelligence tools can be beneficial for the National Health Service, schools, and other organizations.
However, he clarified that these AI products should be tested by governments to ensure that the tools keep citizens safe from any risk.
He suggested that government regulators and other external parties should be the ones testing new AI tools before they are released to the market.