Artificial intelligence (AI) researchers are urging the government to "urgently rethink" its current plans for regulating AI in the UK.
A recent report by the Ada Lovelace Institute, which reviews the UK's plans for new artificial intelligence (AI) regulations, makes 18 recommendations.
According to The Evening Standard, the report found that private individuals have very few legal protections or avenues for redress when an AI system goes wrong or makes a discriminatory decision.
This comes after a study by the same organization in June that surveyed 4,000 UK adults and found that 62 percent wanted laws and regulations to govern the use of AI technologies, 59 percent wanted a clear process for appealing a decision made by AI, and 54 percent wanted "clear explanations of how AI works."
The UK will host the first AI safety summit this fall, an event Prime Minister Rishi Sunak is eager to lead. He will seek bilateral support during the meeting to advance AI legislation.
The researchers fear that public protections will erode in the future if draft legislation, such as the Data Protection and Digital Information Bill currently before the House of Lords, is not amended.
The researchers also want the AI safety summit to include representatives from a range of social groups, not just politicians.
Doubts Over Success of International Agreements
The researchers warned that international agreements are unlikely to succeed in averting harm and making AI safer unless they are backed by "robust domestic regulatory frameworks" capable of shaping corporate incentives and, in particular, the behavior of AI developers.
The report also emphasizes the need to steer clear of "speculative" claims about AI systems, such as the notion that AI could wipe out humanity in just two years. Rather than fixating on such "existential risks," it argues, harms can be better avoided by working more closely with AI developers as they create new products.
The UK's Regulations on AI
The UK government has taken a proactive approach to regulating AI and has published a number of documents setting out its plans.
In 2021, the government published its National AI Strategy, which set out the government's vision for AI in the UK. The strategy included a commitment to developing a regulatory framework for AI that is "proportionate, effective, and agile."
In 2023, the government published a White Paper on Artificial Intelligence, which set out more detailed proposals for regulating AI.
The White Paper proposed a new regulatory framework focused on the safety, fairness, and accountability of AI systems. The framework would take a risk-based approach, imposing more stringent requirements on high-risk AI systems.
The UK's approach to AI regulation is widely seen as forward-looking. The government's focus on safety, fairness, and accountability aligns with international best practices, and its risk-based approach could help ensure that AI systems are used safely and responsibly.