Innovation and Accountability: AI Regulation in the EU, US, and UK

The EU has moved fast on safety and transparency rules for artificial intelligence and is set to impose strict restrictions on AI in scenarios deemed “high-risk”. The US, meanwhile, has been more hesitant, its major tech firms acting as a counterweight to public calls for regulation. In the UK, the government points toward a “pro-innovation” approach intended to avoid stifling competition in AI.

The rapid rise of AI technologies has prompted governments worldwide to establish robust regulation that ensures ethical practice while fostering innovation. Understanding how these key global players approach AI regulation offers some insight into the evolving landscape of AI governance and its impact on the future of technology and society.

The EU is set to introduce some of the world’s strictest safety and transparency regulations for AI. The European Parliament approved a draft of the AI Act on June 14th, which would impose sweeping obligations on AI technologies and restrict the use of AI in scenarios considered “high-risk”.

The Act would impose a full ban on AI for biometric surveillance, emotion recognition, and predictive policing, and generative AI systems like ChatGPT would have to disclose that content has been AI-generated. It would also set strict rules for AI used in autonomous vehicles, certain medical devices, and credit-scoring systems, which fall into the “high-risk” category.

Violating restricted AI practices could result in substantial penalties of up to €40 million ($43 million) or 7% of a company’s global annual turnover, whichever is greater. “We have made history today,” Brando Benifei, a member of the European Parliament, declared at a news conference.
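
To make the “whichever is greater” rule concrete, here is a minimal Python sketch of the worst-case penalty calculation. The worst_case_fine helper and the example turnover figure are illustrative assumptions, not anything defined in the Act, and the draft’s figures could still change in negotiation.

```python
def worst_case_fine(global_annual_turnover_eur: float) -> float:
    """Illustrative only: the draft AI Act's maximum penalty is the
    greater of EUR 40 million or 7% of global annual turnover."""
    FLAT_CAP_EUR = 40_000_000   # EUR 40 million flat ceiling
    TURNOVER_RATE = 0.07        # 7% of global annual turnover
    return max(FLAT_CAP_EUR, TURNOVER_RATE * global_annual_turnover_eur)

# Hypothetical firm with EUR 2 billion turnover: 7% is EUR 140 million,
# which exceeds the EUR 40 million floor, so the higher figure applies.
print(worst_case_fine(2_000_000_000))  # 140000000.0
```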

“The EU is poised to establish itself as a frontrunner in regulating artificial intelligence, but its role in driving AI innovation remains to be seen,” said Boniface de Champris, Europe policy manager at the tech industry group CCIA. “Europe’s fresh AI rules must adeptly tackle well-defined risks, while offering developers adequate flexibility to provide advantageous AI applications for the benefit of all Europeans.”

While the EU aims to protect its citizens from discriminatory and harmful AI practices, questions have been raised over whether the regulation goes too far. Several top business leaders in Europe have expressed reservations, cautioning that these rules may harm the bloc’s competitiveness and deter investment.

In an open letter addressed to EU lawmakers on Friday, C-suite executives from prominent companies such as Siemens (SIEGY), Carrefour (CRERF), Renault (RNLSY), and Airbus (EADSF) raised “serious concerns” over the new Act. Figures from the tech industry, including Yann LeCun, chief AI scientist at Meta (FB), and Hermann Hauser, co-founder of the British chipmaker ARM, joined the other prominent signatories:

“In our assessment, the draft legislation would jeopardise Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.”

An Open Letter warning against over-regulation of AI in draft EU laws

The draft will now undergo further discussions and negotiations with the Council of the European Union and EU member states; a few final steps remain before the legislation takes effect.

Currently, AI regulation in the United States is in its early stages, lacking comprehensive federal legislation dedicated solely to AI. The US government has found itself somewhat between a rock and a hard place — aware of AI’s potential risks, but lobbied by powerful domestic tech firms not to take legislation too far. 

The US is adopting a generally permissive approach, broadly described as risk-based, sector-specific, and decentralised across federal agencies. Nevertheless, existing laws and regulations already address certain aspects of AI, including privacy, security, and anti-discrimination.

On May 16, the US Senate held a hearing on regulating AI. That same month, the White House announced an “updated roadmap” to focus federal investment in AI research and development, issued a request for public input on critical AI issues, and commissioned a new report on the risks and opportunities of AI in education.

Today, the Biden-Harris Administration is announcing new efforts that will advance the research, development, and deployment of responsible artificial intelligence (AI) that protects individuals’ rights and safety and delivers results for the American people.

White House Fact Sheet: ‘Biden-Harris Administration Takes New Steps to Advance Responsible Artificial Intelligence…

In July, the Biden administration secured a series of “voluntary” safety commitments from prominent industry players. These commitments encompass internal and third-party testing of AI products, aiming to ensure their resilience against cyberattacks and to prevent misuse by “bad actors”. Seven companies signed up to this informal agreement: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI.

These commitments offer some accountability and can serve as a basis for future government action aimed at ensuring the safe and reliable development of AI systems.

Multiple federal agencies are actively involved in exploring AI policy and issuing guidelines. The US Copyright Office, for example, has determined that most texts, images, and videos generated by AI systems cannot be copyrighted as original works, since they were not created by a human.

Generative models such as GPT-4 and Stable Diffusion rely heavily on extensive training datasets that often contain copyrighted text and images, a situation that has already resulted in numerous legal disputes.

The United States government has been slow to develop a coherent response to AI regulation. Balancing the potential risks and benefits of AI, it faces pressure from influential domestic tech firms to avoid over-regulation. While the Biden administration has taken some steps towards advancing responsible AI, the current approach remains permissive and decentralised across federal agencies.

The voluntary safety commitments from major industry players are a positive step towards ensuring the resilience of AI products and preventing misuse. However, the absence of binding rules and a unified federal approach leaves room for improvement.

In the UK, the government has outlined a broadly libertarian approach to AI regulation, described as a “pro-innovation” stance. On 29th March 2023, the UK Government released a White Paper titled “A pro-innovation approach to AI regulation”. In contrast to the EU, the UK does not intend to introduce new legislation or establish a dedicated AI regulator. Instead, the White Paper presents five fundamental principles that regulators are expected to take on board to ensure proper conduct regarding AI across all industries.

The five principles for AI regulation are:

1. Safety, Security, and Robustness: AI systems should function safely and securely.
2. Transparency and Explainability: developers should communicate the purpose and workings of their AI systems.
3. Fairness: AI systems must not discriminate or foster unfair outcomes.
4. Accountability and Governance: there must be effective oversight and clear responsibility.
5. Contestability and Redress: clear channels must be available to dispute harmful AI outcomes or decisions.

I believe that a common-sense, outcomes-oriented approach is the best way to get right to the heart of delivering on the priorities of people across the UK. Better public services, high quality jobs and opportunities to learn the skills that will power our future – these are the priorities that will drive our goal to become a science and technology superpower by 2030. Artificial Intelligence (AI) will play a central part in delivering and enabling these goals, and this white paper will ensure we are putting the UK on course to be the best place in the world to build, test and use AI technology.

A pro-innovation approach to AI regulation

Michelle Donelan MP, Secretary of State for Science, Innovation and Technology

While these principles reflect a forward-looking approach to AI regulation, critics argue that voluntary guidelines alone may not be enough to address the rapidly evolving challenges posed by AI, and that more stringent regulation and clear enforcement mechanisms may be needed to ensure companies adhere to the principles effectively. Concerns have also been raised about potential gaps: novel AI applications or technologies may simply fall outside the scope of the existing principles.

Conclusion 

As the potential of AI continues to grow, so do the concerns surrounding its impact on society, privacy, and ethics. The EU’s decisive move to introduce strict safety and transparency regulations for high-risk AI scenarios shows a proactive approach to addressing potential risks. By imposing clear restrictions and guidelines, the EU aims to create an environment where innovation can thrive while ensuring that AI systems are developed and deployed responsibly, with due consideration for human rights and values. Time will tell if the UK holds firm or moves to emulate the EU more closely in the future.

The US faces unique challenges in AI regulation due to influential tech firms whose interests may conflict with robust regulations. Policymakers must balance industry concerns with social impact, striving for comprehensive federal legislation to provide clarity for businesses and consumers. Overcoming political obstacles is crucial to finding common ground that promotes innovation and public safety. Striking the right regulatory balance becomes even more critical as AI technologies become increasingly integrated into various aspects of our lives. 
