Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"

Together with colleagues at the Centre for Global Law and Innovation of the University of Bristol Law School, I submitted a response to the UK Government’s public consultation on its ‘pro-innovation’ approach to AI regulation. For an earlier assessment, see here.

The full submission is available at https://ssrn.com/abstract=4477368, and this is the executive summary:

The white paper ‘A pro-innovation approach to AI regulation’ (the ‘AI WP’) claims to advance a ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ model that leverages the capabilities and skills of existing regulators to foster AI innovation. This model, we are told, would be underpinned by a set of principles providing a clear, unified, and flexible framework improving upon the current ‘complex patchwork of legal requirements’ and striking ‘the right balance between responding to risks and maximising opportunities.’

In this submission, we challenge such claims in the AI WP. We argue that:

  • The AI WP does not advance a balanced and proportionate approach to AI regulation but, rather, an “innovation first” approach that caters to industry and sidelines the public. The AI WP primarily serves a digital industrial policy goal ‘to make the UK one of the top places in the world to build foundational AI companies’. The public interest is downgraded, and building public trust is approached instrumentally, as a mechanism to promote AI uptake. Such an approach risks breaching the UK’s international obligations to create a legal framework that effectively protects fundamental rights in the face of AI risks. Additionally, in the context of public administration, poorly regulated AI could breach due process rules, putting public funds at risk.

  • The AI WP does not embrace an agile regulatory approach, but active deregulation. The AI WP stresses that the UK ‘must act quickly to remove existing barriers to innovation’ without explaining why any of the existing safeguards are no longer required in view of the identified heightened AI risks. Coupled with the “innovation first” mandate, this deregulatory approach risks eroding regulatory independence and the effectiveness of the very regulatory regimes the AI WP claims to seek to leverage. A more nuanced regulatory approach that builds on, rather than threatens, regulatory independence is required.

  • The AI WP builds on shaky foundations, including the absence of a mapping of current regulatory remits and powers. This makes it near impossible to assess the effectiveness and comprehensiveness of the proposed approach, although there are clear indications that regulatory gaps will remain. The AI WP also presumes continuity in the legal framework, which ignores reforms currently promoted by the Government, as well as repeatedly floated further reforms of the overarching legal regime. It seems clear that some regulatory regimes will soon see their scope or stringency limited. The AI WP does not provide clear mechanisms to address these issues, which undermines its core claim that leveraging existing regulatory regimes suffices to address potential AI harms. This is perhaps particularly evident in the context of AI use for policing, which is affected both by the existence of regulatory gaps and by limitations in existing legal safeguards.

  • The AI WP does not describe a full, workable regulatory model. The lack of detail on the institutional design to support the central function is a crucial omission. Important tasks are assigned to this central function without clarifying its institutional embedding, resourcing, or accountability mechanisms.

  • The AI WP foresees a government-dominated approach that further risks eroding regulatory independence, in particular given the “innovation first” criteria to be used in assessing the effectiveness of the proposed regime.

  • The principles-based approach to AI regulation suggested in the AI WP is undeliverable, due to the lack of detail on the meaning and regulatory implications of the principles, barriers to their translation into enforceable requirements, and tensions with existing regulatory frameworks. The minimalistic legislative intervention entertained in the AI WP would not equip regulators to effectively enforce the general principles. Following the AI WP would also result in regulatory fragmentation and uncertainty, and would not resolve the identified problem of a ‘complex patchwork of legal requirements’.

  • The AI WP does not provide any route towards sufficiently addressing the digital capabilities gap, or towards mitigating new risks to capabilities, such as deskilling, which significantly constrain the likely effectiveness of the proposed approach.

Full citation: A Charlesworth, K Fotheringham, C Gavaghan, A Sanchez-Graells and C Torrible, ‘Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"’ (June 19, 2023). Available at SSRN: https://ssrn.com/abstract=4477368.