Some thoughts on the US' Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On 30 October 2023, President Biden adopted the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the ‘AI Executive Order’, see also its Factsheet). The use of AI by the US Federal Government is an important focus of the AI Executive Order. It will be subject to a new governance regime detailed in the Draft Policy on the use of AI in the Federal Government (the ‘Draft AI in Government Policy’, see also its Factsheet), which is open for comment until 5 December 2023. Here, I reflect on these documents from the perspective of AI procurement as a major plank of this governance reform.

Procurement in the AI Executive Order

Section 2 of the AI Executive Order formulates eight guiding principles and priorities in advancing and governing the development and use of AI. Section 2(g) refers to AI risk management, and states that

It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans. These efforts start with people, our Nation’s greatest asset. My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines — including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields — and ease AI professionals’ path into the Federal Government to help harness and govern AI. The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used.

Section 10 then establishes specific measures to advance Federal Government use of AI. Section 10.1(b) details a set of governance reforms to be implemented through guidance to be issued by the Director of the Office of Management and Budget (OMB) to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. Section 10.1(b) includes the following (emphases added):

The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law:

(i) the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI …;

(ii) the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures;

(iii) for [covered] agencies […], the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency;

(iv) required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI;

(v) specific Federal Government uses of AI that are presumed by default to impact rights or safety;

(vi) recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes;

(vii) requirements that [covered] agencies […] develop AI strategies and pursue high-impact AI use cases;

(viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:

(A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;

(B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;

(C) reasonable steps to watermark or otherwise label output from generative AI;

(D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;

(E) independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings;

(F) documentation and oversight of procured AI;

(G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;

(H) provision of incentives for the continuous improvement of procured AI; and

(I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and

(ix) requirements for public reporting on compliance with this guidance.

Section 10.1(b) of the AI Executive Order establishes two sets of requirements.

First, there are internal governance requirements, which revolve around the appointment of Chief Artificial Intelligence Officers (CAIOs) and AI Governance Boards, their roles, and their support structures. This set of requirements seeks to strengthen the ability of Federal Agencies to understand AI and to provide effective safeguards in its governmental use. The crucial set of substantive protections from this internal perspective derives from the required minimum risk-management practices for Government uses of AI, which are placed directly under the responsibility of the relevant CAIO.

Second, there are external (or relational) governance requirements, which revolve around the agency’s ability to control and challenge tech providers. This involves the (back-to-back) transfer of minimum risk-management practices to AI contractors, but also includes commercial considerations. The tone of the Executive Order indicates that this set of requirements is meant to neutralise risks of commercial capture and commercial determination by imposing oversight and external verification. From an AI procurement governance perspective, the requirements in Section 10.1(b)(viii) are particularly relevant. As some of those requirements will need further development with a view to their operationalisation, Section 10.1(d)(ii) of the AI Executive Order requires the Director of OMB to develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with its Section 10.1(b) guidance.

Procurement in the Draft AI in Government Policy

The guidance required by Section 10.1(b) of the AI Executive Order has been formulated in the Draft AI in Government Policy, which offers more detail on the relevant governance mechanisms and the requirements for AI procurement. Section 5 on managing risks from the use of AI is particularly relevant from an AI procurement perspective. While Section 5(d) refers explicitly to managing risks in AI procurement, given that the primary substantive obligations will arise from the need to comply with the required minimum risk-management practices for Government uses of AI, this specific guidance needs to be read in the broader context of AI risk-management within Section 5 of the Draft AI in Government Policy.

Scope

The Draft AI in Government Policy relies on a tiered approach to AI risk by imposing specific obligations in relation to safety-impacting and rights-impacting AI only. This is an important element of the policy because these two categories are defined (in Section 6) and in principle will cover pre-established lists of AI use, based on a set of presumptions (Section 5(b)(i) and (ii)). However, CAIOs will be able to waive the application of minimum requirements for specific AI uses where, ‘based upon a system-specific risk assessment, [it is shown] that fulfilling the requirement would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations’ (Section 5(c)(iii)). Therefore, these are not closed lists and the specific scope of coverage of the policy will vary with such determinations. There are also some exclusions from minimum requirements where the AI is used for narrow purposes (Section 5(c)(i))—notably the ‘Evaluation of a potential vendor, commercial capability, or freely available AI capability that is not otherwise used in agency operations, solely for the purpose of making a procurement or acquisition decision’; AI evaluation in the context of regulatory enforcement, law enforcement or national security action; or research and development.

The scope of the policy may be under-inclusive, or generate risks of under-inclusiveness at the boundary, in two respects. First, the way AI is defined for the purposes of the Draft AI in Government Policy excludes ‘robotic process automation or other systems whose behavior is defined only by human-defined rules or that learn solely by repeating an observed practice exactly as it was conducted’ (Section 6). This could be under-inclusive to the extent that the minimum risk-management practices for Government uses of AI create requirements that are not otherwise applicable to Government use of (non-AI) algorithms. There is a commonality of risks (eg discrimination, data governance risks) that would be better managed if there were a joined-up approach. Moreover, developing minimum practices in relation to those means of automation would serve to develop institutional capability that could then support the adoption of AI as defined in the policy. Second, the variability in coverage stemming from consideration of ‘unacceptable impediments to critical agency operations’ opens the door to potentially problematic waivers. While these are subject to disclosure and notification to OMB, it is not entirely clear on what grounds OMB could challenge those waivers. This is thus an area where the guidance may require further development.
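To make the scope mechanics easier to follow, the sketch below sets out my reading of how the Section 5 scope determination would operate in practice. It is purely illustrative: the field and function names are my own, and the draft policy obviously does not prescribe any such logic.

```python
# Illustrative sketch only: my reading of the scope logic in Section 5 of the
# Draft AI in Government Policy. All names are invented for illustration and
# are not official terminology.

from dataclasses import dataclass


@dataclass
class AIUse:
    presumed_safety_impacting: bool   # on the presumptive list under Section 5(b)(i)
    presumed_rights_impacting: bool   # on the presumptive list under Section 5(b)(ii)
    narrow_purpose_exclusion: bool    # eg vendor evaluation solely for a procurement decision (Section 5(c)(i))
    caio_waiver_granted: bool         # waiver based on a system-specific risk assessment (Section 5(c)(iii))


def minimum_practices_apply(use: AIUse) -> bool:
    """Do the Section 5(c) minimum risk-management practices bind this AI use?"""
    if not (use.presumed_safety_impacting or use.presumed_rights_impacting):
        return False  # outside the tiered scope of the policy altogether
    if use.narrow_purpose_exclusion:
        return False  # excluded narrow purpose under Section 5(c)(i)
    if use.caio_waiver_granted:
        return False  # waived by the CAIO, subject to disclosure and notification to OMB
    return True
```

Laid out this way, the point becomes visible: the effective coverage of the policy is not fixed by the presumptive lists, but shrinks with every exclusion and, especially, every waiver determination.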

Extensions and waivers

In relation to covered safety-impacting or rights-impacting AI (as above), Section 5(a)(i) establishes the important principle that US Federal Government agencies have until 1 August 2024 to implement the minimum practices in Section 5(c), ‘or else stop using any AI that is not compliant with the minimum practices’. This type of sunset clause concerning the currently implicit authorisation for the use of AI is a potentially powerful mechanism. However, the Draft also establishes that such obligation to discontinue non-compliant AI use must be ‘consistent with the details and caveats in that section [5(c)]’, which includes the possibility, until 1 August 2024, for agencies to

request from OMB an extension of limited and defined duration for a particular use of AI that cannot feasibly meet the minimum requirements in this section by that date. The request must be accompanied by a detailed justification for why the agency cannot achieve compliance for the use case in question and what practices the agency has in place to mitigate the risks from noncompliance, as well as a plan for how the agency will come to implement the full set of required minimum practices from this section.

Again, the guidance does not detail on what grounds OMB would grant those extensions or how long they would be for. There is also a clear interaction between the extension and waiver mechanisms. For example, an agency that saw its request for an extension declined could try to waive that particular AI use—or agencies could simply try to waive AI uses rather than applying for extensions, as the requirements for a waiver seem to be rather different (and potentially less demanding) than those applicable to an extension. In that regard, it seems that waiver determinations are ‘all or nothing’, whereas the system could be more flexible (and protective) if waiver decisions not only needed to explain why meeting the minimum requirements would generate heightened overall risks or pose such ‘unacceptable impediments to critical agency operations’, but also had to meet the lower burden of mitigation currently expected in extension applications, namely a detailed justification of the practices the agency has in place to mitigate the risks from noncompliance where those risks can be partly mitigated. In other words, it would be preferable to have a more continuous spectrum of mitigation measures in the context of waivers as well.
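To make that suggestion concrete, the sketch below illustrates one possible design for a more graduated waiver test, under my own assumptions; none of the fields, labels or outcomes appear in the draft policy.

```python
# Illustrative sketch of a more graduated waiver mechanism, as suggested above.
# Everything here (fields, labels, outcomes) is my own assumption about a
# possible design; none of it appears in the Draft AI in Government Policy.

from dataclasses import dataclass


@dataclass
class WaiverRequest:
    heightened_overall_risk_shown: bool        # compliance would increase risks to safety or rights overall
    unacceptable_operational_impediment: bool  # compliance would unacceptably impede critical agency operations
    partially_mitigable_practices: list[str]   # minimum practices the agency could still apply, at least in part
    mitigation_justification_provided: bool    # the extension-style justification of mitigating practices in place


def assess_waiver(request: WaiverRequest) -> str:
    """Grant a waiver only where a Section 5(c)(iii) ground is made out AND partial mitigation is documented."""
    if not (request.heightened_overall_risk_shown or request.unacceptable_operational_impediment):
        return "refuse: no valid waiver ground under Section 5(c)(iii)"
    if request.partially_mitigable_practices and not request.mitigation_justification_provided:
        return "refuse: require the extension-style justification of partial mitigation first"
    return "grant: waiver limited to the requirements that cannot be even partly met"
```

The design choice this tries to surface is the one argued for above: rather than an all-or-nothing determination, the waiver would only relieve the agency of the requirements it genuinely cannot meet, while retaining the extension-style burden of documenting partial mitigation.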

General minimum practices

In relation to both safety-impacting and rights-impacting AI uses, the Draft AI in Government Policy would require agencies to engage in risk management both before and while using AI.

Preventative measures include:

  • completing an AI Impact Assessment documenting the intended purpose of the AI and its expected benefit, the potential risks of using AI, and an analysis of the quality and appropriateness of the relevant data;

  • testing the AI for performance in a real-world context—that is, testing under conditions that ‘mirror as closely as possible the conditions in which the AI will be deployed’; and

  • independently evaluating the AI, with the particularly important requirement that ‘The independent reviewing authority must not have been directly involved in the system’s development.’ In my view, it would also be important for the independent reviewing authority not to be involved in the future use of the AI, as its (future) operational interest could also be a source of bias in the testing process and the analysis of its results.

In-use measures include:

  • conducting ongoing monitoring and establishing thresholds for periodic human review, with a focus on monitoring ‘degradation to the AI’s functionality and to detect changes in the AI’s impact on rights or safety’—‘human review, including renewed testing for performance of the AI in a real-world context, must be conducted at least annually, and after significant modifications to the AI or to the conditions or context in which the AI is used’;

  • mitigating emerging risks to rights and safety—crucially, ‘Where the AI’s risks to rights or safety exceed an acceptable level and where mitigation is not practicable, agencies must stop using the affected AI as soon as is practicable’. In that regard, the draft indicates that ‘Agencies are responsible for determining how to safely decommission AI that was already in use at the time of this memorandum’s release without significant disruptions to essential government functions’, but this is also a process that would benefit from close oversight by OMB, as leaving it entirely to agencies could otherwise jeopardise the effectiveness of the extension and waiver mechanisms discussed above—in which case additional detail in the guidance would be required (both in-use triggers are sketched schematically after this list);

  • ensuring adequate human training and assessment;

  • providing appropriate human consideration as part of decisions that pose a high risk to rights or safety; and

  • providing public notice and plain-language documentation through the AI use case inventory—however, this is subject to a large number of caveats (notice must be ‘consistent with applicable law and governmentwide guidance, including those concerning protection of privacy and of sensitive law enforcement, national security, and other protected information’) and more detailed guidance on how to assess these issues would be welcome (if it exists, a cross-reference in the draft policy would be helpful).
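For concreteness, the following minimal sketch restates the two hard in-use triggers described above (the at-least-annual human review and the duty to stop using AI whose risks cannot be acceptably mitigated) as simple decision functions. The function names and parameters are my own illustrative assumptions, not terminology from the draft policy.

```python
# Illustrative sketch only: my reading of the in-use monitoring triggers in the
# Draft AI in Government Policy. Names and thresholds are my own assumptions.

from datetime import date, timedelta


def human_review_due(last_review: date, significantly_modified: bool, today: date) -> bool:
    """Renewed real-world testing and human review at least annually, or after significant modifications."""
    return significantly_modified or (today - last_review) > timedelta(days=365)


def must_stop_use(risk_exceeds_acceptable_level: bool, mitigation_practicable: bool) -> bool:
    """Stop using the affected AI where risks exceed an acceptable level and mitigation is not practicable."""
    return risk_exceeds_acceptable_level and not mitigation_practicable
```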

Additional minimum practices for rights-impacting AI

In relation to rights-impacting AI only, the Draft AI in Government Policy would require agencies to take additional measures.

Preventative measures include:

  • taking steps to ensure that the AI will advance equity, dignity, and fairness—including proactively identifying and removing factors contributing to algorithmic discrimination or bias; assessing and mitigating disparate impacts; and using representative data; and

  • consulting and incorporating feedback from affected groups.

In-use measures include:

  • conducting ongoing monitoring and mitigation for AI-enabled discrimination;

  • notifying negatively affected individuals—this is an area where the draft guidance is rather woolly, as it includes a set of complex caveats: individual notice that ‘AI meaningfully influences the outcome of decisions specifically concerning them, such as the denial of benefits’ must only be given ‘[w]here practicable and consistent with applicable law and governmentwide guidance’. Moreover, the draft only indicates that ‘Agencies are also strongly encouraged to provide explanations for such decisions and actions’, but does not require them to do so. In my view, this touches on two of the most important implications for individuals in Government use of AI: the possibility to understand why decisions are made (reason-giving duties) and the burden of challenging automated decisions, which is increased if there is a lack of transparency on the automation. Therefore, on this point, the guidance seems too tepid—especially bearing in mind that this requirement only applies to ‘AI whose output serves as a basis for decision or action that has a legal, material, or similarly significant effect on an individual’s’ civil rights, civil liberties, or privacy; equal opportunities; or access to critical resources or services. In these cases, it seems clear that notice and explainability requirements need to go further.

  • maintaining human consideration and remedy processes—including ‘potential remedy to the use of the AI by a fallback and escalation system in the event that an impacted individual would like to appeal or contest the AI’s negative impacts on them. In developing appropriate remedies, agencies should follow OMB guidance on calculating administrative burden and the remedy process should not place unnecessary burden on the impacted individual. When law or governmentwide guidance precludes disclosure of the use of AI or an opportunity for an individual appeal, agencies must create appropriate mechanisms for human oversight of rights-impacting AI’. This is another crucial area, concerning the right not to be subjected to fully-automated decision-making where there is no meaningful remedy. This is also an area of the guidance that requires more detail, especially as to the adequate balance of burdens where, eg, the agency can automate the undoing of negative effects on individuals identified as a result of challenges by other individuals or in the context of the broader monitoring of the functioning and effects of the rights-impacting AI. In my view, this would be an opportunity to mandate automation of remediation in a meaningful way.

  • maintaining options to opt-out where practicable.

Procurement-related practices

In addition to the need for agencies to be able to meet the above requirements in relation to procured AI—which will in itself create the need to cascade some of the requirements down to contractors, and which will be the object of future guidance on how to ensure that AI contracts align with the requirements—the Draft AI in Government Policy also requires that agencies procuring AI manage risks by:

  • aligning to National Values and Law by ensuring ‘that procured AI exhibits due respect for our Nation’s values, is consistent with the Constitution, and complies with all other applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties’;

  • taking ‘steps to ensure transparency and adequate performance for their procured AI, including by: obtaining adequate documentation of procured AI, such as through the use of model, data, and system cards; regularly evaluating AI-performance claims made by Federal contractors, including in the particular environment where the agency expects to deploy the capability; and considering contracting provisions that incentivize the continuous improvement of procured AI’;

  • taking ‘appropriate steps to ensure that Federal AI procurement practices promote opportunities for competition among contractors and do not improperly entrench incumbents. Such steps may include promoting interoperability and ensuring that vendors do not inappropriately favor their own products at the expense of competitors’ offering’;

  • maximizing the value of data for AI; and

  • responsibly procuring Generative AI.

These high-level requirements are well targeted and compliance with them would go a long way towards fostering ‘responsible AI procurement’ through adequate risk mitigation, in ways that still allow the procurement mechanism to harness market forces to generate value for money.

However, operationalising these requirements will be complex and the further OMB guidance should be rather detailed and practical.

Final thoughts

In my view, the AI Executive Order and the Draft AI in Government Policy lay the foundations for a significant strengthening of the governance of AI procurement with a view to embedding safeguards in public sector AI use. A crucially important characteristic in the design of these governance mechanisms is that they impose significant duties on the agencies seeking to procure and use the AI, and explicitly seek to address risks of commercial capture and commercial determination. Another crucially important characteristic is that, at least in principle, the use of AI is made conditional on compliance with a rather comprehensive set of preventative and in-use risk mitigation measures. The general aspects of this governance approach thus offer a very valuable blueprint for other jurisdictions considering how to boost AI procurement governance.

However, as always, the devil is in the details. One of the crucial risks in this approach to AI governance concerns a lack of independence of the entities making the relevant assessments. In the Draft AI in Government Policy, there are some risks of under-inclusion and/or excessive waivers of compliance with the relevant requirements (both explicit and implicit, through protracted processes of decommissioning of non-compliant AI), as well as a risk that ‘practical considerations’ will push compliance with the risk mitigation requirements well past the (ambitious) 1 August 2024 deadline through long or rolling extensions.

To mitigate these risks, the guidance should be much clearer on the role of OMB in extension, waiver and decommissioning decisions, as well as in relation to the specific criteria and limits that should form part of those decisions. Only by ensuring adequate OMB intervention can a system of governance that still does not entirely (organisationally) separate procurement, use and oversight decisions reach the levels of independent verification required to neutralise not only commercial determination, but also operational dependency and the ‘policy irresistibility’ of digital technologies.

UK's 'pro-innovation approach' to AI regulation won't do, particularly for public sector digitalisation

Regulating artificial intelligence (AI) has become the challenge of the time. This is a crucial area of regulatory development and there are increasing calls—including from those driving the development of AI—for robust regulatory and governance systems. In this context, more details have now emerged on the UK’s approach to AI regulation.

Swimming against the tide, and seeking to diverge from the EU’s regulatory agenda and the EU AI Act, the UK announced a light-touch ‘pro-innovation approach’ in its July 2022 AI regulation policy paper. In March 2023, the same approach was supported by a Report of the Government Chief Scientific Adviser (the ‘GCSA Report’), and is now further developed in the White Paper ‘AI regulation: a pro-innovation approach’ (the ‘AI WP’). The UK Government has launched a public consultation that will run until 21 June 2023.

Given the relevance of the issue, it can be expected that the public consultation will attract a large volume of submissions, and that the ‘pro-innovation approach’ will be heavily criticised. Indeed, there is an on-going preparatory Parliamentary Inquiry on the Governance of AI that has already collected a wealth of evidence exploring the pros and cons of the regulatory approach outlined there. Moreover, initial reactions eg by the Public Law Project, the Ada Lovelace Institute, or the Royal Statistical Society have been (to different degrees) critical of the lack of regulatory ambition in the AI WP—while, as could be expected, think tanks closely linked to the development of the policy, such as the Alan Turing Institute, have expressed more positive views.

Whether the regulatory approach will shift as a result of the expected pushback is unclear. However, given that the AI WP follows the same deregulatory approach first suggested in 2018 and is strongly entrenched politically and in policy terms—for the UK Government has self-assessed this approach as ‘world leading’ and claims it will ‘turbocharge economic growth’—it is doubtful that much will change as a result of the public consultation.

That does not mean we should not engage with the public consultation; quite the opposite. In the face of the UK Government’s dereliction of duty, or lack of ideas, it is more important than ever that there is robust pushback against the deregulatory approach being pursued. This is especially so in the context of public sector digitalisation and the adoption of AI by the public administration and in the provision of public services, where the Government (unsurprisingly) is unwilling to create regulatory safeguards to protect citizens from its own action.

In this blogpost, I sketch my main areas of concern with the ‘pro-innovation approach’ in the GCSA Report and AI WP, which I will further develop for submission to the public consultation, building on earlier views. Feedback and comments would be gratefully received: a.sanchez-graells@bristol.ac.uk.

The ‘pro-innovation approach’ in the GCSA Report — squaring the circle?

In addition to proposals on the intellectual property (IP) regulation of generative AI, the opening up of public sector data, transport-related, or cyber security interventions, the GCSA Report focuses on ‘core’ regulatory and governance issues. The report stresses that regulatory fragmentation is one of the key challenges, as is the difficulty for the public sector in ‘attracting and retaining individuals with relevant skills and talent in a competitive environment with the private sector, especially those with expertise in AI, data analytics, and responsible data governance’ (at 5). The report also further hints at the need to boost public sector digital capabilities by stressing that ‘the government and regulators should rapidly build capability and know-how to enable them to positively shape regulatory frameworks at the right time’ (at 13).

Although the rationale is not very clearly stated, to bridge regulatory fragmentation and facilitate the pooling of digital capabilities from across existing regulators, the report makes a central proposal to create a multi-regulator AI sandbox (at 6-8). The report suggests that it could be convened by the Digital Regulatory Cooperation Forum (DRCF)—which brings together four key regulators (the Information Commissioner’s Office (ICO), Office of Communications (Ofcom), the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA))—and that DRCF should look at ways of ‘bringing in other relevant regulators to encourage join up’ (at 7).

The report recommends that the AI sandbox should operate on the basis of a ‘commitment from the participant regulators to make joined-up decisions on regulations or licences at the end of each sandbox process and a clear feedback loop to inform the design or reform of regulatory frameworks based on the insights gathered. Regulators should also collaborate with standards bodies to consider where standards could act as an alternative or underpin outcome-focused regulation’ (at 7).

Therefore, the AI sandbox would not only be multi-regulator, but would also encompass (in some way) standard-setting bodies (presumably UK ones only, though), without any consideration of the issues of public-private interaction in decision-making that implies the exercise of regulatory public powers, or of regulatory capture and the risks of commercial determination. The report in general is extremely industry-orientated, eg in stressing in relation to the overarching pacing problem that ‘for emerging digital technologies, the industry view is clear: there is a greater risk from regulating too early’ (at 5), without this being in any way balanced with clear (non-industry) views that the biggest risk is actually in regulating too late and that we are collectively frog-boiling into a ‘runaway AI’ fiasco.

Moreover, confusingly, despite the fact that the sandbox would be hosted by DRCF (of which the ICO is a leading member), the GCSA Report indicates that the AI sandbox ‘could link closely with the ICO sandbox on personal data applications’ (at 8). The fact that the report is itself unclear as to whether eg AI applications with data protection implications should be subjected to one or two sandboxes, or the extent to which the general AI sandbox would need to be integrated with sectoral sandboxes for non-AI regulatory experimentation, already indicates the complexity and dubious practical viability of the suggested approach.

It is also unclear why multiple sector regulators should be involved in any given iteration of a single AI sandbox where there may be no projects within their regulatory remit and expertise. The alternative approach of having an open or rolling AI sandbox mechanism led by a single AI authority, which would then draw expertise and work in collaboration with the relevant sector regulator as appropriate on a per-project basis, seems preferable. While some DRCF members could be expected to have to participate in a majority of sandbox projects (eg CMA and ICO), others would probably have a much less constant presence (eg Ofcom, or certainly the FCA).

Remarkably, despite this recognition of the functional need for a centralised regulatory approach and a single point of contact (primarily for industry’s convenience), the GCSA Report implicitly supports the 2022 AI regulation policy paper’s approach of not creating an overarching cross-sectoral AI regulator. The GCSA Report tries instead to create a ‘non-institutionalised centralised regulatory function’, nested under DRCF. In practice, however, implementing the recommendation for a single AI sandbox would create the need for the further development of the governance structures of the DRCF (especially if it were to grow by including many other sectoral regulators), or of whichever institution ‘hosted’ the sandbox, or else risk creating a non-institutional AI regulator with the related difficulties in ensuring accountability. This would add a layer of deregulation to the deregulatory effect that the sandbox itself creates (see eg Ranchordas (2021)).

The GCSA Report seems to try to square the circle of regulatory fragmentation by relying on cooperation as a centralising regulatory device, but it does this solely for the industry’s benefit and convenience, without paying any consideration to the future effectiveness of the regulatory framework. This is hard to understand, given the report’s identification of conflicting regulatory constraints, or in its terminology ‘incentives’: ‘The rewards for regulators to take risks and authorise new and innovative products and applications are not clear-cut, and regulators report that they can struggle to trade off the different objectives covered by their mandates. This can include delivery against safety, competition objectives, or consumer and environmental protection, and can lead to regulator behaviour and decisions that prioritise further minimising risk over supporting innovation and investment. There needs to be an appropriate balance between the assessment of risk and benefit’ (at 5).

This not only frames risk-minimisation as a negative regulatory outcome (and further feeds into the narrative that precautionary regulatory approaches are somehow not legitimate because they run against industry goals—which deserves strong pushback, see eg Kaminski (2022)), but also shows a main gap in the report’s proposal for the single AI sandbox. If each regulator has conflicting constraints, what evidence (if any) is there that collaborative decision-making will reduce, rather than exacerbate, such regulatory clashes? Are decisions meant to be arrived at by majority voting or in any other way expected to deactivate (some or most) regulatory requirements in view of (perceived) gains in relation to other regulatory goals? Why has there been no consideration of eg the problems encountered by concurrency mechanisms in the application of sectoral and competition rules (see eg Dunne (2014), (2020) and (2021)), as an obvious and immediate precedent of the same type of regulatory coordination problems?

The GCSA Report also seems to assume that collaboration through the AI sandbox would be resource neutral for participating regulators, whereas it seems reasonable to presume that this additional layer of regulation (even if not institutionalised) would require further resources. And, in any case, there does not seem to be much consideration of the viability of asking resource-strapped regulators to create an AI sandbox where they can (easily) be out-skilled and over-powered by industry participants.

In my view, the GCSA Report already points at significant weaknesses arising from the resistance to creating any new authorities despite the obvious functional need for centralised regulation, which is one of the main weaknesses (if not the single biggest weakness) in the AI WP—as well as from the lack of strategic planning around public sector digital capabilities, despite well-recognised challenges (see eg Committee of Public Accounts (2021)).

The ‘pro-innovation approach’ in the AI WP — a regulatory black hole, privatisation of AI regulation, or both?

The AI WP envisages an ‘innovative approach to AI regulation [that] uses a principles-based framework for regulators to interpret and apply to AI within their remits’ (para 36). It expects the framework to be ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ (para 37). As will become clear, however, such an ‘innovative approach’ solely amounts to the formulation of high-level, broad, open-textured and incommensurable principles to inform a soft law push for the development of regulatory practices aligned with those principles in a highly fragmented and incomplete regulatory landscape.

The regulatory framework would be built on four planks (para 38): [i] an AI definition (paras 39-42); [ii] a context-specific approach (ie a ‘use-based’ approach, rather than a ‘technology-led’ approach, see paras 45-47); [iii] a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (paras 48-54); and [iv] new central functions to support regulators in delivering the AI regulatory framework (paras 70-73). In reality, though, there will be only two ‘pillars’ of the regulatory framework, and they do not involve any new institutions or rules. The AI WP vision thus largely seems to be that AI can be regulated in the UK in a world-leading manner without doing anything much at all.

AI Definition

The UK’s definition of AI will trigger substantive discussions, especially as it seeks to build it around ‘the two characteristics that generate the need for a bespoke regulatory response’: ‘adaptivity’ and ‘autonomy’ (para 39). Discussing the definitional issue is beyond the scope of this post but, on the specific identification of the ‘autonomy’ of AI, it is worth highlighting that this is an arguably flawed regulatory approach to AI (see Soh (2023)).

No new institutions

The AI WP makes clear that the UK Government has no plans to create any new AI regulator, whether with a cross-sectoral remit (eg a general AI authority) or a sectoral one (eg an ‘AI in the public sector authority’, as I advocate). The Ministerial Foreword to the AI WP already stresses that ‘[t]o ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’ (at p2). The AI WP further stresses that ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators’ (para 47). This, however, seems to presume that a new cross-sector AI regulator would be unable to coordinate with existing regulators, despite the institutional architecture of the regulatory framework foreseen in the AI WP relying entirely on inter-regulator collaboration (!).

No new rules

There will also not be new legislation underpinning regulatory activity, although the Government claims that the AI WP, ‘alongside empowering regulators to take a lead, [is] also setting expectations’ (at p3). The AI WP claims to develop a regulatory framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: [i] Safety, security and robustness; [ii] Appropriate transparency and explainability; [iii] Fairness; [iv] Accountability and governance; and [v] Contestability and redress (para 10). However, they will not be put on a statutory footing (initially); ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’ (para 11). While there is some detail on the intended meaning of these principles (see para 52 and Annex A), the principles necessarily lack precision and, worse, there is a conflation of the principles with other (existing) regulatory requirements.

For example, it is surprising that the AI WP describes fairness as implying that ‘AI systems should (sic) not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes’ (emphasis added), and stresses the expectation ‘that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation’ (para 52). This encapsulates the risk that principles-based AI regulation ends up eroding compliance with and enforcement of current statutory obligations. A principle of AI fairness cannot modify or exclude existing legal obligations, and it should not risk doing so either.

Moreover, the AI WP suggests that, even if the principles are supported by a statutory duty for regulators to have regard to them, ‘while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle’ (para 58). This conflates two issues. On the one hand, the need for activity subjected to regulatory supervision to comply with all principles and, on the other, the need for a regulator to take corrective action in relation to any of the principles. It should be clear that regulators have a duty to ensure that all principles are complied with in their regulatory remit, which does not seem to entirely or clearly follow from the weaker duty to have due regard to the principles.

Perpetuating regulatory gaps, in particular regarding public sector digitalisation

As a consequence of the decision not to create new regulators and the absence of new legislation, it is unclear whether the ‘regulatory strategy’ in the AI WP will have any real-world effects within existing regulatory frameworks, especially as the most ambitious intervention is to create ‘a statutory duty on regulators requiring them to have due regard to the principles’ (para 12)—but the Government may decide not to introduce it if ‘monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary’ (para 59).

However, what is already clear is that there is no new AI regulation on the horizon, despite the fact that the AI WP recognises that ‘some AI risks arise across, or in the gaps between, existing regulatory remits’ (para 27), that ‘there may be AI-related risks that do not clearly fall within the remits of the UK’s existing regulators’ (para 64), and the obvious and worrying existence of high risks to fundamental rights and values (para 4 and paras 22-25). The AI WP is naïve, to say the least, in setting out that ‘[w]here prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators’ remits, updates to the Regulators’ Code, or additional legislative intervention’ (para 65).

Hoping that such risk identification and gap analysis will take place without assigning specific responsibility for it—and seeking to exempt the Government from such responsibility—seems a bit too much to ask. In fact, this is at odds with the graphic depiction of how the AI WP expects the system to operate. As noted in (1) in the graph below, the identification of cross-cutting or new (unregulated) risks that warrant intervention is assigned to a ‘central risk function’ (more below), not to the regulators. Importantly, the AI WP indicates that such central function ‘will be provided from within government’ (para 15 and below). This then raises two questions: (a) who, if anyone, will have the responsibility to proactively screen for such risks, and (b) how has the Government not already taken action to close the gaps it recognises exist in the current legal landscape?

AI WP Figure 2: Central risks function activities.

This perpetuates the current regulatory gaps, in particular in sectors without a regulator or with regulators with very narrow mandates—such as the public sector and, to a large extent, public services. Importantly, this approach does not create any prohibition of impermissible AI uses, nor does it set any (workable) minimum requirements for the deployment of AI in high-risk uses, especially in the public sector. The contrast with the EU AI Act could not be starker and, in this aspect in particular, UK citizens should be very worried that the UK Government is not committing to any safeguards in the way technology can be used in eg determining access to public services, or by the law enforcement and judicial system. More generally, it is very worrying that the AI WP does not foresee any safeguards in relation to the quickly accelerating digitalisation of the public sector.

Loose central coordination leading to AI regulation privatisation

Remarkably, and in a functional disconnect similar to that of the GCSA Report (above), the decision not to create any new regulator/s (para 15) is taken in the same breath as the AI WP recognises that the small coordination layer within the regulatory architecture proposed in the 2022 AI regulation policy paper (ie, largely, the approach underpinning the DRCF) has been heavily criticised (para 13). The AI WP recognises that ‘the DRCF was not created to support the delivery of all the functions we have identified or the implementation of our proposed regulatory framework for AI’ (para 74).

The AI WP also stresses how ‘[w]hile some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators. There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty’ (para 29), which points at a strong functional need for a centralised approach to AI regulation.

To try to mitigate those regulatory risks and shortcomings, the AI WP proposes the creation of ‘a number of central support functions’, such as [i] a central monitoring function of the overall regulatory framework’s effectiveness and the implementation of the principles; [ii] central risk monitoring and assessment; [iii] horizon scanning; [iv] supporting testbeds and sandboxes; [v] advocacy, education and awareness-raising initiatives; or [vi] promoting interoperability with international regulatory frameworks (para 14, see also para 73). Cryptically, the AI WP indicates that ‘central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy’ (para 15). Quite how this can be effectively done outwith a clearly defined, adequately resourced and durable institutional framework is anybody’s guess. In fact, the AI WP recognises that this approach ‘needs to evolve’ and that Government needs to understand how ‘existing regulatory forums could be expanded to include the full range of regulators’, what ‘additional expertise government may need’, and the ‘most effective way to convene input from across industry and consumers to ensure a broad range of opinions’ (para 77).

While the creation of a regulator seems a rather obvious answer to all these questions, the AI WP has rejected it in unequivocal terms. Is the AI WP a U-turn waiting to happen? Is the mention that ‘[a]s we enter a new phase we will review the role of the AI Council and consider how best to engage expertise to support the implementation of the regulatory framework’ (para 78) a placeholder for an imminent project to rejig the AI Council and turn it into an AI regulator? What is the place and role of the Office for AI and the Centre for Data Ethics and Innovation in all this?

Moreover, the AI WP indicates that the ‘proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools’ (para 16). Relatedly, the AI WP relies on those mechanisms to avoid addressing issues of accountability across the AI life cycle, indicating that ‘[t]ools for trustworthy AI like assurance techniques and technical standards can support supply chain risk management. These tools can also drive the uptake and adoption of AI by building justified trust in these systems, giving users confidence that key AI-related risks have been identified, addressed and mitigated across the supply chain’ (para 84). Those tools are discussed in much more detail in part 4 of the AI WP (paras 106 ff). Annex A also creates a backdoor for technical standards to become the direct operationalisation of the general principles on which the regulatory framework is based, by explicitly identifying standards that regulators may want to consider ‘to clarify regulatory guidance and support the implementation of risk treatment measures’.

This approach to the offloading of tricky regulatory issues to the emergence of private-sector led standards is simply an exercise in the transfer of regulatory power to those setting such standards, guidance and assurance techniques and, ultimately, a privatisation of AI regulation.

A different approach to sandboxes and testbeds?

The Government will take forward the GCSA recommendation to establish a regulatory sandbox for AI, which ‘will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies and refine this interaction where necessary’ (p2). This is thus bound to hardwire some of the issues mentioned above in relation to the GCSA proposal, and is reflective of the general pro-industry approach of the AI WP, which is obvious in the framing that regulators are expected to ‘support innovators directly and help them get their products to market’. Industrial policy seems to be shoehorned and mainstreamed across all areas of regulatory activity, at least in relation to AI (but it can then easily bleed into non-AI-related regulatory activities).

While the AI WP indicates the commitment to implement the AI sandbox recommended in the GCSA Report, it is by no means clear that the implementation will follow the approach proposed in that report (ie a multi-regulator sandbox nested under DRCF, with an expectation that it would develop a crucial coordination and regulatory centralisation effect). The AI WP indicates that the Government still has to explore ‘what service focus would be most useful to industry’ in relation to AI sandboxes (para 96), but it sets out the intention to ‘focus an initial pilot on a single sector, multiple regulator sandbox’ (para 97), which diverges from the GCSA Report’s approach of a sandbox for ‘multiple sectors, multiple regulators’. While the public consultation intends to gather feedback on which industry sector is the most appropriate, I would bet that the financial services sector will be chosen and that the ‘regulatory innovation’ will simply result in some closer cooperation between the ICO and FCA.

Regulator capabilities — AI regulation on a shoestring?

The AI WP turns to the issue of regulator capabilities and stresses that ‘While our approach does not currently involve or anticipate extending any regulator’s remit, regulating AI uses effectively will require many of our regulators to acquire new skills and expertise’ (para 102), and that the Government has ‘identified potential capability gaps among many, but not all, regulators’ (para 103).

To try to (start to) address this fundamental issue in the context of a devolved and decentralised regulatory framework, the AI WP indicates that the Government will explore, for example, whether it is ‘appropriate to establish a common pool of expertise that could establish best practice for supporting innovation through regulatory approaches and make it easier for regulators to work with each other on common issues. An alternative approach would be to explore and facilitate collaborative initiatives between regulators – including, where appropriate, further supporting existing initiatives such as the DRCF – to share skills and expertise’ (para 105).

While the creation of ‘common regulatory capacity’ has been advocated by the Alan Turing Institute, and while this (or inter-regulator secondments, for example) could be a short-term fix, it seems that this tries to address the obvious challenge of adequately resourcing regulatory bodies without a medium- and long-term strategy to build up the digital capability of the public sector, and would perpetuate the current approach of AI regulation on a shoestring. The governance and organisational implications arising from the creation of a common pool of expertise need careful consideration, in particular as some of the likely dysfunctionalities are only marginally smaller than those of the current over-reliance on external consultants, or of the ‘salami-slicing’ approach to regulatory and policy interventions that seems to bleed from the ‘agile’ management of technological projects into the realm of regulatory activity, which however requires institutional memory and the embedding of knowledge and expertise.