Final EU model contractual AI Clauses available -- some thoughts on regulatory tunnelling

Source: https://tinyurl.com/mrx9sbz8.

The European Commission has published the final version of the EU model contractual AI clauses, which have been ‘developed for pilot use in the procurement of AI with the aim to establish responsibilities for trustworthy, transparent, and accountable development of AI technologies between the supplier and the public organisation.’

The model AI clauses have been developed by reference to the (future) obligations arising from the EU AI Act, currently in advanced stages of negotiation. This regulatory technique simply seeks to allow public buyers to ensure compliance with the EU AI Act by cascading the relevant obligations and requirements down to tech providers (largely on a back-to-back basis). By the same regulatory logic, this technique will be a conveyor belt for the shortcomings of the EU AI Act, which will be embedded in public contracts using the clauses. It is thus important to understand the shortcomings inherent in this approach and in the model AI clauses before assuming that their use will actually ensure the ‘trustworthy, transparent, and accountable development [and deployment] of AI technologies’. Much more is needed than mere reliance on the model AI clauses.

Two sets of model AI clauses

The EU AI Act will not be applicable to all types of AI use. Remarkably, most requirements will be limited to ‘high-risk AI uses’ as defined in its Article 6. This immediately translates into the generation of two sets of model AI clauses: one for ‘high-risk’ AI procurement, which embeds the requirements expected to arise from the EU AI Act once finalised, and another ‘light version’ for non-high-risk AI procurement, which would support the voluntary extension of some of those requirements to the procurement of AI for other uses, or even to the use of other types of algorithmic solutions not meeting the regulatory definition of AI.

A first observation is that the controversy surrounding the definition of ‘high-risk’ in the EU AI Act immediately carries over to the model AI clauses and to the choice of ‘demanding’ vs light version. While the original proposal of the EU AI Act contained a numerus clausus of high-risk uses (which was already arguably too limited, see here), the trilogue negotiations could well end up suppressing a pre-defined classification and leaving it to AI providers to (self)assess whether the use would be ‘high-risk’.

This has been heavily criticised in a recent open letter. If the final version of the EU AI Act ended up embedding such a self-assessment of what uses are bound to be high-risk, there would be clear risks of gaming of the self-assessment to avoid compliance with the heightened obligations under the Act (and it is unclear that the system of oversight and potential fines foreseen in the EU AI Act would suffice to prevent this). This would directly translate into a risk of gaming (or strategic opportunism) in the choice between ‘demanding’ vs light version of the model AI clauses by public buyers as well.

As things stand today, it seems that most procurement of AI will be subject to the light version of the model AI clauses, where contracting authorities will need to decide which clauses to use and which standards to refer to. Importantly, the light version does not include default options in relation to quality management, conformity assessments, corrective actions, inscription in an AI register, or compliance and audit (some of which are also optional under the ‘demanding’ model). This means that, unless public buyers are familiar with both sets of model AI clauses, taking the light version as a starting point already generates a risk of under-inclusiveness and under-regulation.

Limitations in the model AI clauses

The model AI clauses come with some additional ‘caveat emptor’ warnings. As the Commission has stressed in the press release accompanying the model AI clauses:

The EU model contractual AI clauses contain provisions specific to AI Systems and on matters covered by the proposed AI Act, thus excluding other obligations or requirements that may arise under relevant applicable legislation such as the General Data Protection Regulation. Furthermore, these EU model contractual AI clauses do not comprise a full contractual arrangement. They need to be customized to each specific contractual context. For example, EU model contractual AI clauses do not contain any conditions concerning intellectual property, acceptance, payment, delivery times, applicable law or liability. The EU model contractual AI clauses are drafted in such a way that they can be attached as a schedule to an agreement in which such matters have already been laid down.

This is an important warning: the model AI clauses’ sole remit links back to the EU AI Act and, in the case of the light version, covers it only partially.

The link between model AI clauses and standards

However, the most significant shortcoming of the model AI clauses is that, by design, they do not include any substantive or material constraints or requirements on the development and use of AI. All substantive obligations are meant to be incorporated by reference to the (harmonised) standards to be developed under the EU AI Act, other sets of standards or, more generally, the state-of-the-art. Plainly, there is no definition or requirement in the model AI clauses that establishes the meaning of eg trustworthiness—and there is thus no baseline safety net ensuring it. Similarly, most requirements are offloaded to (yet to emerge) standards or the technical and organisational measures devised by the parties. For example,

  • Obligations on record-keeping (Art 5 high-risk model) refer to capabilities conforming ‘to state of the art and, if available, recognised standards or common specifications. <Optional: add, if available, a specific standard>’.

  • Measures to ensure transparency (Art 6 high-risk model) are highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way that the operation of the AI System is sufficiently transparent to enable the Public Organisation to reasonably understand the system’s functioning’. Moreover, the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is left entirely undefined in the relevant Annex (E) — thus leaving the option open for referral to emerging transparency standards.

  • Measures on human oversight (Art 7 high-risk model) are also highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way, including with appropriate human-machine interface tools, that it can be effectively overseen by natural persons as proportionate to the risks associated with the system’. Although there is some useful description of what ‘human oversight’ should mean as a minimum (Art 7(2)), the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is also left entirely undefined in the relevant Annex (F) — thus leaving the option open for referral to emerging ‘human on the loop’ standards.

  • Measures on accuracy, robustness and cybersecurity (Art 8 high-risk model) follow the same pattern. Annexes G and H on levels of accuracy and on measures to ensure an appropriate level of robustness, safety and cybersecurity are also blank. While there can be mandatory obligations stemming from other sources of EU law (eg the NIS 2 Directive), only partial aspects of cybersecurity will be covered, and not in all cases.

  • Measures on the ‘explainability’ of the AI (Art 13 high-risk model) fall short of imposing an absolute requirement of intelligibility of the AI outputs, as the focus is on a technical explanation, rather than a contextual or intuitive explanation.

All in all, the model AI clauses are primarily an empty regulatory shell. Operationalising them will require reliance on (harmonised) standards—eg on transparency, human oversight, accuracy, explainability … — or, most likely (at least until such standards are in place) significant additional concretisation by the public buyer seeking to rely on the model AI clauses.

For the reasons identified in my previous research, I think this is likely to generate regulatory tunnelling and to give the upper hand to AI providers in making sure they can comfortably live with requirements in any specific contract. The regulatory tunnelling stems from the fact that all meaningful requirements and constraints are offloaded to the (harmonised) standards to be developed. And it is no secret that the governance of the standardisation process falls well short of ensuring that the resulting standards will embed high levels of protection of the desired regulatory goals — some of which are very hard to define in ways that can be translated into procurement or contractual requirements anyway.

Moreover, public buyers with limited capabilities will struggle to use the model AI clauses in ways that meaningfully ‘establish responsibilities for trustworthy, transparent, and accountable development [and deployment] of AI technologies’—other than in relation to those standards. My intuition is that the content of the all-too-relevant schedules in the model AI clauses will either simply refer to emerging standards or, where there is no standard or the standard is for whatever reason considered inadequate, be left for negotiation with tech providers or made part of the evaluation (eg tenderers will be required to detail how they propose to regulate eg accuracy). Whichever way this goes, this puts the public buyer in a position of rule-taker.

Only very few, well-resourced, highly skilled public buyers (if any) would be able to meaningfully flesh out a comprehensive set of requirements in the relevant annexes to give the model AI clauses sufficient bite. And they would not benefit much from the model AI clauses, as such sophisticated buyers would most likely have already come up with similar solutions of their own. Therefore, at best, the contribution of the model AI clauses is rather marginal and, at worst, it comes with a significant risk of regulatory complacency.

Final thoughts

Indeed, given all of this, it is clear that the model AI clauses generate a risk if (non-sophisticated/most) public buyers think that relying on them will deal with the many and complex challenges inherent in the acquisition of AI. And an even bigger risk if we collectively think that the existence of such model AI clauses is all the regulation of AI procurement we need. This is not a criticism of the clauses in themselves, but rather of the technique of ‘regulation by contract’ that underlies them and of the broader approach followed by the European Commission and other regulators (including the UK’s)!

I have demonstrated how this is a flawed regulatory strategy in my forthcoming book Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance (OUP) and in many working papers resulting from the project with the same title. In my view, we need to do a lot more if we want to make sure that the public sector only procures and uses trustworthy AI technologies. We need to create a regulatory system that assigns to an independent authority both the permissioning of the procurement of AI and the certification of the standards underpinning such procurement. In the absence of such regulatory developments, we cannot meaningfully claim that the procurement of AI will be in line with the values and goals to be expected from ‘responsible’ AI use.

I will further explore these issues in a public lecture, ‘Responsibly Buying Artificial Intelligence: A Regulatory Hallucination?’, on 23 November 2023 at UCL Faculty of Laws (hybrid attendance available). All welcome.

Source: https://public-buyers-community.ec.europa....

AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?

The recording and slides of the public lecture on ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ I gave at the University of Bristol Law School on 4 July 2023 are now available. As always, any further comments most warmly received at: a.sanchez-graells@bristol.ac.uk.

This lecture brought my research project to an end. I will now focus on finalising the manuscript and sending it off to the publisher, and then take a break for the rest of the summer. I will share details of the forthcoming monograph in a few months. I hope to restart blogging in September. In the meantime, I wish all HTCaN friends all the best. Albert

"Can Procurement Be Used to Effectively Regulate AI?" [recording]

The recording and slides for yesterday’s webinar on ‘Can Procurement Be Used to Effectively Regulate AI?’ co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme are now available for catch up if you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Dr Aris Georgopoulos (Nottingham), Elizabeth "Liz" Chirico (Acquisition Innovation Lead at Office of the Deputy Assistant Secretary of the Army - Procurement) and Scott Simpson (Digital Transformation Lead, Department of Homeland Security Office of the Chief Procurement Officer - Procurement Innovation Lab) for a really interesting discussion, and to all participants for their questions. Comments most welcome, as always.

Free registration open for two events on procurement and artificial intelligence

Registration is now open for two free events on procurement and artificial intelligence (AI).

First, a webinar where I will be participating in discussions on the role of procurement in contributing to the public sector’s acquisition of trustworthy AI, and the associated challenges, from an EU and US perspective.

Second, a public lecture where I will present the findings of my research project on digital technologies and public procurement.

Please scroll down for details and links to registration pages. All welcome!

1. ‘Can Procurement Be Used to Effectively Regulate AI?’ | Free online webinar
30 May 2023 2pm BST / 3pm CET-SAST / 9am EST (90 mins)
Co-organised by University of Bristol Law School and George Washington University Law School.

Artificial Intelligence (“AI”) regulation and governance is a global challenge that is starting to generate different responses in the EU, US, and other jurisdictions. Such responses are, however, rather tentative and politically contested. A full regulatory system will take time to crystallise and be fully operational. In the meantime, despite this regulatory gap, the public sector is quickly adopting AI solutions for a wide range of activities and public services.

This process of accelerated AI adoption by the public sector places procurement as the (involuntary) gatekeeper, tasked with ‘AI regulation by contract’, at least for now. The procurement function is expected to design tender procedures and contracts capable of attaining goals of AI regulation (such as trustworthiness, explainability, or compliance with data protection and human and fundamental rights) that are so far eluding more general regulation.

This webinar will provide an opportunity to take a hard look at the likely effectiveness of AI regulation by contract through procurement and its implications for the commercialisation of public governance, focusing on key issues such as:

  • The interaction between tender design, technical standards, and negotiations.

  • The challenges of designing, monitoring, and enforcing contractual clauses capable of delivering effective ‘regulation by contract’ in the AI space.

  • The tension between the commercial value of tailored contractual design and the regulatory value of default clauses and standard terms.

  • The role of procurement disputes and litigation in shaping AI regulation by contract.

  • The alternative regulatory option of establishing mandatory prior approval by an independent regulator of projects involving AI adoption by the public sector.

This webinar will be of interest to those working on or researching the digitalisation of the public sector and AI regulation in general, as the discussion around procurement gatekeeping mirrors the main issues arising from broader trends.

I will have the great opportunity of discussing my research with Aris Georgopoulos (Nottingham), Scott Simpson (Digital Transformation Lead at U.S. Department of Homeland Security), and Liz Chirico (Acquisition Innovation Lead at Office of the Deputy Assistant Secretary of the Army). Jessica Tillipman (GW Law) will moderate the discussion and Q&A.

Registration: https://law-gwu-edu.zoom.us/webinar/register/WN_w_V9s_liSiKrLX9N-krrWQ.

2. ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ | Free in-person public lecture
4 July 2023 2pm BST, Reception Room, Wills Memorial Building, University of Bristol
Organised by University of Bristol Law School, Centre for Global Law and Innovation

The public sector is quickly adopting artificial intelligence (AI) to manage its interactions with citizens and in the provision of public services – for example, using chatbots in official websites, automated processes and call-centres, or predictive algorithms.

There are inherent high-stakes risks in this process of public governance digitalisation, such as bias and discrimination, unethical deployment, data and privacy risks, cyber security risks, or risks of technological debt and dependency on proprietary solutions developed by (big) tech companies.

However, as part of the UK Government’s ‘light touch’ ‘pro-innovation’ approach to digital technology regulation, the adoption of AI in the public sector remains largely unregulated. 

In this public lecture, I will present the findings of my research funded by the British Academy, analysing how, in this deregulatory context, the existing rules on public procurement fall short of protecting the public interest.

An alternative approach is required to create mechanisms of external independent oversight and mandatory standards to embed trustworthy AI requirements and to mitigate against commercial capture in the acquisition of AI solutions. 

Registration: https://www.eventbrite.co.uk/e/can-procurement-promote-trustworthy-ai-and-avoid-commercial-capture-tickets-601212712407.

Some further thoughts on setting procurement up to fail in 'AI regulation by contract'

The next bit of my research project concerns the leveraging of procurement to achieve ‘AI regulation by contract’ (ie to ensure trustworthiness, safety, explainability, human rights compliance, legality (especially in data protection terms), ethical use, etc, in the public sector’s use of AI), so I have been thinking about it for the last few weeks to build on my previous views (see here).

In this post, I summarise my further thoughts — which have been prompted by the rich submissions to the ongoing House of Commons Science and Technology Committee inquiry on the ‘Governance of Artificial Intelligence’.

Let’s do it via procurement

As a starting point, it is worth stressing that the (perhaps unsurprising) increasingly generalised position is that procurement has a key role to play in regulating the adoption of digital technologies (and AI in particular) by the public sector—which consolidates procurement’s gatekeeping role in this regulatory space (see here).

More precisely, the generalised view is not that procurement ought to play such a role, but that it can do so (effectively and meaningfully). ‘AI regulation by contract’ via procurement is seen as an (easily?) actionable policy and governance mechanism despite the more generalised reluctance and difficulties in regulating AI through general legislative and policy measures, and in creating adequate governance architectures (more below).

This is very clear in several submissions to the ongoing Parliamentary inquiry (above). Without seeking to be exhaustive (I have read most, but not all submissions yet), the following points have been made in written submissions (liberally grouped by topic):

Procurement as (soft) AI regulation by contract & ‘Market leadership’

  • ‘Procurement processes can act as a form of soft regulation … Government should use its purchasing power in the market to set procurement requirements that ensure private companies developing AI for the public sector address public standards’ (Committee on Standards in Public Life, at [25]-[26], emphasis added).

  • ‘For public sector AI projects, two specific strategies could be adopted [to regulate AI use]. The first … is the use of strategic procurement. This approach utilises government funding to drive change in how AI is built and implemented, which can lead to positive spill-over effects in the industry’ (Oxford Internet Institute, at 5, emphasis added).

  • ‘Responsible AI Licences (“RAILs”) utilise the well-established mechanisms of software and technology licensing to promote self-governance within the AI sector. RAILs allow developers, researchers, and companies to publish AI innovations while specifying restrictions on the use of source code, data, and models. These restrictions can refer to high-level restrictions (e.g., prohibiting uses that would discriminate against any individual) as well as application-specific restrictions (e.g., prohibiting the use of a facial recognition system without consent) … The adoption of such licenses for AI systems funded by public procurement and publicly-funded AI research will help support a pro-innovation culture that acknowledges the unique governance challenges posed by emerging AI technologies’ (Trustworthy Autonomous Systems Hub, at 4, emphasis added).

Procurement and AI explainability

  • ‘public bodies will need to consider explainability in the early stages of AI design and development, and during the procurement process, where requirements for transparency could be stipulated in tenders and contracts’ (Committee on Standards in Public Life, at [17], emphasis added).

  • ‘In the absence of strong regulations, the public sector may use strategic procurement to promote equitable and transparent AI … mandating various criteria in procurement announcements and specifying design criteria, including explainability and interpretability requirements. In addition, clear documentation on the function of a proposed AI system, the data used and an explanation of how it works can help. Beyond this, an approved vendor list for AI procurement in the public sector is useful, to which vendors that agree to meet the defined transparency and explainability requirements may be added’ (Oxford Internet Institute, at 2, referring to K McBride et al (2021) ‘Towards a Systematic Understanding on the Challenges of Procuring Artificial Intelligence in the Public Sector’, emphasis added).

Procurement and AI ethics

  • ‘For example, procurement processes should be designed so products and services that facilitate high standards are preferred and companies that prioritise ethical practices are rewarded. As part of the commissioning process, the government should set out the ethical principles expected of companies providing AI services to the public sector. Adherence to ethical standards should be given an appropriate weighting as part of the evaluation process, and companies that show a commitment to them should be scored more highly than those that do not’ (Committee on Standards in Public Life, at [26], emphasis added).

Procurement and algorithmic transparency

  • ‘… unlike public bodies, the private sector is not bound by the same safeguards – such as the Public Sector Equality Duty within the Equality Act 2010 (EA) – and is able to shield itself from criticisms regarding transparency behind the veil of ‘commercial sensitivity’. In addition to considering the private company’s purpose, AI governance itself must cover the private as well as public sphere, and be regulated to the same, if not a higher standard. This could include strict procurement rules – for example that private companies need to release certain information to the end user/public, and independent auditing of AI systems’ (Liberty, at [20]).

  • ‘… it is important that public sector agencies are duly empowered to inspect the technologies they’re procuring and are not prevented from doing so by the intellectual property rights. Public sector buyers should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about, for example, accuracy and bias’ (BILETA, at 6).

Procurement and technical standards

  • ‘Standards hold an important role in any potential regulatory regime for AI. Standards have the potential to improve transparency and explainability of AI systems to detail data provenance and improve procurement requirements’ (Ada Lovelace Institute, at 10).

  • ‘The speed at which the technology can develop poses a challenge as it is often faster than the development of both regulation and standards. Few mature standards for autonomous systems exist and adoption of emerging standards need to be encouraged through mechanisms such as regulation and procurement, for example by including the requirement to meet certain standards in procurement specification’ (Royal Academy of Engineering, at 8).

Can procurement do it, though?

Implicit in most views about the possibility of using procurement to regulate public sector AI adoption (and to generate broader spillover effects through market-based propagation mechanisms) is an assumption that the public buyer does (or can get to) know and can (fully, or sufficiently) specify the required standards of explainability, transparency, ethical governance, and a myriad other technical requirements (on auditability, documentation, etc) for the use of AI to be in the public interest and fully legally compliant. Or, relatedly, that such standards can (and will) be developed and readily available for the public buyer to effectively refer to and incorporate them into its public contracts.

This is a BIG implicit assumption, at least in relation to non-trivial/open-ended proceduralised requirements and in relation to most of the complex issues raised by (advanced) forms of AI deployment. A sobering and persuasive analysis has shown that, at least for some forms of AI (based on neural networks), ‘it appears unlikely that anyone will be able to develop standards to guide development and testing that give us sufficient confidence in the applications’ respect for health and fundamental rights. We can throw risk management systems, monitoring guidelines, and documentation requirements around all we like, but it will not change that simple fact. It may even risk giving us a false sense of confidence’ [H Pouget, ‘The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist’ (Lawfare.com, 12 Jan 2023)].

Even for less complex AI deployments, the development of standards will be contested and protracted. This not only creates a transient regulatory gap that forces public buyers to ‘figure it out’ by themselves in the meantime, but can well result in a permanent regulatory gap that leaves procurement as the only safeguard (on paper) in the process of AI adoption in the public sector. If more general and specialised processes of standard setting are unlikely to plug that gap quickly or ever, how can public buyers be expected to do otherwise?

Seriously, can procurement do it?

Further, as I wrote in my own submission to the Parliamentary inquiry, ‘to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards’ (at [4]).

Even if we optimistically ignore the issues above and adopt the presumption that standards will emerge or that the public buyer will be able to (eventually) figure it out (so we park requirement (i) for now), and also assume that the public sector will be able to develop the required level of eg digital capability (so we also park requirement (iii), but see here), this does not overcome other obstacles to leveraging procurement for ‘AI regulation by contract’. In particular, it does not address the issue of whether there can be effective enforcement mechanisms within the contractual relationship resulting from a procurement process to impose compliance with the required standards (of explainability, transparency, ethical use, non-discrimination, etc).

I approach this issue as the challenge of enforcing not entirely measurable contractual obligations (ie obligations to comply with a contractual standard rather than a contractual rule), and the closest parallel that comes to my mind is the issue of enforcing quality requirements in public contracts, especially in the provision of outsourced or contracted-out public services. This is an issue on which there is a rich literature (on ‘regulation by contract’ or ‘government by contract’).

Quality-related enforcement problems stem from the difficulty of using contract law remedies to address quality shortcomings: remedies such as price reductions or contractual penalties (where permissible) can do little to address the quality issues in themselves. Major quality shortcomings could lead to eg contractual termination, but replacing contractors can be costly and difficult (especially in a technological setting affected by several sources of potential vendor and technology lock-in). Other mechanisms, such as leveraging past performance evaluations to eg bar access to future procurements, can also do too little too late to control quality within a specific contract.

An illuminating analysis of the ‘problem of quality’ concluded that the ‘structural problem here is that reliable assurance of quality in performance depends ultimately not on contract terms but on trust and non-legal relations. Relations of trust and powerful non-legal sanctions depend upon the establishment of long-term … relations … The need for a governance structure and detailed monitoring in order to achieve co-operation and quality seems to lead towards the creation of conflictual relations between government and external contractors’ [see H Collins, Regulating Contracts (OUP 1999) 314-15].

To me, this raises important questions about the extent to which procurement and public contracts more generally can effectively deliver the expected safeguards and operate as an adequate system of ‘AI regulation by contract’. It seems to me that price clawbacks or financial penalties, even debarment decisions, are unlikely to provide an acceptable safety net in some (or most) cases — eg high-risk uses of complex AI. Not least because procurement disputes can take a long time to settle and because the incentives will not always be there to ensure strict enforcement anyway.

More thoughts to come

It seems increasingly clear to me that the expectations around the leveraging of procurement to ‘regulate AI by contract’ need reassessing in view of its likely effectiveness. Such effectiveness is constrained by the rules on the design of tenders for the award of public contracts, by the design of those public contracts themselves, and by the mechanisms to resolve disputes emerging from either tenders or contracts. The effectiveness of this approach is, of course, also constrained by public sector (digital) capability and by the broader difficulties in ascertaining the appropriate approach to (standards-based) AI regulation, which cannot so easily be set aside. I will keep thinking about all this in the process of writing my monograph. If this is of interest, keep an eye on this blog for further thoughts and analysis.

AI regulation by contract: submission to UK Parliament

In October 2022, the Science and Technology Committee of the House of Commons of the UK Parliament (STC Committee) launched an inquiry on the ‘Governance of Artificial Intelligence’. This inquiry follows the publication in July 2022 of the policy paper ‘Establishing a pro-innovation approach to regulating AI’, which outlined the UK Government’s plans for light-touch AI regulation. The inquiry seeks to examine the effectiveness of current AI governance in the UK, and the Government’s proposals that are expected to follow the policy paper and provide more detail. The STC Committee has published 98 pieces of written evidence, including submissions from UK regulators and academics that will make for interesting reading. Below is my submission, focusing on the UK’s approach to ‘AI regulation by contract’.

A. Introduction

01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’. In particular:

  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    • Is more legislation or better guidance required?

02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.

B. ‘AI Regulation by Contract’ through Procurement

03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus erected as a public interest gatekeeper in the process of adoption of AI by the public sector.

04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.

05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a close look at these instruments will show their inadequacy to provide clarity on the content of procedural and contractual obligations aimed at ensuring the goals stated above (para 03), as well as their potential to widen the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.

C. Guidelines and Framework for AI procurement

06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines provide high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights on data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.

07. Relatedly, some of the recommendations are very open ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.

08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, as a reflection of broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles across the civil service alone close to 4,000.[14] This not only reduces the practical value of the Guidelines to facilitate responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework for AI adoption in the public sector.

09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustainable throughout the lifecycle of AI use in the public sector—and, crucially, would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.

D. Governance Shortcomings

10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to responsibly procure AI. The Guidelines are not susceptible of operationalisation by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels [of] transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to address the first requirement for effective regulation by contract in relation to clarifying the relevant obligations (para 04).

11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can generate a further disempowerment of public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] as well as creating risks of technical and intellectual debt in the deployment of AI solutions as consultants come and go and there is no capture of institutional knowledge and memory.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires a prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a £90 mn value,[24] but this was then revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus facilitate the funnelling of significant amounts of public funds to the private sector, rather than using those funds to build in-house capabilities. It can result in multiple public buyers entering contracts for the same expertise, which thus duplicates costs, as well as in a cumulative lack of institutional learning by the public sector because of atomised and uncoordinated contractual relationships.

12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers that can then introduce their own biases on the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would not only result from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks due to market concentration and structural conflicts of interest affecting providers that sometimes provide consultancy services and at other times are involved in the development and deployment of AI solutions,[26] as well as due to insufficiently effective safeguards on conflicts of interest resulting from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.

13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess to what extent public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In that, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be done by entities with high capabilities, or it will be outsourced to consultants (thus reducing the scope for the revelation of governance-relevant information).

14. Second, compliance with the standard is not mandatory—at least while the pilot is developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard was made mandatory, it would be necessary to implement an external quality control mechanism to mitigate problems with the quality of self-reported disclosures that are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have capacity (and powers) to do so remains unclear, and it would in any case lack independence.

15. Finally, it should be stressed that the current approach to transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high stakes or impossible to revert. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could potentially be very harmful for fundamental and individual socio-economic rights—as evidenced by the inclusion of some fields of application of AI in the public sector as ‘high risk’ in the EU’s proposed EU AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.

E. Conclusion: An Alternative Approach

16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector. Such legislation would then need to be developed in statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements can then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.

17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.

18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.

20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays limited to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the foreseen Procurement Review Unit (to also sit within Cabinet Office).

_______________________________________
Note: all websites last accessed on 25 October 2022.

[1] Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).

[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.

[3] Committee on Standards in Public Life, Intelligence and Public Standards (2020) 51.

[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.

[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.

[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.

[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.

[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.

[9] See eg < https://datahazards.com/index.html >.

[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.

[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.

[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.

[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).

[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.

[15] It is a dynamic purchasing system, or a list of pre-screened potential vendors public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.

[16] Above (n 5).

[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.

[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.

[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.

[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.

[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.

[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Centre for the Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.

[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.

[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.

[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.

[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.

[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.

[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.

[29] < https://artificialintelligenceact.eu/ >.

[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.