Where does the proposed EU AI Act place procurement?

Thinking about some of the issues raised in the earlier post ‘Can the robot procure for you?’, I have now taken a close look at the European Commission’s Proposal for an Artificial Intelligence Act (AIA) to see how it approaches the use of AI in procurement procedures. It may (not) come as a surprise that the proposed AIA takes an extremely light-touch approach to the regulation of AI uses in procurement, simply subjecting them to (yet to be developed) voluntary codes of conduct. In this post, I detail why this is the case, as well as some reasons why I do not find it satisfactory.

Before getting into the details, it is worth stressing that this reflects a broader feature of the AIA: its heavy private sector orientation. When it comes to AI uses by the public sector, other than prohibiting some forms of mass surveillance by the State (both for law enforcement and to generate a system of social scoring) and classifying as high-risk the most obvious AI uses by law enforcement and judicial authorities (all of which are important, of course), the AIA remains silent on the use of AI in most administrative procedures, with the sole exception of those concerning social benefits.

This approach could generally be justified by the limits of EU competence and, in particular, those derived from the principle of administrative self-organisation of the Member States. However, given the very broad approach taken by the Commission to the interpretation and use of Article 114 TFEU (which is the legal basis for the AIA, more below), this is not entirely consistent. It could rather be that the specific uses of AI by the public sector covered in the proposal reflect the increasingly well-known problematic uses of (biased) AI solutions in narrow aspects of public sector activity, rather than a broader reflection on the (still unknown, or still unimplemented) uses that could be problematic.

While the AIA is ‘future-proofed’ by including criteria for the inclusion of further use cases in its ‘high-risk’ category (which determines the bulk of compliance obligations), it is difficult to see how those criteria are suited to a significant expansion of the regulatory constraints on AI uses by the public sector, including in procurement. Therefore, as a broader point, I submit that the proposed AIA needs some revision to make it more suited to the potential deployment of AI by the public sector. To reflect on that, I am co-organising a webinar on ‘Digitalization and AI decision-making in administrative law proceedings’, which will take place on 15 Nov 2021, 1pm UK (save the date, registration and more details here). All welcome.

Background on the AIA

Summarising the AIA is difficult, and it has already been done elsewhere (see eg this quick explainer by the Centre for Data Innovation and, for an accessible overview of the rationale and regulatory architecture of the AIA, this master class by Prof Christiane Wendehorst). So I will just highlight here a few issues linked to the analysis of procurement’s position within its regulatory framework.

The AIA seeks to establish a proportionate approach to the regulation of AI deployment and use. While its primary concern is with the consolidation of the EU Digital Single Market and the avoidance of regulatory barriers to the circulation of AI solutions, its preamble also points to the need to ensure the effectiveness of EU values and, crucially, of the fundamental rights enshrined in the Charter of Fundamental Rights of the EU.

Importantly for the purposes of our discussion, recital (28) AIA stresses that ‘The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include ... right to an effective remedy and to a fair trial [Art 47 Charter] … [and] right to good administration [Art 41 Charter]’.

The AIA seeks to create such a proportionate approach to the regulation of AI by establishing four categories of AI uses: prohibited, high-risk, limited risk requiring transparency measures, and minimal risk. The two categories that carry regulatory constraints or compliance obligations are high-risk uses (Arts 8-15 AIA) and limited-risk uses requiring transparency measures (Art 52 AIA, which also applies to some high-risk AI). Minimal-risk AI uses are left unregulated, although the AIA (Art 69) seeks to promote the development of codes of conduct intended to foster voluntary compliance with the requirements applicable to high-risk AI systems.
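For readers who prefer to see this regulatory architecture as a decision procedure, here is a minimal sketch in Python (purely illustrative: the function and category names are my own shorthand, and the three boolean inputs drastically simplify the actual legal tests in Arts 5, 6/Annex III and 52 AIA):

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Art 5)"
    HIGH_RISK = "high-risk (Arts 8-15)"
    TRANSPARENCY = "limited risk, transparency duties (Art 52)"
    MINIMAL = "minimal risk, voluntary codes of conduct (Art 69)"

def classify_ai_use(is_prohibited_practice: bool,
                    is_annex_iii_use: bool,
                    interacts_with_natural_persons: bool) -> RiskTier:
    """Illustrative routing of an AI use through the AIA's four tiers."""
    if is_prohibited_practice:
        return RiskTier.PROHIBITED
    if is_annex_iii_use:
        # Art 52 transparency duties can also attach to some high-risk systems.
        return RiskTier.HIGH_RISK
    if interacts_with_natural_persons:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# A procurement chatbot guiding tenderers through e-submissions: not a
# prohibited practice, not listed in Annex III, but interacting with
# natural persons -> transparency duties only.
print(classify_ai_use(False, False, True))
```

As the remainder of this post argues, procurement-related AI uses fall through every branch of this routing except the last.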

Procurement within the AIA

Procurement AI practices could not be classified as prohibited uses (Art 5 AIA), except in the difficult-to-imagine circumstances in which they deployed subliminal techniques. It is also difficult to see how they could fall under the regime applicable to uses requiring special transparency (Art 52 AIA), because that regime only applies to AI systems intended to interact with natural persons, which must be ‘designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.’ It would not be difficult for public buyers using external-facing AI solutions (eg chatbots seeking to guide tenderers through their e-submissions) to make it clear that the tenderers are interacting with an AI solution. And, even if not, the transparency obligations are rather minimal.

So, the crux of the issue rests on whether procurement-related AI uses could be classified as high-risk. This is regulated in Art 6 AIA, which cross-refers to Annex III AIA. The Annex contains a numerus clausus of high-risk AI uses, which is, however, open to amendment under the conditions specified in Art 7 AIA. Art 6/Annex III do not contain any procurement-related AI uses. The only type of AI use linked to administrative procedures concerns ‘AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services’ (Annex III(5)(a) AIA).

Clearly, then, procurement-related AI uses are currently left to the default category of those with minimal risk and, thus, subjected only to voluntary self-regulation via codes of conduct.

Could this change in the future?

Art 7 AIA establishes the following two cumulative criteria:

(a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; and

(b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

The first hurdle in getting procurement-related AI uses included in Annex III in the future is formal and concerns the interpretation of the categories listed therein. There are only two potential options: nesting them under uses related to ‘Access to and enjoyment of essential private services and public services and benefits’, or under uses related to ‘Administration of justice and democratic processes’. It could (theoretically) be possible to squeeze them into one of these categories (perhaps the latter more easily than the former), but this is by no means straightforward and, given the existing AI uses in each of the two categories, I would personally be disinclined to engage in such a broad interpretation.

Even if that hurdle were cleared, the second hurdle is also challenging. Art 7(2) AIA establishes the criteria for assessing whether an AI use poses a sufficient ‘risk of adverse impact on fundamental rights’. Of those criteria, there are four that in my view would make it very difficult to classify procurement-related AI uses as high-risk. Those criteria require the European Commission to consider:

(c) the extent to which the use of an AI system has already caused … adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such … adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;

(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;

(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;

(g) the extent to which the outcome produced with an AI system is easily reversible …;

Meeting these criteria would require the relevant AI systems to be making essentially independent or fully automated decisions (eg on the award of a contract, or the exclusion of tenderers), so that their decisions would be seen to affect the effectiveness of the rights under Arts 41 and 47 of the Charter, as well as a (practical) understanding that those decisions cannot easily be reversed. Otherwise, the regulatory threshold is so high that procurement-related AI uses (screening, recommender systems, support to human decision-making (eg automated evaluation of tenders), etc) are unlikely to be considered to pose a sufficient ‘risk of adverse impact on fundamental rights’.
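To make the cumulative structure of the Art 7 test explicit, here is a minimal sketch in the same illustrative spirit (the field names are my own, and the second limb is crudely modelled as a conjunction of four indicators loosely based on criteria (c)-(e) and (g); the AIA sets out qualitative criteria for the Commission's assessment, not a mechanical test):

```python
from dataclasses import dataclass

@dataclass
class Art7Assessment:
    """Illustrative model of the Art 7 AIA test for adding a use to Annex III."""
    falls_within_annex_iii_area: bool   # limb (a): points 1 to 8 of Annex III
    # Inputs informing limb (b), loosely modelled on Art 7(2)(c)-(e) and (g):
    documented_adverse_impact: bool     # (c) reports/allegations to authorities
    affects_plurality_of_persons: bool  # (d) intensity and breadth of impact
    persons_cannot_opt_out: bool        # (e) dependence on the outcome
    outcome_hard_to_reverse: bool       # (g) (ir)reversibility of the outcome

    def qualifies_as_high_risk(self) -> bool:
        # Both limbs are cumulative; limb (b) is modelled here as requiring
        # all four illustrative risk indicators to be satisfied.
        equivalent_risk = (self.documented_adverse_impact
                           and self.affects_plurality_of_persons
                           and self.persons_cannot_opt_out
                           and self.outcome_hard_to_reverse)
        return self.falls_within_annex_iii_area and equivalent_risk

# A typical procurement recommender system supporting (reversible) human
# decisions and arguably outside the Annex III areas fails both limbs.
print(Art7Assessment(False, False, True, False, False).qualifies_as_high_risk())
```

The sketch makes the height of the hurdle visible: a procurement use that fails the first limb never reaches the risk assessment, and even one that cleared it would have to satisfy several demanding indicators at once.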

Could Member States go further?

As mentioned above, one of the potential explanations for the almost absolute silence on the use of AI in administrative procedures in the AIA could be that the Commission considers that this aspect of AI regulation belongs to each of the Member States. If that were true, then Member States could go further than the code-of-conduct self-regulatory approach resulting from the AIA regulatory architecture. An easy approach would be to eg legally mandate compliance with the AIA obligations for high-risk AI systems.

However, given the internal market justification of the AIA, to be honest, I have my doubts that such a regulatory intervention would withstand challenges on the basis of general EU internal market law.

The thrust of the competence justification for the AIA (under Art 114 TFEU; see point 2.1 of the Explanatory Memorandum) is that

The primary objective of this proposal is to ensure the proper functioning of the internal market by setting harmonised rules in particular on the development, placing on the Union market and the use of products and services making use of AI technologies or provided as stand-alone AI systems. Some Member States are already considering national rules to ensure that AI is safe and is developed and used in compliance with fundamental rights obligations. This will likely lead to two main problems: i) a fragmentation of the internal market on essential elements regarding in particular the requirements for the AI products and services, their marketing, their use, the liability and the supervision by public authorities, and ii) the substantial diminishment of legal certainty for both providers and users of AI systems on how existing and new rules will apply to those systems in the Union.

All of those issues would arise if each Member State adopted its own rules constraining the use of AI for administrative procedures not covered by the AIA (whether related to procurement or not). A challenge to that decentralised approach on grounds of internal market law, by eg providers of procurement-related AI solutions capable of deployment in all Member States but burdened with uneven regulatory requirements, thus seems quite straightforward (if controversial), especially given the high level of homogeneity in public procurement regulation resulting from the 2014 Public Procurement Package. Not to mention the possibility of challenging those domestic obligations on the ground that they go further than the AIA in breach of Art 16 of the Charter (freedom to conduct a business), even if this could face some issues resulting from the interpretation of Art 51 thereof.

Repositioning procurement (and other aspects of administrative law) in the AIA

In my view, there is a case to be made for repositioning procurement-related AI uses within the AIA, and the same logic can apply to other areas of administrative law/activity with similar market effects.

The key issue is that the development of AI solutions to support decision-making in the public sector concerns not only the rights of those directly involved in or affected by those decisions, but also society at large. In the case of procurement, eg the development of biased procurement evaluation or recommender systems can have negative social effects via their impact on the market (eg on value for money, to mention the most obvious) that are difficult to identify in individual tender decisions.

Moreover, the public administration seems well placed to comply with the AIA requirements for high-risk AI systems as a matter of routine procedure, and the arguments for a proportionate approach to the regulation of AI so as not to stifle innovation lose steam, and barely have any punch, when it comes to imposing those requirements on public sector users. Further, to a large extent, the AIA requirements seem to me mostly aligned with the requirements for running a proper (and challenge-proof) eProcurement system, and they would also facilitate compliance with duties of good administration when specific decisions are challenged.

Therefore, on balance, I see no good reason not to expand the list in Annex III AIA to include the use of AI systems in all administrative procedures, and in particular in public procurement and in other regulatory sectors where ex post interventions to correct market distortions resulting from biased AI implementations can be practically impossible. I submit that this should be done before the AIA’s adoption.

Can the robot procure for you? A short reflection à propos Chesterman (2021)


I am reading the very interesting new book by Simon Chesterman, We, the Robots? Regulating Artificial Intelligence and the Limits of the Law (Cambridge University Press, 2021). One of the thought-provoking issues the book addresses is the non-delegability of inherently governmental functions to artificial intelligence (AI). And one of the regulatory analogies used in the book concerns the limits to outsourcing as regulated by procurement law (see pages 109 ff).

The book argues that, for ‘certain decisions, it is necessary to have a human “in-the-loop” actively participating in those decisions’ (109), and states that, to reach such a determination, a ‘useful analogy is limits on government outsourcing to third parties’ (110).

In that regard, Chesterman leads us to consider the US approach to establishing ‘inherently governmental’ functions for the purposes of outsourcing, on which there is a very detailed and useful Office of Federal Procurement Policy (OFPP) Policy Letter 11–01, Performance of Inherently Governmental and Critical Functions.

I was curious to see whether procurement itself was considered an inherently governmental function not susceptible to outsourcing, and was glad to find the very nuanced and specific treatment of the issue reproduced below (I can hear my US colleagues and friends laughing at my previous ignorance).

[Image: excerpt from OFPP Policy Letter 11-01 on inherently governmental functions]

It thus seems arguable that, in the US context, and unless the decisions of an AI are somehow re-attributed to a Federal Government employee by some legal fiction, some aspects of procurement decision-making (at the contract formation phase, but not only) cannot (yet?) be delegated to an AI (or fully automated, let’s say).

Which prompts me to reflect on what the treatment would be under EU law—and under different Member States’ approaches to constraints based on public functions and the exercise of public powers. This may be the seed for a research paper — or perhaps just a follow-on blogpost — but I would be very interested in any thoughts or comments, particularly if this is an issue someone has already thought about or published on! As always, feedback and engagement most welcome at a.sanchez-graells@bristol.ac.uk.