Did you use AI to write this tender? What? Just asking! -- Also, how will you use AI to deliver this contract?

The UK’s Cabinet Office has published procurement policy note 2/24 on ‘Improving Transparency of AI use in Procurement’ (the ‘AI PPN’) because ‘AI systems, tools and products are part of a rapidly growing and evolving market, and as such, there may be increased risks associated with their adoption … [and therefore] it is essential to take steps to identify and manage associated risks and opportunities, as part of the Government’s commercial activities’.

The crucial risk the AI PPN seems to be concerned with relates to generative AI ‘hallucinations’, as it includes background information highlighting that:

‘Content created with the support of Large Language Models (LLMs) may include inaccurate or misleading statements; where statements, facts or references appear plausible, but are in fact false. LLMs are trained to predict a “statistically plausible” string of text, however statistical plausibility does not necessarily mean that the statements are factually accurate. As LLMs do not have a contextual understanding of the question they are being asked, or the answer they are proposing, they are unable to identify or correct any errors they make in their response. Care must be taken both in the use of LLMs, and in assessing returns that have used LLMs, in the form of additional due diligence.’
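To make the quoted mechanism concrete, here is a deliberately toy sketch in Python (not a description of any real LLM, and with invented frequency numbers) that reduces a ‘language model’ to a table of next-word counts, showing why the statistically most plausible continuation need not be the factually accurate one:

```python
# Toy illustration only: a 'language model' reduced to a table of
# hypothetical next-word frequencies (the numbers are invented).
from collections import Counter

# How often each word follows the prompt
# 'The case was decided by the Court of ...' in some imagined corpus.
next_word_counts = Counter({
    "Justice": 920,  # most frequent, hence most 'statistically plausible'
    "Appeal": 610,
    "Session": 45,
})

def most_plausible(counts: Counter) -> str:
    """Return the highest-frequency continuation: plausibility, not truth."""
    return counts.most_common(1)[0][0]

# The 'model' answers 'Justice' for every case matching the prompt,
# including cases actually decided by the Court of Appeal -- a fluent,
# plausible-sounding and potentially false statement.
print(most_plausible(next_word_counts))  # -> Justice
```

Real LLMs are vastly more sophisticated, but the point the PPN makes is the same: the generation objective rewards statistical plausibility, not factual accuracy.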

The PPN has the main advantage of trying to tackle the challenge of generative AI in procurement head on. It can help raise awareness in case someone was not yet talking about this and, more seriously, it includes an Annex A that brings together the several different bits of guidance issued by the UK government to date. However, the AI PPN does not elaborate on any of that guidance and is thus as limited as the Guidelines for AI procurement (see here), relatively complicated in that it points to rather different types of guidance ranging from ethics, to legal, to practical considerations, and requires significant knowledge and expertise to be operationalised (see here). Perhaps the best evidence of the complexity of the mushrooming sets of guidance is that the PPN itself includes in Annex A a reference to the January 2024 Guidance to civil servants on use of generative AI, which has been superseded by the Generative AI Framework for HMG, to which it also refers in Annex A. In other words, the AI PPN is not a ‘plug-and-play’ document setting out how to go about dealing with AI hallucinations and other risks in procurement. And given the pace of change in this area, it is also bound to be a PPN that requires multiple revisions and adaptations going forward.

A screenshot showing that the January guidance on generative AI use has been superseded (taken on 26 March 2024 10:20am).

More generally, the AI PPN is bound to be controversial and has already spurred insightful discussion on LinkedIn. I would recommend the posts by Kieran McGaughey and Ian Makgill. I offer some additional thoughts here and look forward to continuing the conversation.

In my view, one of the potential issues arising from the AI PPN is that it aims to cover quite a few different aspects of AI in procurement, while neglecting others. Slightly simplifying, there are three broad areas of AI-procurement interaction. First, there is the issue of buying AI-based solutions or services. Second, there is the issue of tenderers using (generative) AI to write or design their tenders. Third, there is the issue of the use of AI by contracting authorities, eg in relation to qualitative selection/exclusion, or evaluation/award decisions. The AI PPN covers aspects of the first two, but, as discussed below, barely touches on the third. However, it is not clear to me that these issues can be treated together, as they pose significantly different policy challenges. I will try to disentangle them here.

Buying and using AI

Although it mainly cross-refers to the Guidelines for AI procurement, the AI PPN includes some content relevant to the procurement and use of AI when it stresses that ‘Commercial teams should take note of existing guidance when purchasing AI services, however they should also be aware that AI and Machine Learning is becoming increasingly prevalent in the delivery of “non-AI” services. Where AI is likely to be used in the delivery of a service, commercial teams may wish to require suppliers to declare this, and provide further details. This will enable commercial teams to consider any additional due diligence or contractual amendments to manage the impact of AI as part of the service delivery.’ This is an adequate and potentially helpful warning. However, as discussed below, the PPN suggests a way to go about it that is in my view wrong and potentially very problematic.

AI-generated tenders

The AI PPN is however mostly concerned with the use of AI for tender generation. It recognises that there ‘are potential benefits to suppliers using AI to develop their bids, enabling them to bid for a greater number of public contracts. It is important to note that suppliers’ use of AI is not prohibited during the commercial process but steps should be taken to understand the risks associated with the use of AI tools in this context, as would be the case if a bid writer has been used by the bidder.’ It indicates some potential steps contracting authorities can take, such as:

  • ‘Asking suppliers to disclose their use of AI in the creation of their tender.’

  • ‘Undertaking appropriate and proportionate due diligence:

    • If suppliers use AI tools to create tender responses, additional due diligence may be required to ensure suppliers have the appropriate capacity and capability to fulfil the requirements of the contract. Such due diligence should be proportionate to any additional specific risk posed by the use of AI, and could include site visits, clarification questions or supplier presentations.

    • Additional due diligence should help to establish the accuracy, robustness and credibility of suppliers’ tenders through the use of clarifications or requesting additional supporting documentation in the same way contracting authorities would approach any uncertainty or ambiguity in tenders.’

  • ‘Potentially allowing more time in the procurement to allow for due diligence and an increase in volumes of responses.’

  • ‘Closer alignment with internal customers and delivery teams to bring greater expertise on the implications and benefits of AI, relative to the subject matter of the contract.’

In my view, there are a few problematic aspects here. While the AI PPN seems to try not to single out the use of generative AI as potentially problematic by equating it to the possible use of (human) bid writers, this is unconvincing. First, because there is (to my knowledge) no guidance whatsoever on assessing whether bid writers have been used, and because the AI PPN itself does not require disclosure of the engagement of bid writers (or give any thought to the fact that third-party bid writers may have used AI without this being known to the hiring tenderer, which would then require an extension of the disclosure of AI use further down the tender generation chain). Second, because the approach taken in the AI PPN seems to point at potential problems with the use of (external, third-party) bid writers, whereas it does not seem to object to the use of (in-house) bid writers, potentially by much larger economic operators, which is presumptively treated as unproblematic. Third, and most importantly, because it shows that perhaps not enough has been done so far to tackle the potential deceit or provision of misleading information in tenders if contracting authorities must now start thinking about how to get expert-based analysis of tenders, or develop fact-checking mechanisms to ensure bids are truthful. You would have thought that, regardless of the origin of a tender, contracting authorities should already be able to check its content to an adequate level of due diligence.

In any case, the biggest issue with the AI PPN is how it suggests contracting authorities should deal with this issue, as discussed below.

AI-based assessments

The AI PPN also suggests that contracting authorities should be ‘Planning for a general increase in activity as suppliers may use AI to streamline or automate their processes and improve their bid writing capability and capacity leading to an increase in clarification questions and tender responses.’ One of the possibilities could be for contracting authorities to ‘fight fire with fire’ and also deploy generative AI (eg to make summaries, to scan for errors, etc). Interestingly, though, the AI PPN does not directly refer to the potential use of (generative) AI by contracting authorities.
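By way of illustration only, and without suggesting that the PPN endorses this, a minimal sketch of what such a deployment could look like, using the OpenAI Python SDK as an example client (the model name, prompts and helper function are assumptions for illustration, and any such use would itself need to clear the accuracy and explainability concerns discussed below):

```python
# Illustrative sketch: a contracting authority using a hosted LLM to produce
# a first-pass summary of a tender response. The OpenAI Python SDK is used
# as an example client; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes an API key is set in the OPENAI_API_KEY env var

def summarise_tender(tender_text: str) -> str:
    """Ask the model for a summary plus a list of claims needing verification."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "You summarise procurement tender responses. Flag every "
                    "factual claim that the evaluation team should verify "
                    "independently."
                ),
            },
            {"role": "user", "content": tender_text},
        ],
    )
    return response.choices[0].message.content

# The output is a starting point for human evaluators, not an assessment:
# everything flagged still requires the ordinary due diligence the PPN assumes.
```

Of course, such outputs would themselves be exposed to the hallucination risks described above, which is precisely why the PPN’s silence on this use case matters.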

While it includes a reference in Annex A to the Generative AI framework for HM Government, that document does not specifically address the use of generative AI to manage procurement processes (and what it says about buying generative AI is redundant given the other guidance in the Annex). In my view, the generative AI framework pushes strongly against the use of AI in procurement when it identifies a series of use cases to avoid (page 18) that include contexts where high-accuracy and high-explainability are required. If this is the government’s (justified) view, then the AI PPN has been a missed opportunity to say this more clearly and directly.

The broader issue of confidential, classified or proprietary information

Both in relation to the procurement and use of AI, and the use of AI for tender generation, the AI PPN stresses the potential need for:

  • ‘Putting in place proportionate controls to ensure bidders do not use confidential contracting authority information, or information not already in the public domain as training data for AI systems e.g. using confidential Government tender documents to train AI or Large Language Models to create future tender responses’; and it notes that

  • ‘In certain procurements where there are national security concerns in relation to use of AI by suppliers, there may be additional considerations and risk mitigations that are required. In such instances, commercial teams should engage with their Information Assurance and Security colleagues, before launching the procurement, to ensure proportionate risk mitigations are implemented.’

These are issues that can easily exceed the technical capabilities of most contracting authorities. It is very hard to know what data has been used to train a model, and economic operators using ‘off-the-shelf’ generative AI solutions will hardly be in a position to assess this themselves, or to provide any meaningful information to contracting authorities. While there can be contractual constraints on the use of information and data generated under a given contract, it is much more challenging to assess whether information and data have been inappropriately used at a different link in increasingly complex digital supply chains. And, in any case, this is not only an issue for future contracts. Data and information generated under contracts already in place may not be subject to adequate data governance frameworks. It would seem that a more muscular approach to auditing data governance issues may be required, and that this should not be devolved to the procurement function.

How to deal with it? — or where the PPN goes wrong

The biggest weakness in the AI PPN is in how it suggests contracting authorities should deal with the issue of generative AI. In my view, it gets it wrong in two different ways. First, by asking for too much non-scored information that contracting authorities are unlikely to be able to act on without breaching procurement and good administration principles. Second, by treating as non-scored, ‘for information only’ material that contracting authorities are in fact under a duty to assess and score.

Too much information

The AI PPN includes two potential (alternative) disclosure questions in relation to the use of generative AI in tender writing (see below Q1 and Q2).

I think these questions miss the mark and expose contracting authorities to risks of challenge on grounds of a potential breach of the principle of equal treatment and the duty of good administration. The potential breach of the duty of good administration could be on grounds that the contracting authority is taking irrelevant information into account in the assessment of the relevant tender. The potential breach of equal treatment could come if tenders with some AI-generated elements were subjected to significantly more scrutiny than tenders where no AI was used. Contracting authorities should subject all tenders to the same level of due diligence and scrutiny because, at the bottom of it, there is no reason to ‘take a tenderer at its word’ when no AI is used. That is the entire logic of the exclusion, qualitative selection and evaluation processes.

Crucially, though, what the questions seem to really seek to ascertain is that the tenderer has checked for and confirms the accuracy of the content of the tender and thus makes the content its own and takes responsibility for it. This could be checked generally by asking all tenderers to confirm that the content of their tenders is correct and a true reflection of their capabilities and intended contractual delivery, reminding them that contracting authorities have tools to sanction economic operators that have ‘negligently provided misleading information that may have a material influence on decisions concerning exclusion, selection or award’ (reg.57(8)(i)(ii) PCR2015 and sch.7 13(2)(b) PA2023). And then enforcing them!

Checking the ‘authenticity’ of tenders when in fact contracting authorities are meant to check their truthfulness, accuracy and deliverability would be a false substitution of the relevant duties. It would also potentially distort the incentives to disclose the use of AI generation (lest contracting authorities find a reliable way of identifying it themselves and start applying the exclusion grounds above)—as thoroughly discussed in the LinkedIn posts referred to above.

Too little information

Conversely, the PPN takes too soft and potentially confusing an approach to the use of AI to deliver the contract. The proposed disclosure question (Q3) is very problematic. It presents as ‘for information only’ a request for information on the use of AI or machine learning in the context of the actual delivery of the contract. This is information that will relate to the technical specifications, award criteria or performance clauses (or all of them), and there is no meaningful way in which AI could be used to deliver the contract without this having an impact on the assessment and evaluation of the tender. The question is potentially misleading not only because of the indication that the information would not be scored, but also because it suggests that the use of AI in the delivery of a service or product is within the discretion of the tenderers. In my view, this would only be possible if the technical specifications were rather loosely written in performance terms, which would then require a very thorough description and assessment of how that performance is to be achieved. Moreover, the use of AI would probably require a set of organisational arrangements that should also not go unnoticed or unchecked in the procurement process. Further, one of the main challenges may lie not in the use of AI in new contracts (where tenderers are likely to highlight it to stress the advantages, or to justify that their tenders are not abnormally low in comparison with delivery through ‘manual’ solutions), but in relation to pre-existing contracts. A broader policy, recommendation and audit of the use of generative AI for the delivery of existing contracts, and its treatment as a (permissible??) contract modification, would also have been needed.

Final thought

The AI PPN is an interesting development and will help crystallise many discussions that were somehow hovering in the background. However, a significant rethink is required and, in my view, much more detailed guidance is needed in relation to the different dimensions of the interaction between AI and procurement. There are important questions that remain unaddressed and, in my view, one of the most pressing ones concerns the balance between general regulation and the use of procurement to regulate AI use. While the UK government remains committed to its ‘pro-innovation’ approach and no general regulation of AI use is put in place, in particular in relation to public sector AI use, procurement will continue to struggle, and fail, to act as a regulator of the technology.

The principle of competition is dead. Long live the principle of competition (recording)

Here is the recording for the first part of today’s seminar ‘The principle of competition is dead. Long live the principle of competition’, with renewed thanks to Roberto Caranta, Trygve Harlem Losnedahl and Dagne Sabockis for sharing their great insights. A transcript is available here, as well as the slides I used. As always, comments welcome and more than happy to continue the discussion!

Transposing Directives no longer so discretionary! The Court of Justice forces transposition of discretionary exclusion grounds and hints at ‘intra-State’ vertical direct effect (C‑66/22)

** This comment was first published as an Op-Ed for EU Law Live on 8 December 2022 (see formatted version). I am reposting it here in case of broader interest. **

On the face of it, in Infraestruturas de Portugal and Futrifer Indústrias Ferroviárias (C-66/22), the Court of Justice had to assess whether Member States can limit the exclusion of competition law violators from participation in tenders for public contracts to cases where the national competition authority has previously imposed such debarment as an ancillary penalty. While this is a plausible transposition approach that seeks to centralise competition law analysis under the control of the specialist administrative authority, it can also reduce the effectiveness of procurement mechanisms seeking to preserve (much needed) competition for public contracts. It is thus fair enough to test the boundaries of the discretion that contracting authorities can retain in this context. However, in Infraestruturas, the Court of Justice did two other things that are potentially significant beyond the narrower field of procurement governance. First, the Court reversed its previous case law and established that Member States are (no longer) allowed not to transpose discretionary exclusion grounds. This says something (but I am not too sure what) about the more general level of discretion that Member States retain in the transposition of (prescriptively worded) Directives into their national systems under Art 288 TFEU. Second, the Court furthered a line of reasoning that comes to assign ‘individual rights’ to contracting authorities—that is, entities within the public sector—in what could seem like the creation of an ‘intra-State’ modality of vertical direct effect. In this Op-Ed, I try to make some sense of these two developments and leave aside for now the details of the interpretation of the specific grounds for the exclusion of economic operators under Directive 2014/24/EU.

No longer discretionary to transpose discretionary exclusion grounds

It is trite EU law that, under Art 288 TFEU, Member States retain discretion in the choice of form and methods for the transposition of a Directive. While Directives can be prescriptive about their aims and goals and, sometimes, about specific modes of protection of the relevant legal interest, there seems to be a (theoretical) agreement that Directives still (must) leave a margin of discretion to Member States—else, they surreptitiously shapeshift into Regulations. Such discretion would seem to cover in particular those elements of a Directive that are explicitly labelled as discretionary. This ‘orthodoxy’ seemed to be straightforwardly applied by the Court of Justice in the analysis of the constraints on the transposition of the Public Procurement Directive 2014/24.

The Public Procurement Directive contains a set of rules on the exclusion from tenders for public contracts of economic operators that have fallen short of their legal obligations. In Art. 57, in addition to setting rules applicable to all exclusion decisions, the Directive distinguishes between, on the one hand, mandatory exclusion grounds that require contracting authorities to exclude economic operators convicted by final judgment for one of a series of breaches (Art. 57(1)) and, on the other hand, discretionary exclusion grounds that allow contracting authorities to exclude the affected economic operators (Art. 57(4)). Member States are explicitly allowed to turn discretionary exclusion grounds mandatory under their transposing legislation. Conversely, until now, the Court had been clear that Member States were allowed not to transpose discretionary exclusion grounds. So far, so good.

In Infraestruturas, however, the Court of Justice U-turned. It stated that:

… the first subparagraph of Article 57(4) of Directive 2014/24 … states that ‘contracting authorities may exclude or may be required by Member States to exclude any economic operator from participation in a procurement procedure’ in any of the situations referred to in points (a) to (i) of that provision.

In that connection, it admittedly follows from certain judgments of the Court … that the Member States can decide whether or not to transpose the facultative grounds for exclusion referred to in that provision. The Court has in fact held that … the Member States are free not to apply the facultative grounds for exclusion set out in that directive or to incorporate them into national law with varying degrees of rigour according to legal, economic or social considerations prevailing at national level (see, to that effect, judgments of 19 June 2019, Meca, C‑41/18, EU:C:2019:507, paragraph 33; of 30 January 2020, Tim, C‑395/18, EU:C:2020:58, paragraphs 34 and 40; and of 3 June 2021, Rad Service and Others, C‑210/20, EU:C:2021:445, paragraph 28).

However, an analysis of the wording of the first subparagraph of Article 57(4) of Directive 2014/24, the context into which that provision fits, and the aim that the latter pursues within the framework of that directive, shows that contrary to what is apparent from those judgments, the Member States are under the obligation to transpose that provision into their national law (C-66/22, paras. 48-50, emphases added).

In my view, this U-turn challenges the ‘orthodoxy’ to the extent that the Court subjects the margin of discretion left to the Member States by the EU legislators to the Court’s assessment of whether what is clearly labelled as discretionary—and was as such treated in earlier case law—is permissibly left to the discretion of the Member States in view of the aims of the Directive. I think that this introduces a potentially tricky line of challenge of the content of EU Directives on the grounds that the EU legislators could not have left to the Member States’ discretion specific aspects of their content without undermining the goals of the Directive itself. This can ultimately constrain the upstream discretion in the choice of legal instrument under Art 288 TFEU by the EU legislators themselves, and further erode the distinction between Regulations and Directives if the content of the Directives can in fact eventually be binding in their entirety and directly applicable in all Member States. Further, this U-turn is based on a rather peculiar interpretation of the wording of the Public Procurement Directive that comes to assign ‘individual rights’ to the public sector. Given this peculiarity, I am not too sure whether the deviation from the orthodoxy in Infraestruturas indicates a significant shift by the Court of Justice or ‘just’ an exception or oddity that may confirm the general rule.

‘Intra-State’ vertical direct effect?

In justifying its U-turn, the Court of Justice stresses that, under Art. 57(4) of the Public Procurement Directive:

the choice as to the decision whether or not to exclude an economic operator from a public procurement procedure on one of the grounds set out in that provision falls to the contracting authority, unless the Member States decide to transform that option to exclude into an obligation to do so. Accordingly, the Member States must transpose that provision either by allowing or by requiring contracting authorities to apply the exclusion grounds laid down by the latter provision. … a Member State cannot omit those grounds from its national legislation transposing Directive 2014/24 and thus deprive contracting authorities of the possibility – which must, at the very least, be conferred on them by virtue of that provision – of applying those grounds.

… it should be noted that recital 101 of that directive states that ‘contracting authorities should … be given the possibility to exclude economic operators which have proven unreliable’. That recital thus confirms that a Member State must transpose that provision in order not to deprive contracting authorities of the possibility referred to in the preceding paragraph and that recital.

Lastly, as to the objective pursued by Directive 2014/24 in so far as concerns the facultative grounds for exclusion, the Court has acknowledged that that objective is reflected in the emphasis placed on the powers of contracting authorities. Thus the EU legislature intended to confer on the contracting authority, and on it alone, the task of assessing whether a candidate or tenderer must be excluded from a procurement procedure during the stage of selecting the tenderers (see, to that effect, judgments of 19 June 2019, Meca, C‑41/18, EU:C:2019:507, paragraph 34, and of 3 October 2019, Delta Antrepriză de Construcţii şi Montaj 93, C‑267/18, EU:C:2019:826, paragraph 25).

The option, or indeed obligation, for the contracting authority to apply the exclusion grounds set out in the first subparagraph of Article 57(4) of Directive 2014/24 is specifically intended to enable it to assess the integrity and reliability of each of the economic operators participating in a public procurement procedure.

The EU legislature thus intended to ensure that contracting authorities have, in all Member States, the possibility of excluding economic operators who are regarded as unreliable by those authorities (C-66/22, paras. 51-52 and 55-57, emphases added).

Even if not altogether new—see Meca (C-41/18) and Delta (C-267/18)—I find this line of reasoning puzzling. The way the Court of Justice has interpreted Art. 57(4) of the Public Procurement Directive equates to an ‘individual right’ for contracting authorities not to contract with economic operators they deem unreliable and, crucially, this is an ‘individual right’ of which Member States cannot deprive them. The protection of such a right implies an ‘intra-State’ modality of vertical direct effect—at least to the extent that, after Infraestruturas, a contracting authority of any Member State with centralised exclusion decision-making can challenge any constraints on its administrative discretion and simply set aside the domestic rules and directly rely on the Directive to proceed to exclusion on the basis of discretionary grounds.

To my mind, this line of reasoning extracts the wrong implications from the wording of the Directive because of the quasi-anthropomorphism of contracting authorities. Given that the Directive conceptualises contracting authorities as the relevant unit of decision-making, references to contracting authorities should be seen as references to decisions within a procurement procedure, not as references to agents that derive rights independently from—or even against—the structure of the State into which they are embedded. In the end, contracting authorities are defined as ‘the State, regional or local authorities, bodies governed by public law or associations formed by one or more such authorities or one or more such bodies governed by public law’ (Art. 2(1)(1) Directive 2014/24). A functional interpretation of the wording of Article 57(4) of the Public Procurement Directive would recognise that the meaning of ‘contracting authorities may exclude or may be required by Member States to exclude’ is that ‘in a covered procurement procedure, it is permissible to exclude, and it can be made mandatory to exclude’—which would then straightforwardly follow the orthodoxy in allowing Member States to exercise the discretion on the form and method of transposition of that possibility.

I submit that the Court of Justice has followed a line of reasoning that is also problematic in relation to other provisions of the Public Procurement Directive, in particular in relation to the potential effects it could have in ‘empowering’ contracting authorities to take courses of action (eg international collaboration) that could imply domestic ultra vires.

Final thoughts

What I find most confusing in this part of the Infraestruturas Judgment is that the Court could have found much less disruptive and confusing ways to reach the same conclusion. For example, it could have found that, in empowering the national competition authority to make decisions on the exclusion of tenderers through the imposition of ancillary penalties, Portugal had decided to transpose the relevant discretionary exclusion ground, but had done so incorrectly or defectively by simultaneously transposing the ground while limiting the discretion of the contracting authority. I would still take issue with that approach, but at least it would be easier to reconcile with the orthodoxy on fundamental aspects of EU law.

Will the ECJ mandate protectionism in procurement? -- comments on AG Collins' Kolin Opinion (C-652/22)

In the Opinion in Kolin Inşaat Turizm Sanayi ve Ticaret (C-652/22, EU:C:2024:212, hereafter ‘Kolin’), Advocate General Collins has argued that only economic operators established in countries party to international agreements on public contracts that bind the EU may rely on the provisions of Directive 2014/25/EU. This would imply that economic operators established in other countries are not entitled to participate in a public contract award procedure governed by Directive 2014/25/EU and, consequently, are unable to rely on the provisions of that Directive before Member State courts. In my view, this interpretation is both protectionist and problematic. The ECJ should not follow it. In this blog I sketch the reasons for this view.

Limited (international law) obligation of equal treatment does not imply a general (EU) obligation to exclude or discriminate

The Kolin Opinion concerns the interpretation of Art 43 of Directive 2014/25/EU, which could be relevant to the interpretation of Art 25 of Directive 2014/24/EU (on which see A La Chimia, ‘Article 25’ in R Caranta and A Sanchez-Graells (eds), European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar 2021) 274-286). Art 43 of Dir 2014/25/EU establishes that:

In so far as they are covered by Annexes 3, 4 and 5 and the General Notes to the European Union’s Appendix I to the GPA and by the other international agreements by which the Union is bound, contracting entities within the meaning of Article 4(1)(a) shall accord to the works, supplies, services and economic operators of the signatories to those agreements treatment no less favourable than the treatment accorded to the works, supplies, services and economic operators of the Union.

AG Collins considers that

It follows that economic operators from non-covered third-countries do not fall within the scope ratione personae of Directive 2014/25 ... Since the applicant is not entitled to participate in a procedure for the award of a public contract governed by Directive 2014/25, it cannot seek to rely on the provisions thereof before a Member State court. The referring court therefore cannot obtain a response to a reference for a preliminary ruling on the interpretation of those provisions, since any answer that the Court might give to its request would not have binding effect. That reason suffices to justify a finding that this reference for a preliminary ruling is inadmissible (para 33, emphases added).

This position implies a logical jump in the reasoning. While it is plain that only economic operators from covered third-countries have a legally enforceable right to participate in public tenders on equal terms, it is by no means clear that other economic operators must necessarily be excluded from participation. If that was the plain interpretation and implication of that provision (and Art 25 Dir 2014/24/EU), the Commission would not have needed to develop the International Procurement Instrument (IPI) to establish circumstances in which exclusion of economic operators from non-covered third countries can be mandated. Along the same lines, AG Rantos argued in the Opinion in CRRC Qingdao Sifang and Others (C-266/22, EU:C:2023:399, not available in English) that ‘Member States can grant less favourable treatment to economic operators from non-covered third parties’ (para 65, own translation from Spanish).

In fact, as the Opinion reflects, ‘[a]lmost all of the parties to the procedure before the Court take the view that Member States may regulate the participation of economic operators from third-countries in procedures for the award of public contracts’ (para 35). In particular, the Croatian government submitted ‘that EU law contains no general prohibition on the participation of economic operators from third-countries in procedures for the award of public contracts in the European Union’ and provided sound arguments in support of that view, as the ‘Commission’s Guidance on the participation of third-country bidders confirms that proposition where it states that economic operators from third-countries may be excluded from these procedures, without requiring their exclusion’ (para 36). Those arguments are also aligned with AG Rantos’ CRRC Opinion (paras 72-74). Estonia also submitted that there is no obligation under EU law to limit participation by economic operators from non-covered third countries (para 38). Denmark, France, and Austria also considered that there is no ban stemming from EU law, even if the Union has exclusive competence in relation to the common commercial policy (paras 39-40). This should have given the AG pause.

Instead, as suggested by the Commission, AG Collins seeks to shore up the Opinion’s logical jump with an additional legal argument based on the remit of the EU’s competence to regulate the participation of economic operators from third-countries in procurement procedures in the European Union. The key issue here is not whether the EU has an exclusive or a shared competence in procurement, but that AG Collins considers that

by adopting Article 43 of Directive 2014/25, the European Union has exercised its competence in relation to economic operators established in a country party to the GPA or to another international agreement on the award of public contracts by which the European Union is bound. … economic operators established in Türkiye do not come within that category. Although the European Union has not exercised its exclusive competence to establish whether economic operators from non-covered third-countries may participate in such procedures, Member States may not rely on that fact in order to regain competence to act in that area (para 50, emphases added).

Since the European Union does not appear to have exercised its exclusive competence to determine access by economic operators from non-covered third-countries to procedures for the award of public contracts, Member States wishing to take steps to that end may inform the competent EU institutions of their proposed course of action with a view to obtaining the requisite authorisation. Nothing in the Court’s file indicates that Croatia has taken such a step. Second, unilateral action by Member States could undermine the European Union’s bargaining position in the context of its efforts to open, on a reciprocal basis, markets for public contracts in third countries. Third, it could interfere with the uniform application of EU law, since in such circumstances the application of Directive 2014/25 ratione personae could vary from one Member State to another. (para 52, emphases added).

In my view, AG Collins is conflating normative and positive analysis. It is clear that dissimilar approaches in the Member States undermine the Commission’s bargaining position—thus the need to bring in the IPI as well as other instruments such as the Foreign Subsidies Regulation (FSR)—and can lead to an absence of uniformity in the application of the Directive. However, these are normative issues. From a positive standpoint, it is in my view incorrect to state that the EU has exercised its competence in relation to GPA and other covered third country operators through Article 43 of Directive 2014/25/EU or, for that matter, Article 25 of Directive 2014/24/EU. The exercise of the relevant competence concerns the entering into those international treaties, not the internal legislative measures put in place to promote their effectiveness.

To me, it is clear that the obligation to grant equal treatment to GPA and other covered economic operators stems from the GPA or the relevant international treaty, not the Directive. Art 43 Dir 2014/25/EU (and Art 25 Dir 2014/24/EU) are mere reminders of the obligations under international law and cannot alter them. Their omission would not have made any difference to covered third country economic operators’ legal position. By the same token, their inclusion cannot serve to prejudice the position of non-covered third country economic operators. As noted above, were it otherwise, the whole process leading to the IPI and FSR would have been entirely superfluous. The Kolin Opinion follows too closely the dangerously protectionist policy approach pushed by the Commission, and does so in a way that is, in my view, neither legally persuasive nor accurate.

‘Tolerance’ of third country economic operators’ participation must engage legal protection under the CFR

Moreover, the Kolin Opinion would open a very dangerous path in terms of the rule of law and of upholding the effectiveness of the Charter of Fundamental Rights (CFR), especially Articles 41 and 47—and would allow contracting authorities two bites of the cherry in relation to tenders submitted by economic operators from non-covered third countries. Contracting authorities could ‘tolerate’ the participation of non-covered third country economic operators to see if those are the ones providing the most economically advantageous offer and, if not, or if other (industrial policy) considerations kicked in, they could simply reject or set aside the tender. This would happen in a context of insufficient guarantees.

Even assuming there was an obligation to exclude under Art 43 of Directive 2014/25/EU or Art 25 of Directive 2014/24/EU, which there is not, contracting authorities would be bound by the duties under Art 41 CFR in relation to EU and covered third-country economic operators. The relevant duty would require an immediate exclusion of the non-covered economic operators to protect the (in that case) participation rights of those covered. A contracting authority that had not carried out such an exclusion could seek to benefit from the advantages provided by the third country economic operator in breach of its duties, which is not permissible.

Conversely, a contracting authority that had not discharged its duty to exclude would be allowed to still benefit from its inaction by discriminating against and eventually excluding at a later stage the tender of the third-country economic operator without the latter having legal recourse. This would also not be in line with the effectiveness of Arts 41 and 47 CFR and certainly not in line with the doctrine of legitimate expectations. Once an economic operator or tenderer is not excluded or rejected at the first opportunity, there is a positive and very specific representation made by the contracting authority that the economic operator and/or its tender is in the run for the contract. This must trigger legal protection—although the specific form is likely to depend on domestic administrative law.

In the case at hand, as in many other cases in daily practice, despite Kolin not being eligible for equal treatment under Art 43 of Directive 2014/25/EU—and thus not having an enforceable right to participate in the tender and to equal treatment within it deriving from international law—the contracting authority had ‘tolerated’ its participation. The Opinion is plain that, following the receipt of tenders, the contracting authority ‘concluded that 6 out of the 15 tenders submitted fulfilled the selection criteria. [Kolin], a company established in Türkiye, submitted one of the tenders selected’ (para 16). However, the Opinion does not grant Kolin any rights because of such tolerance.

Contrary to the view held by AG Collins, ‘The Austrian Government contends that, although, in principle, Directive 2014/25 does not apply to economic operators from non-covered third-countries, such operators may rely on that directive once a contracting authority has permitted their participation in a procedure for the award of a public contract’ (para 26). I share this view. Crucially, this is not an issue the Opinion explicitly addresses. But it is the main reason why the ECJ should not follow the Opinion.

Initial UK guidance on pro-innovation AI regulation: Much ado about nothing?

The UK Government’s Department for Science, Innovation and Technology (DSIT) has recently published its Initial Guidance for Regulators on Implementing the UK’s AI Regulatory Principles (Feb 2024) (the ‘AI guidance’). This follows from the Government’s response to the public consultation on its ‘pro-innovation approach’ to AI regulation (see here).

The AI guidance is meant to support regulators in developing tailored guidance for the implementation of the five principles underpinning the pro-innovation approach to AI regulation, that is: (i) Safety, security & robustness; (ii) Appropriate transparency and explainability; (iii) Fairness; (iv) Accountability and governance; and (v) Contestability and redress.

Voluntary approach and timeline for implementation

A first, perhaps surprising, element of the AI guidance comes from the way in which engagement with the principles by current regulators is framed as voluntary. The white paper describing the pro-innovation approach to AI regulation (the ‘AI white paper’) had indicated that, initially, ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’, with a clear expectation for regulators to make use of their ‘domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used’.

The AI white paper made it clear that a failure by regulators to implement the principles would lead the government to introduce ‘a statutory duty on regulators requiring them to have due regard to the principles’, which would still ‘allow regulators the flexibility to exercise judgement when applying the principles in particular contexts, while also strengthening their mandate to implement them’. There seemed to be little room for discretion for regulators to decide whether to engage with the principles, even if they were expected to exercise discretion on how to implement them.

By contrast, the initial AI guidance indicates that it ‘is not intended to be a prescriptive guide on implementation as the principles are voluntary and how they are considered is ultimately at regulators’ discretion’. There is also a clear indication in the response to the public consultation that the introduction of a statutory duty is not on the immediate legislative horizon, and the absence of a pre-determined date for the assessment of whether the principles have been ‘sufficiently implemented’ on a voluntary basis (for example, in two years’ time) will make it very hard to press for such a legislative proposal (depending on the policy direction of the Government at the time).

This seems to follow from the Government’s position that ‘acknowledge[s] concerns from respondents that rushing the implementation of a duty to regard could cause disruption to responsible AI innovation. We will not rush to legislate’. At the same time, however, the response to the public consultation indicates that DSIT has asked a number of regulators to publish by 30 April 2024 updates on their strategic approaches to AI. This seems to create an expectation that regulators will in fact engage—or have defined plans for engaging—with the principles in the very short term. How this does not create a ‘rush to implement’ and how putting the duty to consider the principles on a statutory footing would alter any of this is hard to fathom, though.

An iterative, phased approach

The very tentative approach to the issuing of guidance is also clear in the fact that the Government is taking an iterative, phased approach to the production of AI regulation guidance, with three phases foreseen: a phase one consisting of the publication of the AI guidance in Feb 2024; a phase two comprising an iteration and development of the guidance in the summer of 2024; and a phase three (with no timeline) involving further developments in cooperation with regulators—to eg ‘encourage multi-regulator guidance’. Given the short time between phases one and two, some questions arise as to how much practical experience will be accumulated in the coming 4-6 months and whether there is much value in the high-level guidance provided in phase one, as it only goes slightly beyond the tentative steer included in the AI white paper—which already contained some indication of ‘factors that government believes regulators may wish to consider when providing guidance/implementing each principle’ (Annex A).

Indeed, the AI guidance is still rather high-level and it does not provide much substantive interpretation of what the different principles mean. It is very much a ‘how to develop guidance’ document, rather than a document setting out core considerations and requirements for regulators to embed within their respective remits. A significant part of the document provides guidance on ‘interpreting and applying the AI regulatory framework’ (pp 7-12) but this is really ‘meta-guidance’ on issues such as potential collaboration between regulators for the issuance of joint guidance/tools, or an encouragement to benchmarking and the avoidance of duplicated guidance where relevant. General recommendations such as the value of publishing the guidance and keeping it updated seem superfluous in a context where the regulatory approach is premised on ‘the expertise of [UK] world class regulators’.

The core of the AI guidance is limited to the section on ‘applying individual principles’ (pp 13-22), which sets out a series of questions to consider in relation to each of the five principles. The guidance offers no answers and very limited steer for their formulation, which is entirely left to regulators. We will probably have to wait (at least) for the summer iteration to get some more detail of what substantive requirements relate to each of the principles. However, the AI guidance already contains some issues worthy of careful consideration, in particular in relation to the tunnelling of regulatory power and the imbalanced approach to the different principles that follows from its reliance on existing (and soon to emerge) technical standards.

Technical standards and interpretation of the regulatory principles

Regulatory tunnelling

As we said in our response to the public consultation on the AI white paper,

The principles-based approach to AI regulation suggested in the AI [white paper] is undeliverable, not only due to lack of detail on the meaning and regulatory implications of each of the principles, but also due to barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The AI [white paper] indicates in Annex A that each regulator should consider issuing guidance on the interpretation of the principles within its regulatory remit, and suggests that in doing so they may want to rely on emerging technical standards (such as ISO or IEEE standards). This presumes both the adequacy of those standards and their sufficiency to translate general principles into operationalizable and enforceable requirements. This is by no means straightforward, and it is hard to see how regulators with significantly limited capabilities … can undertake that task effectively. There is a clear risk that regulators may simply rely on emerging industry-led standards. However, it has already been pointed out that this creates a privatisation of AI regulation and generates significant implicit risks (at para 27).

The AI guidance, in sticking to the same approach, confirms this risk of regulatory tunnelling. The guidance encourages regulators to explicitly and directly refer to technical standards ‘to support AI developers and AI deployers’—while at the same time stressing that ‘this guidance is not an endorsement of any specific standard. It is for regulators to consider standards and their suitability in a given situation (and/or encourage those they regulate to do so likewise).’ This does not seem to be the best approach. Leaving it to each of the regulators to assess the suitability of existing (and emerging) standards creates duplication of effort, as well as a risk of conflicting views and guidance. It would seem that it is precisely the role of centralised AI guidance to carry out that assessment and filter out technical standards that are aligned with the overarching regulatory principles for implementation by sectoral regulators. In failing to do that and pushing the responsibility down to each regulator, the AI guidance comes to abdicate responsibility for the provision of meaningful policy implementation guidelines.

Additionally, the strong steer to rely on references to technical standards creates an almost default position for regulators to follow—especially those with less capability to scrutinise the implications of those standards and to formulate complementary or alternative approaches in their guidance. It can be expected that regulators will tend to refer to those technical standards in their guidance and to take them as the baseline or starting point. This effectively transfers regulatory power to the standard setting organisations and further dilutes the regulatory approach followed in the UK, which in fact will be limited to industry self-regulation despite the appearance of regulatory intervention and oversight.

Unbalanced approach

The second implication of this approach is that some principles are likely to be more developed than others in regulatory guidance, as they already are in the initial AI guidance. The series of questions and considerations are more developed in relation to principles for which there are technical standards—ie ‘safety, security & robustness’, and ‘accountability and governance’—and to some aspects of other principles for which there are standards. For example, in relation to ‘appropriate transparency and explainability’, there is more of an emphasis on explainability than on transparency and there is no indication of how to gauge what is ‘appropriate’ in relation to either of them. Given that transparency, in the sense of publication of details on AI use, raises a few difficult questions on the interaction with freedom of information legislation and the protection of trade secrets, the passing reference to the algorithmic transparency recording standard will not be sufficient to support regulators in developing nuanced and pragmatic approaches.

Similarly, in relation to ‘fairness’, the AI guidance solely provides some reference in relation to AI ethics and bias, and in both cases in relation to existing standards. The document falls awfully short of any meaningful consideration of the implications and requirements of the (arguably) most important principle in AI regulation. The AI guidance solely indicates that

Tools and guidance could also consider relevant law, regulation, technical standards and assurance techniques. These should be applied and interpreted similarly by different regulators where possible. For example, regulators need to consider their responsibilities under the 2010 Equality Act and the 1998 Human Rights Act. Regulators may also need to understand how AI might exacerbate vulnerabilities or create new ones and provide tools and guidance accordingly.

This is unhelpful in many ways. First, ensuring that AI development and deployment complies with existing law and regulation should not be presented as a possibility, but as an absolute minimum requirement. Second, the duties of the regulators under the EA 2010 and HRA 1998 are likely to play a very small role here. What is crucial is to ensure that the development and use of the AI is compliant with them, especially where the use is by public sector entities (for which there is no general regulator—and in relation to which a passing reference to the EHRC guidance on AI use in the public sector will not be sufficient to support regulators in developing nuanced and pragmatic approaches). In failing to explicitly acknowledge the existence of approaches to the assessment of AI and algorithmic impacts on fundamental and human rights, the guidance creates obfuscation by omission.

‘Contestability and redress’ is the most underdeveloped principle in the AI guidance, perhaps because no technical standard addresses this issue.

Final thoughts

In my view, the AI guidance does little to support regulators, especially those with less capability and resources, in their (voluntary? short-term?) task of issuing guidance in their respective remits. Meaningful AI guidance needs to provide much clearer explanations of what is expected and required for the correct implementation of the five regulatory principles. It needs to address in a centralised and unified manner the assessment of existing and emerging technical standards against the regulatory benchmark. It also needs to synthesise the multiple guidance documents issued (and to be issued) by regulators—which it currently simply lists in Annex 1—to avoid a multiplication of the effort required to assess their (in)compatibility and duplications. By leaving all these tasks to the regulators, the AI guidance (and the centralised function from which it originates) does little to nothing to move the regulatory needle beyond industry-led self-regulation, and fails to relieve regulators of the burden of issuing AI guidance.

High hopes but little movement for public sector AI use regulation through procurement in the UK Government's 'Pro-innovation Approach' response

The UK Government has recently published its official response (the ‘response’) to the public consultation of March 2023 on its ‘pro-innovation approach’ to AI regulation (for an initial discussion, see here). The response shows very little movement from the original approach and proposals and, despite claiming that significant developments have already taken place, it mainly provides a governmental self-congratulatory narrative and limited high-level details of a regulatory architecture still very much ‘under construction’. The publication of the response was coupled with that of Initial Guidance for Regulators on Implementing the UK’s AI Regulatory Principles (Feb 2024), on which I will comment in a subsequent post.

A section of particular interest in the response refers to ‘Ensuring AI best practice in the public sector’ (at 21-22), which makes direct reference to the use of public procurement and the exercise of public sector buying power as a regulatory lever.

This section describes some measures being put in place or planned to seize ‘the opportunities presented by AI to deliver better public services including health, education, and transport’, such as:

  • ‘tripling the number of technical AI engineers and developers within the Cabinet Office to create a new AI Incubator for the government’ (para 41).
    This is an interesting commitment to building in-house capability. It would however be interesting to know whether these are new or reassigned roles, as well as how the process of recruitment and retention is faring, given the massive difficulties evidenced in the recent analysis by the National Audit Office, Digital transformation in government: addressing the barriers to efficiency (10 Mar 2023, HC 2022-23, 1171).

  • ‘The government is also using the procurement power of the public sector to drive responsible and safe AI innovation. The Central Digital and Data Office (CDDO) has published guidance on the procurement and use of generative AI for the UK government. Later this year, DSIT will launch the AI Management Essentials scheme, setting a minimum good practice standard for companies selling AI products and services. We will consult on introducing this as a mandatory requirement for public sector procurement, using purchasing power to drive responsible innovation in the broader economy’ (para 43).
    This is also an interesting aspiration, for several reasons. First, the GenAI guidance is very generic and solely highlights pre-existing (also very generic) guidance on how to carry out procurement of AI (see screenshot below). This can hardly be seen as a meaningful development of the existing regulatory framework. Second, the announced ‘AI Management Essentials’ scheme seems to be modelled on the ‘Cyber Essentials’ scheme in the area of cyber security, despite significant differences and the much higher level of complexity that can be expected from an ‘all-encompassing’ scheme for the management of the myriad risks generated by the deployment of AI.

Screenshot of the webpage https://www.gov.uk/government/publications/generative-ai-framework-for-hmg/generative-ai-framework-for-hmg-html (accessed 22 February 2024), where this information is available in accessible format.

  • ‘This builds on the Algorithmic Transparency Recording Standard (ATRS), which established a standardised way for public sector organisations to proactively publish information about how and why they are using algorithmic methods in decision-making. Following a successful pilot of the standard, and publication of an approved cross-government version last year, we will now be making use of the ATRS a requirement for all government departments and plan to expand this across the broader public sector over time’ (para 44).
    This is also interesting in that the ‘success’ attributed to the development of the ATRS is very clearly undermined by the almost absolute lack of use other than in relation to the pilot projects (see screenshot below). It is also interesting that the ATRS allows public sector AI deployers to fill in but not publish the relevant documents, as a form of self-reflective/evaluative exercise. I wonder how many publications we will see in the coming months, even if ‘use of the ATRS’ becomes a requirement.

Screenshot of the list of published transparency disclosures at Algorithmic Transparency Reports - GOV.UK (www.gov.uk) (accessed 22 February 2024), where this information is available in accessible format.

Overall, I think the response to the ‘pro-innovation’ AI regulation consultation does little to back up the high expectations placed on public procurement as a mechanism of regulation by contract. I will update the analysis in this UK-focused paper on the use of procurement to regulate public sector AI use before final publication, but there will be little change. The broader analysis in my recent monograph also remains applicable (phew): Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP 2024).

The principle of competition is dead. Long live the principle of competition (Free webinar)

Free webinar: 22 March 2024 *revised time* 1pm UK / 2pm CET / 3pm EET. Registration here.

The role of competition in public procurement regulation continues to be debated. While it is generally accepted that the proper functioning of procurement markets requires some level of competition – and the European Court of Auditors has recently pointed out that current levels of competition for public contracts in the EU are not satisfactory – the 'legal ranking' and normative weight of competition concerns are much less settled.

This has been evidenced in a recent wave of academic discussion on whether there is a general principle of competition at all in Directive 2014/24/EU, what its normative status is and how it ranks vis-à-vis sustainability and environmental considerations, and what its practical implications are for the interpretation and application of EU public procurement law.

Bringing together voices representing a wide range of views, this webinar will explore these issues and provide a space for reflective discussion on competition and public procurement. The webinar won't settle the debate, but hopefully it will allow us to take stock and outline thoughts for the next wave of discussion. It will also provide an opportunity for an interactive Q&A.

Speakers:

  • Prof Roberto Caranta, Full Professor of Administrative Law, University of Turin.

  • Mr Trygve Harlem Losnedahl, PhD researcher, University of Oslo.

  • Dr Dagne Sabockis, Senior Associate, Vinge law firm; Stockholm School of Economics.

  • Prof Albert Sanchez-Graells, Professor of Economic Law, University of Bristol.

Pre- or post-reading:

Centralised procurement for the health care sector -- bang for your pound or siphoning off scarce resources?

The National Health Service (NHS) has been running a centralised model for health care procurement in England for a few years now. The current system resulted from a redesign of the NHS supply chain that has been operational since 2019 [for details, see A Sanchez-Graells, ‘Centralisation of procurement and supply chain management in the English NHS: some governance and compliance challenges’ (2019) 70(1) NILQ 53-75.]

Given that the main driver for the implementation and redesign of the system was to obtain efficiencies (aka savings) through the exercise of the NHS’ buying power, both the UK’s National Audit Office (NAO) and the House of Commons’ Public Accounts Committee (PAC) are scrutinising the operation of the system in its first few years.

The NAO published a scathing report on 12 January 2024. Among many other concerning issues, the report highlighted how, despite the fundamental importance of measuring savings, ‘NHS Supply Chain has used different methods to report savings to different audiences, which could cause confusion.’ This created a clear risk that savings claims were re-counted (ie exaggerated), as detailed below.

In my submission of written evidence to the PAC Inquiry ‘NHS Supply Chain and efficiencies in procurement’, I look in detail at the potential implications of the use of different savings reporting methods for the (mis)management of scarce NHS resources, should the recounting of savings have allowed private subcontractors to also overclaim savings in order to boost the financial return under their contracts. The full text of my submission is reproduced below, in case of interest.

nao’s findings on recounting of savings

The NAO report contains crucial findings concerning the use of different (and potentially problematic) savings reporting methods. The key passages are as follows:

DHSC [the Department of Health and Social Care] set Supply Chain a cumulative target of making £2.4 billion savings by 2023-24. Supply Chain told us that it had exceeded this target by the end of 2022-23 although we have not validated this saving. The method for calculating this re-counted savings from each year since 2015-16. Supply Chain calculated its reported savings against the £2.4 billion target by using 2015-16 prices as its baseline. Even if prices had not reduced in any year compared with the year before, a saving was reported as long as prices were lower than that of the baseline year. This method then accumulated savings each year, by adding the difference in price as at the baseline year, for each year. This accumulation continued to re-count savings made in earlier years and did not take inflation into account. For example, if a product cost £10 in 2015-16 and reduced to £9 in 2016-17, Supply Chain would report a saving of £1. If it remained at £9 in 2017-18, Supply Chain would report a total saving of £2 (re-counting the £1 saved in 2016-17). If it then reduced to £8 in 2018-19, Supply Chain would report a total saving of £4 (re-counting the £1 saved in each of 2016-17 and 2017-18 and saving a further £2 in 2018-19) […]. DHSC could not provide us with any original sign-off or agreement that this was how Supply Chain should calculate its savings figure (para 2.4, emphasis added).

Supply Chain has used other methods for calculating savings which could cause confusion. It has used different methods for different audiences, for example, to government, trusts and suppliers (see Figure 5). When reporting progress against its £2.4 billion target it used a baseline from 2015-16 and accumulated the amount each year. To help show the savings that trusts have made individually, it also calculates in-year savings each trust has made using prices paid the previous year as the baseline. In this example, if a trust paid £10 for an item in 2015-16, and then procured it for £9 from Supply Chain in 2016-17 and 2017-18, Supply Chain would report a saving of £1 in the first year and no saving in the second year. These different methods have evolved since Supply Chain was established and there is a rationale for each. Having several methods to calculate savings has the potential to cause confusion (para 2.6, emphasis added).
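To see the difference between the two methods in operation, here is a minimal sketch that reproduces the report’s own worked example (the £10/£9/£8 prices are the NAO’s illustrative figures, not actual NHS data, and the function names are mine):

```python
# Minimal sketch of the two savings calculation methods described by the NAO
# (illustrative prices from the report's own example, not actual NHS data).

def cumulative_baseline_savings(prices, baseline):
    """Method used against the £2.4bn target: each year adds the gap to the
    2015-16 baseline price, thereby re-counting earlier years' savings."""
    totals, running = [], 0
    for price in prices:
        running += baseline - price  # the gap to baseline is added every year
        totals.append(running)
    return totals

def in_year_savings(prices, baseline):
    """Method used to show trusts their savings: each year only counts the
    reduction against the price paid the previous year."""
    savings, previous = [], baseline
    for price in prices:
        savings.append(previous - price)
        previous = price
    return savings

prices = [9, 9, 8]  # 2016-17, 2017-18, 2018-19; baseline (2015-16) = £10
print(cumulative_baseline_savings(prices, 10))  # [1, 2, 4] as in para 2.4
print(in_year_savings(prices, 10))              # [1, 0, 1], ie £2 in total
```

On the same price history, the first method reports a cumulative saving of £4 even though prices only ever fell by £2, which is precisely the re-counting effect the NAO flags.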

When I read the report, I thought that the difference between the methods was not only problematic in itself, but also showed that the ‘main method’ used by NHS Supply Chain and government to claim savings, in allowing the re-counting of savings, was likely to have allowed for excessive claims. This is not only a technical or political problem, but also a clear risk of siphoning off scarce NHS budgetary resources, for the reasons detailed below.

Submission to the pac inquiry

00. This brief written submission responds to the call for evidence issued by the Public Accounts Committee in relation to its Inquiry “NHS Supply Chain and efficiencies in procurement”. It focuses on the specific point of ‘Progress in delivering savings for the NHS’. This submission provides further detail on the structure and functioning of NHS Supply Chain beyond that included in the National Audit Office’s report “NHS Supply Chain and efficiencies in procurement” (2023-24, HC 390). The purpose of this further detail is to highlight the broader implications that the potential overclaim of savings generated by NHS Supply Chain may have had in relation to payments made to private providers to whom some of the supply chain functions have been outsourced. It raises some questions that the Committee may want to explore in the context of its Inquiry.

1. NHS Supply Chain operating structure

01. The NAO report analyses the functioning and performance of NHS Supply Chain and SCCL in a holistic manner and without considering details of the complex structure of outsourced functions that underpins the model. This can obscure some of the practical impacts of the NAO’s findings, in particular in relation to the potential overclaim of savings generated by NHS Supply Chain (paras 2.4, 2.6 and Figure 5 in the report). Approaching the analysis at a deeper level of detail on NHS Supply Chain’s operating structure can shed light on problems with the methods for calculating NHS Supply Chain savings other than the confusion caused by the use of multiple methods, and on the potential overclaim of savings in relation to the original target set by DHSC.

02. NHS Supply Chain does not operate as a single entity and SCCL is not the only relevant actor in the operating structure.[1] Crucially, the operating model consists of a complex network of outsourcing contracts around what are called ‘category towers’ of products and services. SCCL coordinates a series of ‘Category Tower Service Providers’ (CTSPs), as listed in the graph below. CTSPs have an active role in developing category management strategies (that is, the ‘go to market approach’ at product level) and heavily influence the procurement strategy for the relevant category, subject to SCCL approval.

03. CTSPs are incentivised to reduce total cost in the system, not just reduce unit prices of the goods and services covered by the relevant category. They hold Guaranteed Maximum Price Target Cost (GMPTC) contracts, under which CTSPs will be paid the operational costs incurred in performing the services against an annual target set out in the contract, but will only make a profit when savings are delivered, on a gainshare basis that is capped.

Source: NHS Supply Chain - New operating model (2018).[2]

04. There are very limited public details on how the relevant financial savings targets have been set and managed throughout the operation of the system. However, it is clear that CTSPs have financial incentives tied to the generation of savings for SCCL. Given that SCCL does not carry out procurement activities without CTSP involvement, it seems plausible that SCCL’s own targets and claimed savings would (primarily) have been the result of the simple aggregation of those of the CTSPs. If that is correct, the issues identified in the NAO report may have resulted in financial advantages to CTSPs, if they have been allowed to overclaim the savings generated.

05. NHS Supply Chain has publicly stated that[3]:

  • ‘Savings are contractual to the CTSPs. As part of the procurement, bidders were asked to provide contractual savings targets for each year. These were assessed and challenged through the process and are core to the commercial model. CTSPs cannot attain their target margins (i.e. profit) unless they are able to achieve contractual savings.’

  • ‘The CTSPs financial reward mechanism [is] based upon a gain share from the delivery of savings. The model includes savings generated across the total system, not just the price of the product. The level of gain share is directly proportional to the level of savings delivered.’

06. In view of this, if CTSPs had been allowed to use a method of savings calculation that re-counted savings in the way NAO details at para 2.4 of its report, it is likely that their financial compensation will have been higher than it should have been under alternative models of savings calculation that did not allow for such re-count. Given the volumes of savings claimed through the period covered by the report, any potential overcompensation could have been significant. As any such overcompensation would have been covered by NHS funding, the Committee may want to include its consideration within its Inquiry and in its evidence-gathering efforts.
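To illustrate the gainshare mechanism at stake (and only to illustrate it: the actual GMPTC rates and caps are not public, so the 10% rate and £0.5m cap below are purely hypothetical assumptions), here is a sketch of how a re-counted savings total could inflate a capped gainshare payment:

```python
# Hypothetical sketch of a capped gainshare payment under a GMPTC-style
# contract. The 10% rate and £0.5m cap are illustrative assumptions only;
# the actual contractual terms are not publicly available.

def gainshare_payment(claimed_savings_m, rate=0.10, cap_m=0.5):
    """Pay the CTSP a share of the savings it claims, up to a cap (in £m)."""
    return min(rate * claimed_savings_m, cap_m)

# Savings totals (in £m) from the earlier sketch, purely illustrative:
recounted_total = 4.0  # cumulative-baseline method, with re-counting
in_year_total = 2.0    # in-year method, no re-counting

print(gainshare_payment(recounted_total))  # 0.4, ie a £0.4m payment
print(gainshare_payment(in_year_total))    # 0.2, ie a £0.2m payment
```

On these made-up numbers, the re-counted total doubles the payment to the CTSP; whether anything of that sort happened in practice is precisely the question the Committee could probe.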

__________________________________

[1] For a detailed account, see A Sanchez-Graells, “Centralisation of procurement and supply chain management in the English NHS: some governance and compliance challenges” (2019) 70(1) Northern Ireland Legal Quarterly 53-75.

[2] Available at https://wwwmedia.supplychain.nhs.uk/media/Customer_FAQ_November_2018.pdf (last accessed 12 January 2024).

[3] Ibid, FAQs 24 and 25.

Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law

Now that the (more than likely) final text of the EU AI Act is available, and building on the analysis in my now officially published new monograph Digital Technologies and Public Procurement (OUP 2024), I have put together my assessment of its impact on the procurement of AI under EU law and uploaded the new paper on SSRN: ‘Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law’. The abstract is as follows:

EU Member States are increasingly experimenting with Artificial Intelligence (AI), but the acquisition and deployment of AI by the public sector is currently largely unregulated. This puts public procurement in the awkward position of a regulatory gatekeeper—a role it cannot effectively carry out. This article provides an overview of recent EU developments on the public procurement of AI. It reflects on the narrow scope of application and questionable effectiveness of tools linked to the EU AI Act, such as technical standards or model contractual clauses, and highlights broader challenges in the use of procurement law and practice to regulate the adoption and use of ‘trustworthy’ AI by the public sector. The paper stresses the need for an alternative regulatory approach.

The paper can be freely downloaded: A Sanchez-Graells, ‘Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law’ (January 25, 2024). To be published in LTZ (Legal Tech Journal) 2/2024: https://ssrn.com/abstract=4706400.

As this will be an area of contention and continuous developments, comments most welcome!


Implementation Challenges for the Procurement Act 2023

I have put together a consolidated overview of the primary challenges for the implementation of the Procurement Act 2023, to be included as a country report in a forthcoming issue of the European Procurement & Public Private Partnership Law Review.

It brings together developments discussed in the blog over the last year or so, including the transparency ambition, the innovation ambition, and the training offer linked to the Transforming Public Procurement project.

In case of interest, it can be downloaded from SSRN: https://ssrn.com/abstract=4692660.

It contains nothing new, though, so assiduous readers may want to skip this one!

Resh(AI)ping Good Administration: Addressing the mass effects of public sector digitalisation

Happy New Year! I hope 2024 is off to a good start for you.

My last project of last year (finished on the buzzer…) was a paper expanding the ideas first floated in the DigiCon blog post ‘Resh(AI)ping good administration: beyond systemic risks vs individual rights?’, which sparked interesting discussion at the DigiCon III conference last fall.

With a slightly different (and hopefully clearer) title, the paper is now under peer-review (and so, as always, comments welcome ahead of a final revision!).

Titled ‘Resh(AI)ping Good Administration: Addressing the mass effects of public sector digitalisation’, the paper focuses on what I think is the most distinctive feature of public sector digitalisation and the prime challenge to traditional good administration guarantees: mass effects. Its abstract is as follows:

Public sector digitalisation is transforming public governance at an accelerating rate. Digitalisation is outpacing the evolution of the legal framework. Despite several strands of international efforts to adjust good administration guarantees to new modes of digital public governance, progress has so far been slow and tepid. The increasing automation of decision-making processes puts significant pressure on traditional good administration guarantees, jeopardises individual due process rights, and risks eroding public trust. Automated decision-making has so far attracted the bulk of scholarly attention, especially in the European context. However, most analyses seek to reconcile existing duties towards individuals under the right to good administration with the challenges arising from digitalisation. Taking a critical and technology-centred doctrinal approach to developments under the law of the European Union and the Council of Europe, this paper goes beyond current debates to challenge the sufficiency of existing good administration duties. By stressing the mass effects that can derive from automated decision-making by the public sector, the paper advances the need to adapt good administration guarantees to a collective dimension through an extension and a broadening of the public sector’s good administration duties: that is, through an extended ex ante control of organisational risk-taking, and a broader ex post duty of automated redress. These legal modifications should be urgently implemented.

Sanchez-Graells, Albert, ‘Resh(AI)ping Good Administration: Addressing the mass effects of public sector digitalisation’ (December 19, 2023). Available at SSRN: https://ssrn.com/abstract=4669589.

Responsibly Buying Artificial Intelligence: A ‘Regulatory Hallucination’ -- draft paper for comment

© Matt Lowe/LinkedIn.

Following yesterday’s Current Legal Problems Lecture, I have uploaded the current full draft of the paper on SSRN. I would be very grateful for any comments in the next few weeks, as I plan to do a final revision and to submit it for peer-review in early 2024. Thanks in advance to those who take the time. As always, you can reach me at a.sanchez-graells@bristol.ac.uk.

The abstract of the paper is as follows:

Here, I focus on the UK’s approach to regulating public sector procurement and use of artificial intelligence (AI) in the context of the broader ‘pro-innovation’ approach to AI regulation. Borrowing from the description of AI ‘hallucinations’ as plausible but incorrect answers given with high confidence by AI systems, I argue that UK policymaking is trapped in a ‘regulatory hallucination.’ Despite having embraced the plausible ‘pro-innovation’ regulatory approach with high confidence, that is the incorrect answer to the challenge of regulating AI procurement and use by the public sector. I conceptualise the current strategy as one of ‘regulation by contract’ and identify two of its underpinning presumptions that make its deployment in the digital context particularly challenging. I show how neither the presumption of superiority of the public buyer over the public contractor, nor the related presumption that the public buyer is the rule-maker and the public contractor is the rule-taker, necessarily hold in this context. Public buyer superiority is undermined by the two-sided gatekeeping required to simultaneously discipline the behaviour of the public sector AI user and the tech provider. The public buyer’s rule-making role is also undermined by its reliance on industry-led standards, as well as by the tech provider’s upper hand in setting contractual benchmarks and controlling the ensuing self-assessments. In view of the ineffectiveness of regulating public sector AI use by contract, I then sketch an alternative strategy to boost the effectiveness of the goals of AI regulation and the protection of individual rights and collective interests through the creation of an independent authority.

Sanchez-Graells, Albert, ‘Responsibly Buying Artificial Intelligence: A “Regulatory Hallucination”’ (November 24, 2023). Current Legal Problems 2023-24, Available at SSRN: https://ssrn.com/abstract=4643273.

Responsibly Buying Artificial Intelligence: A Regulatory Hallucination?

I look forward to delivering the lecture ‘Responsibly Buying Artificial Intelligence: A Regulatory Hallucination?’ as part of the Current Legal Problems Lecture Series 2023-24 organised by UCL Laws. The lecture will be this Thursday 23 November 2023 at 6pm GMT and you can still register to participate (either online or in person). These are the slides I will be using, in case you want to take a sneak peek. I will post a draft version of the paper after the lecture. Comments welcome!

Public procurement (entry for an Encyclopaedia)

I was invited to provide an entry on ‘public procurement’ for the forthcoming Elgar Encyclopedia of European Law co-edited by Andrea Biondi and Oana Stefan. I must say I struggled to decide what to write about, as the entry was limited to 4,000 words and there are so many (!!) things going on in procurement. Below is my draft entry with perhaps an eclectic choice of content. Comments most welcome!

The draft entry is also available on SSRN if you prefer a pdf version: A Sanchez-Graells, ‘Public procurement’ in A Biondi and O Stefan, Elgar Encyclopedia of European Law (forthcoming) available at https://ssrn.com/abstract=4621399.

Public Procurement

I. Introduction

From up close, public procurement law can be seen as the set of mostly procedural rules controlling the way in which the public sector buys goods, services, and works from the market. Procurement would thus be a set of administrative law requirements concerned with the design and advertisement of tenders for public contracts, the decision-making process leading to the award of those contracts, and the advertisement and potential challenge of such decisions. To a more limited extent, some requirements would extend to the contract execution phase, and control in particular the modification and eventual termination of public contracts. From this narrow perspective, procurement would be primarily concerned with ensuring the integrity and probity of decision-making processes involving the management of public funds, as well as fostering the generation of value for money through effective reliance on competition for public contracts.

The importance and positive contribution of public procurement law to the adequate management of public funds may seem difficult to appreciate in ordinary times, and there are recurrent calls for a reduction of the administrative burden and bureaucracy related to procurement procedures, checks and balances. However, as the pervasive abuses of direct awards under the emergency conditions generated by the covid pandemic evidenced in virtually all jurisdictions, dispensing with those requirements, checks and balances comes with a very high price tag for taxpayers in terms of corruption, favouritism, and wastage of public funds.

Even from this relatively narrow perspective of procurement as a process-based mechanism of public governance, procurement attracts a significant amount of attention from EU legislators and from the EU Courts and is an area of crucial importance in the development of the European administrative space. As procurement regulation has been developed through successive generations of directives, and as many Member States had long traditions on the regulation of public procurement prior to the emergence of EU law on the topic, procurement offers a fertile ground for comparative public law scholarship. More recently, as EU procurement policy increasingly seeks to promote cross-border collaboration, procurement is also becoming a driver (or an irritant) for the transnational regulation of administrative processes and a living lab for experimentation and legal innovation.

From a slightly broader perspective, public procurement can be seen as a tool for the self-organisation of the State and as a primary conduit for the privatisation and outsourcing of State functions. A decision preceding procurement concerns the size and shape of the State, especially in relation to which functions and activities the State carries out in-house (including through public-public collaboration mechanisms), and which others are contracted out to the market (‘make or buy’ decisions). Procurement then controls the design and award of contracts involving the exercise of public powers, or the direct provision of public services to citizens where market agents are called upon to do so (including in the context of quasi-markets). Procurement thus heavily influences the interaction between the State’s contractual agents and citizens, and becomes a tool for the regulation of public service delivery. The more the State relies on markets for the provision of public services, the larger the potential influence (both positive and negative) of procurement mechanisms on citizens’ experience of their (indirect) interaction with the State. On this view, procurement is a tool of public governance and a conduit for public-private cooperation, as well as a regulatory mechanism for delegated public-public and public-private interactions. From this perspective, procurement is often seen as a neoliberal tool closely linked to new public management (NPM), although it should be stressed that procurement rules only activate once the decision to resort to contracting out or outsourcing has been made, as EU law does not mandate ‘going to market’.

From an even broader perspective, public procurement represents a more complex and multi-layered regulatory instrument. Given the enormous amounts of public funds channelled through public procurement, and the market-shaping effects that can follow from the exercise of such buying power, procurement regulation is often used as a lever for the promotion of policies and goals well beyond the narrower confines of procurement as a regulated administrative process. In the EU, procurement has always been an instrument of internal market regulation and sought to dismantle barriers to cross-border competition for the award of public contracts. More recently, and in line with developments in other jurisdictions, procurement has been increasingly singled out as a tool to promote environmental and sustainability goals, as well as social goals, or as a tool to foster innovation. Procurement is also increasingly identified as a tool to foster compliance with human rights along increasingly complex supply chains, or to address social inequality, such as through gender responsive procurement. In the face of the challenges posed by the mainstreaming of digital technologies, and artificial intelligence in particular, procurement is also increasingly identified as a tool of digital regulation. And, against the background of rule of law challenges within the EU, procurement conditionality has added to the fiscal control effect traditionally linked to the use of EU funds to subsidise procurement projects at Member State level. From this perspective, procurement is either an enforcement (or reinforcement) mechanism, or a self-standing regulatory tool for the pursuit of an increasingly diverse array of horizontal policies seeking to steer market activities.

Relatedly, given the importance of procurement as an economic activity, its regulation is of crucial importance in the context of industrial and trade policies. The interaction between procurement and industrial policy is not entirely straightforward, and neither is the position of procurement in the context of trade liberalisation. While there have been waves of policy efforts seeking to minimise the use of procurement for industrial policy purposes (ie the award of public contracts to national champions), in particular given the State aid implications of such uses of public contracts under EU law, and while there is a general push for the liberalisation of international trade through procurement, there are also periodic waves of protectionism where procurement is used as a tool of international economic regulation or, more broadly, geopolitics. Most recently, the EU has aggressively (re)regulated access to its procurement markets on grounds of such considerations.

It would be impossible to address all the issues that arise from the regulation of public procurement in all these (and other potential) dimensions within a single entry. Here, I will touch upon some of the issues highlighted by recent developments in EU law and policy, and in relation to contemporary debates around the salient grand challenges encapsulated in the need for procurement to support the ‘twin transition’ to green and digital. I will not focus on the detail of procurement rules, which is better left to in-depth analysis (eg Arrowsmith [2014] and [2018], Steinicke and Vesterdorf [2018], or Caranta and Sanchez-Graells [2021]). There are a few common threads in the developments discussed below, especially in relation to the increasing complexity of procurement policymaking and administration, the crucial role of expertise and capability, and the challenges of coordinating them in a way that generates meaningful outcomes. I will briefly return to these issues in the conclusion.

II. Procurement, Trade, and Geopolitics

A constant tension in the regulation of procurement concerns the openness of procurement markets. On the one hand, procurement can be a catalyst for trade liberalisation and there are many economic advantages stemming from increased (international) competition for public contracts—as evidenced in the context of the World Trade Organisation Government Procurement Agreement (WTO GPA) (Georgopoulos et al [2017]). In the narrower context of the EU’s internal market, public procurement openness is taken to its logical extremes and barriers to cross-border tendering are systematically dismantled through legislation, such as the most recent 2014 Public Procurement Package, and its interpretation by the Court of Justice. While there is disparity in national practice, the (complete) openness of procurement markets in the EU tends to not only benefit EU tenderers, but also those of third countries, who tend to be treated equally with EU ‘domestic’ tenderers.

On the other hand, the same (international) competition that can bring economic advantages can also put pressure on (less competitive) domestic industries or create risks of an uneven playing field—especially where (foreign national champion) tenderers are propped up by their States. In some industries and in relation to some critical infrastructure, the award of oftentimes large and sensitive public contracts to foreign undertakings also generates concerns around safety and sovereignty.

A mechanism to mediate this tension is to make procurement-related trade liberalisation conditional on reciprocity, which in turn leverages multilateral instruments such as the WTO GPA. This is an area where EU law has recently generated significant developments. After protracted negotiations, EU procurement law now comprises a set of three instruments seeking to rebalance the (complete) openness of EU procurement markets.

As a starting point, under EU law, only foreign economic operators covered by an existing international agreement (such as the WTO GPA, or bilateral or multilateral trade agreements concluded with the EU that include commitments on access to public procurement) are entitled to equal treatment. However, differential treatment or outright exclusion of economic operators not covered by such equal treatment obligation tends (or has historically tended) to be rare. This can be seen to weaken the hand of the European Commission in international negotiations, as EU procurement markets are de facto almost entirely open, regardless of the much more limited legal openness resulting from those international agreements.

To nudge contracting authorities to enforce differential treatment, in 2020, the European Commission issued guidance on the participation of third country bidders and goods in EU procurement markets, stressing the several ways in which public buyers could address concerns regarding unfair competitive advantages of foreign tenderers. This should be seen as a first step towards ramping up the ‘rebalancing’ of access to EU procurement markets, though it is a soft (law) step and one that would still hinge on coordinated decision-making by a very large number of public buyers making tender-by-tender decisions.

A second and crucial step was taken in 2022 with the adoption of the EU’s International Procurement Instrument (IPI), which empowers the European Commission to carry out investigations where there are concerns about measures or practices negatively affecting the access of EU businesses, goods and services to non-EU procurement markets and, eventually, to impose (centralised) IPI measures to restrict access to EU public procurement procedures for businesses, goods and services from the non-EU countries concerned. The main effect of the IPI can be expected to be twofold. Outwardly, the IPI will lead to the European Commission having ‘a stick’ to push for reciprocity in procurement liberalisation as a complement to ‘the carrot’ used to persuade more and more countries to enter into bilateral trade deals, or for them to join the WTO GPA. Internally, the IPI will allow the Commission to mandate Member States to implement the relevant restrictions or exclusions from the EU procurement markets in relation to the jurisdictions concerned. This is expected to address the issue of de facto openness beyond existing (international) legal requirements, and therefore galvanise the ability of the Commission to control access to ‘the EU procurement market’ and thus bolster its ability to use procurement reciprocity as a tool for trade liberalisation more effectively.

A third and final crucial step came with the adoption in 2023 of the Regulation on foreign subsidies distorting the internal market, which creates a mechanism for the control of potential foreign subsidies in tenders for contracts with an estimated value above EUR 250 million, and can also result in the imposition of (centralised) measures curbing access to the relevant contracts by the beneficiaries of those foreign subsidies. This creates something of an international functional equivalent to the State aid control in place for domestic tenders, as well as a mechanism for the EU to enforce international anti-dumping standards within its own jurisdiction.

This trend of evolution in EU public procurement regulation evidences that public buyers are increasingly constrained by geopolitical and international economic considerations administered by the European Commission in a centralised manner (Andhov and Kania [2023]). Whether this will create friction between the Commission and Member States, perhaps in relation to particularly critical or sensitive procurement projects, remains to be seen. In any case, this line of policy and legal developments generates increased complexity in the administration of procurement processes on a day-to-day basis, and will require public buyers to develop expertise in the assessment of the relevant trade-related instruments and associated documentation, which will be a theme in common with other developments discussed below.

III. Procurement and Sustainability

It is relatively uncontroversial that public expenditure has a crucial role to play in supporting (or driving) the transition towards a more sustainable economy, and most jurisdictions explicitly consider how to harness public expenditure to decarbonise their economy and achieve net zero targets—sometimes in the broader context of efforts to achieve interlinked sustainable development goals. However, the details on the specific sustainability goals to be pursued through procurement (as compared to other means of public finances, such as subsidies or tax incentives), and on how to design and implement sustainable procurement are more contested.

Green procurement has been a primary focus of EU public procurement policy for a long time now, and it has received even further increased attention in recent years, culminating in the attribution of a prominent role for the implementation of the EU’s Green Deal. EU procurement law has been increasingly permissive and facilitative of the inclusion of environmental considerations in procurement decision-making and the European Commission has developed sets of guidance and technical documentation that are kept under permanent review and update. Overall, EU procurement law offers a diverse toolkit for public buyers to embed sustainability requirements.

However, the uptake of green procurement is much lower than would be desirable and progress is very uneven across jurisdictions and in different sectors of the economy. There is a growing realisation that facilitative or permissive approaches will not result in the quick generalisation of sustainability concerns across procurement practice required to contribute to mitigating the devastating effects of climate change in a timely fashion, or with sufficient scale. Informational and skills barriers, difficult economic assessments and competing (political) priorities necessarily slow down the uptake of sustainable procurement. In this context, it seems clear that technical complexity in the administration of procurement on a day-to-day basis, and limited technical skills in relation to sustainability assessments, are the primary obstacles on the road to mainstreaming sustainable public procurement. It is hard for public buyers to identify the relevant sustainability requirements and to embed them in their decision-making, especially where the inclusion of such requirements is bound to be checked against its suitability, proportionality, and its effect on potential competition for the relevant public contract.

To overcome this obstacle, it seems clear that a more proactive or prescriptive approach is required and that sustainability requirements must be embedded in legislation that binds public buyers—so that their role becomes one of (reinforced) compliance assessment or indirect enforcement. The question that arises, and which reopens age old discussions, is whether such legislation should solely target public procurement (Janssen and Caranta [2023]) or rather be of general application across the economy (Halonen [2021]).

This controversy evidences different understandings of the role of procurement-specific legislation and different levels of concern with the partitioning of markets. While the passing of procurement-specific legislation could be easier and politically more palatable—as it would be perceived to ultimately impose the relevant burden on economic operators seeking to gain public business (and so embed a certain element of opt-in or balanced regulatory burden against the prospect of accessing public funds), and the cost would ultimately fall on public buyers as ‘responsible (sustainable) buyers’—it would partition markets and eg potentially prevent the generation of economies of scale where public demand is not majoritarian. Moreover, such market partitioning would raise entry barriers for entities new to bidding for public contracts, and would facilitate the emergence of anticompetitive and collusive practices in ‘public markets’ that are more concentrated and partly isolated from potential competition (Sanchez-Graells [2015]), in ways that general legislation would not. More generally, advances in mandating sustainable procurement could deactivate the pressure for developments in more general sustainability mandates, as policymakers could claim to already be making significant efforts (in the narrow setting of procurement).

A narrow sectoral approach to legislating for public procurement only would probably also over-rely on the hope that procurement practices can become best practices and thus disseminate across the economy through some form of mimicking, or a race to the top. This relates to discussions in other areas and to the broader expectation that procurement can be a trend setter and influence industry practice and standards. However, as the discussion on digitalisation will show, the direction of influence tends to run in reverse, and there are very limited mechanisms to promote or force industry adaptation to procurement standards other than in relation to direct access to procurement.

IV. Procurement and the ‘Digital Transformation’ of the State

Another area of growing consensus is that public procurement has a key role to play in the ‘digital transformation’ of the State, as the process of digitalisation is bound to rely on the acquisition of technology from market providers to a large or sole extent (depending on each jurisdiction’s make or buy decisions). This can in turn facilitate the role of procurement as a tool of digital industrial policy, especially because procurement expenditure can be a way of ensuring demand for innovation, and because public sector technology adoption can be used as a domain for experimentation with new technologies and new forms of technology-enabled governance.

The European Union has set very high expectations in its Digital Agenda 2030, and the Commission has recently stressed that achieving them would require roughly doubling the predicted level of public procurement expenditure in digital technologies, and artificial intelligence (AI) in particular. It can thus be expected that the procurement of digital technologies will quickly gain practical importance even in jurisdictions that have been lagging so far.

However, echoing some of the issues concerning sustainable procurement, in this second stream of the ‘twin transition’, the uptake of procurement of digital technologies is slowed down by the complexity of procuring unregulated immature technologies, and the (digital) skills gaps in the public sector—which are exacerbated by the absence of a toolkit of regulatory and practical resources equivalent to that of green procurement. In such a context of technological fluidity and hype, given the skills and power imbalances between technology providers and public buyers, the shortcomings of the use of public procurement as a regulatory mechanism become stark and the flaws in the logic or expectation that procurement can be an effective tool of market steering are laid bare (Sanchez-Graells [2024]).

Public buyers are expected to act as responsible AI buyers and to ensure the ‘responsible use of AI’ in the public sector. The EU AI Act will soon establish specific requirements in that regard, although solely in relation to high-risk AI uses as defined therein. Implementing the requirements of the EU AI Act—and their extension to other types of uses of digital technology or algorithms as a matter of ‘best practice’—will leverage procurement processes and, in particular, the ensuing public contracts to impose the relevant obligations on technology providers. In that connection, the European Commission has promoted the development of model contractual AI clauses that seek to regulate the technology to be procured and their future use by the relevant public sector deployer.

However, an analysis of the model clauses and broader guidance on the procurement of AI shows that public buyers will still face a very steep knowledge gap as it will be difficult to set the detail of the relevant contracts, which will tend to be highly context dependent. In other words, the model clauses are not ‘plug and play’ and implementing meaningful safeguards in the procurement and use of AI and other digital technologies will require advanced digital skills and sufficient commercial leverage—which are not to be taken as a given. Crucially, all obligations under the model clauses (and the EU AI Act itself) hinge on (self-assessment) processes controlled by the technology provider and/or refer back to technical standards or the state-of-the-art, which are driven and heavily influenced (or entirely controlled) by the technology industry. Public buyers are at a significant disadvantage not only to set, but also to monitor compliance with relevant requirements.

This shows that, in the absence of mandatory requirements and binding (general) legislation, the use of procurement for regulatory purposes has a high risk of commercial determination and regulatory tunnelling as public buyers with limited skills and capabilities struggle to impose requirements on technology providers, and where references to standards also displace regulatory decision-making. This means that public procurement can no longer be expected to ‘monitor itself’, and that new forms of institutional oversight are required to ensure that the procurement of digital technologies works in the broader public interest.

V. Conclusion

Although the issues discussed above may seem rather disparate, they share a few common threads. First, in all areas, the regulatory use of procurement generates complexity and makes the day-to-day administration of procurement processes more complex. It can be hard for a public buyer to navigate socio-political, sustainability and digitalisation concerns—and these are only some of the ‘non-strictly procurement-related’ concerns and considerations to be taken into account. Such difficulty can be compounded by limited capabilities and by gaps in the required skills. While this is particularly clear in the digital context, the issue of limited (technical) capability is also highly relevant in relation to sustainable procurement. An imbalance in skills and commercial leverage between the public buyer and technology providers undermines the logic of using procurement as a regulatory tool. Implementation issues thus require much further thought and investment than they currently receive.

Ultimately, the effectiveness of the regulatory goals underpinning the leveraging of procurement hinges on the ability of public buyers to meaningfully implement them. This raises the further question of whether all goals can be achieved at the same time, especially where there can be difficult trade-offs. And there can be many of those. For example, it can well be that the offeror of the most attractive technology comes from a ‘black-listed’ jurisdiction. It can also be that the most attractive technology is also the most polluting, or one that raises significant other risks or harms from a social perspective, etc. Navigating these risks and making the (implicit) political choices may be too taxing a task for public buyers, as well as raising issues of democratic accountability more generally. Moreover, enabling public buyers to deal with these issues and to exercise judgement and discretion reopens the door to risks of eg bias, capture or corruption, as well as maladministration and error, which are some of the core concerns in the narrow approach to the regulation of procurement as an administrative procedure to begin with. Those trade-offs are also pervasive and hard to assess.

It is difficult to foresee the future, but my intuition is that the trend of piling regulatory goals on procurement’s shoulders will need to slow down or reverse if procurement is meant to remain operational, and that a return to a more pared down understanding of the role of procurement will need to be enabled by the emergence of (generally applicable) legislation and external oversight mechanisms that can discharge procurement of these regulatory roles. Or, at least, that is the way I would like to see the broader regulation and policymaking around procurement evolve.

Bibliography

Andhov, Marta and Michal Andrzej Kania, ‘Restricting Freedom of Contract – the EU Foreign Subsidies Regulation and its Consequences for Public Procurement’ (2023) Journal of Public Procurement.

Arrowsmith, Sue, The Law of Public and Utilities Procurement. Regulation in the EU and the UK, vols 1 & 2 (3rd edn, Sweet & Maxwell 2014 and 2018).

Caranta, Roberto and Albert Sanchez-Graells (eds), European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar 2021).

Georgopoulos, Aris, Bernard Hoekman and Petros C Mavroidis (eds), The Internationalization of Government Procurement Regulation (OUP 2017).

Halonen, Kirsi-Maria, ‘Is public procurement fit for reaching sustainability goals? A law and economics approach to green public procurement’ (2021) 28(4) Maastricht Journal of European and Comparative Law 535-555.

Janssen, Willem and Roberto Caranta (eds), Mandatory Sustainability Requirements in EU Public Procurement Law. Reflections on a Paradigm Shift (Hart 2023).

Sanchez-Graells, Albert, Public Procurement and the EU Competition Rules (2nd edn, Hart 2015).

Sanchez-Graells, Albert, Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance (OUP 2024).

Steinicke, Michael and Peter L Vesterdorf (eds), Brussels Commentary on EU Public Procurement Law (C H Beck, Hart & Nomos 2018).

Innovation procurement under the Procurement Act 2023 -- changing procurement culture on the cheap?

On 13 November 2023, the UK Government published guidance setting out its ambitions for innovation procurement under the new Procurement Act 2023 (not yet in force, of which you can read a summary here). This further expands on the ambitions underpinning the Transforming Public Procurement project that started after Brexit. The Government’s expectation is that ‘the new legislation will allow public procurement to be done in more flexible and innovative ways’, and that this will ‘enable public sector organisations to embrace innovation more’.

The innovation procurement guidance bases its expectation that the Procurement Act will unlock more procurement of innovation and more innovative procurement on the ambition that the policy will be actively supported by all relevant policy- and decision-makers and that there will be advocacy for the development of commercial expertise. A first hurdle here is that, unless such advocacy comes with the investment of significant funds in developing skills (and this relates to both commercial and technical skills, especially where the innovation relates to digital technologies), such high-level political buy-in may not translate into any meaningful changes. The guidance itself acknowledges that the ‘overall culture, expertise and incentive structure of the public sector has led to relatively low appetite for risk and experimentation’. Therefore, that greater investment in expertise needs to be coupled with a culture change. And we know this is a process that is very difficult to push forward.

The guidance also indicates that ‘Greater transparency of procurement data will make it easier to see what approaches have been successful and encourage use of those approaches more widely across the public sector.’ This potentially points to another hurdle in unlocking this policy, because generic data is not enough to support innovation procurement or the procurement of innovation. Being able to successfully replicate innovation procurement practices requires a detailed understanding of how things were done, and how they need to be adapted when replicated. However, the new transparency regime does not necessarily guarantee that such granular and detailed information will be available, especially as the practical level of transparency that will stem from the new obligations crucially hinges on the treatment of commercially sensitive information (which is exempted from disclosure in s.94 PA 2023). Unless there is clear guidance on the disclosure or withholding of sensitive commercial information, it can well be that the new regime does not generate additional meaningful (publicly accessible) data to expand the knowledge stock and support innovative procurement. This is an important issue that may require further discussion in a separate post.

The guidance indicates that the changes in the Procurement Act will help public buyers in three ways:

  • The new rules focus more on delivering outcomes (as opposed to ‘going through the motions’ of a rigid process). Contracting authorities will be able to design their own process, tailored to the unique circumstances of the requirement and, most importantly, those who are best placed to deliver the best solution.

  • There will be clearer rules overall and more flexibility for procurers to use their commercial skills to achieve the desired outcomes.

  • Procurers will be able to better communicate their particular problem to suppliers and work with them to come up with potential solutions. Using product demonstrations alongside written tenders will help buyers get a proper appreciation of solutions being offered by suppliers. That is particularly impactful for newer, more innovative solutions which the authority may not be familiar with.

Although the guidance document indicates that the ‘new measures include general obligations, options for preliminary market engagement, and an important new mechanism, the Competitive Flexible Procedure’, in practice, there are limited changes to what was already allowed in terms of market consultation, and the general obligations—to eg publish a pipeline notice (for contracting authorities with an annual spend over £100 million), or to ‘have regard to the fact that SMEs face barriers to participation and consider whether these barriers can be removed or reduced’—are also marginal (if at all) changes from the still current regime (see regs.48 and 46 PCR 2015). Therefore, it all boils down to the new ‘innovation-friendly procurement processes’ enabled by the flexible (under)regulation of the competitive flexible procedure (s.20 PA 2023).

The guidance stresses that the ‘objective is that the Competitive Flexible Procedure removes some of the existing barriers to procuring new and better solutions and gives contracting authorities freedom to enable them to achieve the best fit between the specific requirement and the best the market offers.’ The example in the guidance sets out the skeleton structure of a three-phase procedure involving an initial ideas and feasibility phase (phase 1), an R&D and prototype phase (phase 2), and a final tender leading to the award of a production/service contract (phase 3). At this level of generality, there is little to distinguish this from a competitive dialogue under the current rules (reg.30 PCR 2015). The devil will be in the detail.

Moreover, as repeatedly highlighted since the initial consultations, the under-regulation of the competitive flexible procedure will raise the information costs and risks of engaging with innovation procurement, as each new approach taken by a contracting authority will require a significant investment of time in its design, as well as carrying an unavoidable risk of challenge. The incentives are not particularly geared towards facilitating risk-taking. And any more detailed guidance on ‘how to’ carry out an innovative competitive flexible procedure will simply replace regulation and become a de facto standard, through which contracting authorities may take the same ‘going through the motions’ approach as the process detailed in the guidance rigidifies.

The guidance acknowledges this, at least partially, when it stresses that ‘Behavioural changes will make the biggest difference’. Such behavioural changes will be supported through training, which the guidance document also describes (and there is more detail here). The training offered will consist of:

  • Knowledge drops (open to everyone): An on-demand, watchable resource up to a maximum of 45 minutes in total, providing an overview of all of the changes in legislation.

  • E-learning (for skilled practitioners within the public sector only): a self-guided learning & development course which ‘consists of 10 1-hour modules and concludes with a skilled practitioner certification’.

  • Advanced course deep dives (for public sector expert practitioners only): a ‘3-day, interactive, instructor-led course. It consists of virtual ‘deep dive’ webinars, which allow learners to engage with subject matter experts. This level of interaction allows a deeper insight across the full spectrum of the legislative change and support[s] ‘hearts and minds’ change amongst the learner population (creating ‘superusers’)’.

  • Communities of practice (for skilled and expert practitioners only): ‘a system of collective critical inquiry and reflection into the regime changes. Supported by the central team and superusers, they will support individuals to embed what they have learned.’

As an educator, and based on my experience of training expert professionals in complex procurement, I am skeptical that this amount of training can lead to meaningful change. The 45-minute resource can hardly cover the entirety of the changes in the Procurement Act, and even the 10-hour course reserved for public sector practitioners will be quite limited in how far it can go. Three days of training are also insufficient to go much further than exploring a few examples in meaningful detail. And this is relevant because that training is not only for innovation procurement, but for all types of ‘different’ procurement under the Procurement Act 2023 (ie green, social, more robustly anti-corruption, more focused on contract performance, etc). Shifting culture and practice would require a lot more than this.

It is also unclear why this (minimal) investment in public sector understanding of the procurement framework has not taken place earlier. As I already said in the consultation, all of this could have taken place years ago and a better understanding of the current regime would have led to improvements in the practice of innovative procurement in the UK.

All in all, it seems that the aspirations of more innovation procurement and more innovative procurement are pinned on a rather limited amount of training and on (largely voluntary, in addition to the day job) collaboration by super-user experienced practitioners (who will probably see their scarce skills in high demand). It is unclear to me how this will be a game changer, especially as most of this (and in particular collaboration and voluntary knowledge exchange) could already take place. It may be that more structure and coordination will bring better outcomes, but this would require adequate and sufficient resourcing.

Whether there will be more innovation procurement then depends on whether more money will be put into procurement structures and support. From where I stand, this is by no means a given. I guess we’ll have to wait and see.

Some thoughts on the US' Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On 30 October 2023, President Biden adopted the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the ‘AI Executive Order’, see also its Factsheet). The use of AI by the US Federal Government is an important focus of the AI Executive Order. It will be subject to a new governance regime detailed in the Draft Policy on the use of AI in the Federal Government (the ‘Draft AI in Government Policy’, see also its Factsheet), which is open for comment until 5 December 2023. Here, I reflect on these documents from the perspective of AI procurement as a major plank of this governance reform.

Procurement in the AI Executive Order

Section 2 of the AI Executive Order formulates eight guiding principles and priorities in advancing and governing the development and use of AI. Section 2(g) refers to AI risk management, and states that

It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans. These efforts start with people, our Nation’s greatest asset. My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines — including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields — and ease AI professionals’ path into the Federal Government to help harness and govern AI. The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used.

Section 10 then establishes specific measures to advance Federal Government use of AI. Section 10.1(b) details a set of governance reforms to be implemented in view of the Director of the Office of Management and Budget (OMB)’s guidance to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. Section 10.1(b) includes the following (emphases added):

The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law:

(i) the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI …;

(ii) the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures;

(iii) for [covered] agencies […], the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency;

(iv) required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI;

(v) specific Federal Government uses of AI that are presumed by default to impact rights or safety;

(vi) recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes;

(vii) requirements that [covered] agencies […] develop AI strategies and pursue high-impact AI use cases;

(viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:

(A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;

(B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;

(C) reasonable steps to watermark or otherwise label output from generative AI;

(D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;

(E) independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings;

(F) documentation and oversight of procured AI;

(G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;

(H) provision of incentives for the continuous improvement of procured AI; and

(I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and

(ix) requirements for public reporting on compliance with this guidance.

Section 10.1(b) of the AI Executive Order establishes two sets of requirements.

First, there are internal governance requirements, which revolve around the appointment of Chief Artificial Intelligence Officers (CAIOs) and AI Governance Boards, their roles, and support structures. This set of requirements seeks to strengthen the ability of Federal Agencies to understand AI and to provide effective safeguards in its governmental use. The crucial set of substantive protections from this internal perspective derives from the required minimum risk-management practices for Government uses of AI, which are directly placed under the responsibility of the relevant CAIO.

Second, there are external (or relational) governance requirements that revolve around the agency’s ability to control and challenge tech providers. This involves the transfer (back to back) of minimum risk-management practices to AI contractors, but also includes commercial considerations. The tone of the Executive Order indicates that this set of requirements is meant to neutralise risks of commercial capture and commercial determination by imposing oversight and external verification. From an AI procurement governance perspective, the requirements in Section 10.1(b)(viii) are particularly relevant. As some of those requirements will need further development with a view to their operationalisation, Section 10.1(d)(ii) of the AI Executive Order requires the Director of OMB to develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with its Section 10.1(b) guidance.

Procurement in the Draft AI in Government Policy

The guidance required by Section 10.1(b) of the AI Executive Order has been formulated in the Draft AI in Government Policy, which offers more detail on the relevant governance mechanisms and the requirements for AI procurement. Section 5 on managing risks from the use of AI is particularly relevant from an AI procurement perspective. While Section 5(d) refers explicitly to managing risks in AI procurement, given that the primary substantive obligations will arise from the need to comply with the required minimum risk-management practices for Government uses of AI, this specific guidance needs to be read in the broader context of AI risk-management within Section 5 of the Draft AI in Government Policy.

scope

The Draft AI in Government Policy relies on a tiered approach to AI risk by imposing specific obligations in relation to safety-impacting and rights-impacting AI only. This is an important element of the policy because these two categories are defined (in Section 6) and in principle will cover pre-established lists of AI use, based on a set of presumptions (Section 5(b)(i) and (ii)). However, CAIOs will be able to waive the application of minimum requirements for specific AI uses where, ‘based upon a system-specific risk assessment, [it is shown] that fulfilling the requirement would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations’ (Section 5(c)(iii)). Therefore, these are not closed lists and the specific scope of coverage of the policy will vary with such determinations. There are also some exclusions from minimum requirements where the AI is used for narrow purposes (Section 5(c)(i))—notably the ‘Evaluation of a potential vendor, commercial capability, or freely available AI capability that is not otherwise used in agency operations, solely for the purpose of making a procurement or acquisition decision’; AI evaluation in the context of regulatory enforcement, law enforcement or national security action; or research and development.

The scope of the policy may thus be under-inclusive, or generate risks of under-inclusiveness at the boundary, in two respects. First, the way AI is defined for the purposes of the Draft AI in Government Policy excludes ‘robotic process automation or other systems whose behavior is defined only by human-defined rules or that learn solely by repeating an observed practice exactly as it was conducted’ (Section 6). This could be under-inclusive to the extent that the minimum risk-management practices for Government uses of AI create requirements that are not otherwise applicable to Government use of (non-AI) algorithms. There is a commonality of risks (eg discrimination, data governance risks) that would be better managed if there were a joined-up approach. Moreover, developing minimum practices in relation to those means of automation would serve to develop institutional capability that could then support the adoption of AI as defined in the policy. Second, the variability in coverage stemming from consideration of ‘unacceptable impediments to critical agency operations’ opens the door to potentially problematic waivers. While these are subject to disclosure and notification to OMB, it is not entirely clear on what grounds OMB could challenge those waivers. This is thus an area where the guidance may require further development. The sketch below illustrates how these carve-outs stack up.
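
To make the cumulative effect of these carve-outs easier to see, here is a minimal sketch (in Python, purely for illustration) of the coverage logic as I read it. All class, field and category names are my own simplification, not terms used in the draft policy.

```python
# Illustrative sketch only: a simplified model of when the Draft AI in
# Government Policy's minimum practices would bind a given AI use.
# All names and categories are my own paraphrase of Sections 5 and 6.
from dataclasses import dataclass

# Narrow purposes excluded from minimum requirements, Section 5(c)(i) (paraphrased)
EXCLUDED_PURPOSES = {
    "vendor_or_capability_evaluation_for_procurement",
    "regulatory_or_law_enforcement_evaluation",
    "research_and_development",
}

@dataclass
class AIUse:
    name: str
    rule_based_only: bool      # Section 6 definitional carve-out
    purpose: str
    safety_impacting: bool     # presumed lists, Section 5(b)(i)
    rights_impacting: bool     # presumed lists, Section 5(b)(ii)
    caio_waiver_granted: bool  # system-specific waiver, Section 5(c)(iii)

def minimum_practices_apply(use: AIUse) -> bool:
    """Return True if the minimum risk-management practices would bind."""
    if use.rule_based_only:
        return False  # outside the policy's definition of AI (Section 6)
    if use.purpose in EXCLUDED_PURPOSES:
        return False  # narrow-purpose exclusion (Section 5(c)(i))
    if not (use.safety_impacting or use.rights_impacting):
        return False  # tiered approach: only these two categories are covered
    if use.caio_waiver_granted:
        return False  # waived by the CAIO (Section 5(c)(iii))
    return True
```

The point the sketch makes visible is that any single carve-out (definitional, purpose-based, or waiver) switches off the whole set of minimum practices for a given use, which is precisely where the risks of under-inclusiveness arise.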

extensions and waivers

In relation to covered safety-impacting or rights-impacting AI (as above), Section 5(a)(i) establishes the important principle that US Federal Government agencies have until 1 August 2024 to implement the minimum practices in Section 5(c), ‘or else stop using any AI that is not compliant with the minimum practices’. This type of sunset clause concerning the currently implicit authorisation for the use of AI is a potentially powerful mechanism. However, the Draft also establishes that such obligation to discontinue non-compliant AI use must be ‘consistent with the details and caveats in that section [5(c)]’, which includes the possibility, until 1 August 2024, for agencies to

request from OMB an extension of limited and defined duration for a particular use of AI that cannot feasibly meet the minimum requirements in this section by that date. The request must be accompanied by a detailed justification for why the agency cannot achieve compliance for the use case in question and what practices the agency has in place to mitigate the risks from noncompliance, as well as a plan for how the agency will come to implement the full set of required minimum practices from this section.

Again, the guidance does not detail on what grounds OMB would grant those extensions, or how long they would be for. There is also a clear interaction between the extension and waiver mechanisms. For example, an agency that saw its request for an extension declined could try to waive that particular AI use—or agencies could simply try to waive AI uses rather than applying for extensions, as the requirements for a waiver seem to be rather different (and potentially less demanding) than those applicable to an extension. In that regard, it seems that waiver determinations are ‘all or nothing’, whereas the system could be more flexible (and protective) if waiver decisions not only needed to explain why meeting the minimum requirements would generate heightened overall risks or pose ‘unacceptable impediments to critical agency operations’, but also had to meet the lower burden of mitigation currently expected in extension applications: a detailed justification of what practices the agency has in place to mitigate the risks from noncompliance, where they can be partly mitigated. In other words, it would be preferable to have a more continuous spectrum of mitigation measures in the context of waivers as well, as the sketch below tries to capture.
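
Purely as an illustration of the asymmetry just described, the following sketch contrasts the informational burden of an extension request with that of a waiver as currently drafted, and with a waiver carrying the additional mitigation detail suggested above. The field names are my own paraphrase, not the draft policy's terms.

```python
# Illustrative sketch only: the different informational burdens of extension
# and waiver requests as drafted, and the 'continuous spectrum' suggested above.
from dataclasses import dataclass, field

@dataclass
class ExtensionRequest:                # Section 5(a)(i), paraphrased
    use_case: str
    infeasibility_justification: str   # why compliance cannot be achieved in time
    interim_mitigations: list[str]     # practices in place to mitigate noncompliance
    full_compliance_plan: str          # how the agency will implement all practices

@dataclass
class WaiverRequest:                   # Section 5(c)(iii), paraphrased
    use_case: str
    system_specific_risk_assessment: str
    # As drafted, nothing further is required: the 'all or nothing' problem.

@dataclass
class WaiverRequestWithMitigation(WaiverRequest):
    # The suggestion above: waivers should also carry the lower burden of
    # mitigation currently expected only of extension applications.
    interim_mitigations: list[str] = field(default_factory=list)
```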

general minimum practices

In relation to both safety-impacting and rights-impacting AI uses, the Draft AI in Government Policy would require agencies to engage in risk management both before and while using AI (a stylised illustration of one of these requirements follows after the two lists below).

Preventative measures include:

  • completing an AI Impact Assessment documenting the intended purpose of the AI and its expected benefit, the potential risks of using AI, and an analysis of the quality and appropriateness of the relevant data;

  • testing the AI for performance in a real-world context—that is, testing under conditions that ‘mirror as closely as possible the conditions in which the AI will be deployed’; and

  • independently evaluating the AI, with the particularly important requirement that ‘The independent reviewing authority must not have been directly involved in the system’s development.’ In my view, it would also be important for the independent reviewing authority not to be involved in the future use of the AI, as its (future) operational interest could also be a source of bias in the testing process and the analysis of its results.

In-use measures include:

  • conducting ongoing monitoring and establishing thresholds for periodic human review, with a focus on monitoring ‘degradation to the AI’s functionality and to detect changes in the AI’s impact on rights or safety’—‘human review, including renewed testing for performance of the AI in a real-world context, must be conducted at least annually, and after significant modifications to the AI or to the conditions or context in which the AI is used’;

  • mitigating emerging risks to rights and safety—crucially, ‘Where the AI’s risks to rights or safety exceed an acceptable level and where mitigation is not practicable, agencies must stop using the affected AI as soon as is practicable’. In that regard, the draft indicates that ‘Agencies are responsible for determining how to safely decommission AI that was already in use at the time of this memorandum’s release without significant disruptions to essential government functions’, but it would seem that this is also a process that would benefit from close oversight by OMB as it would otherwise jeopardise the effectiveness of the extension and waiver mechanisms discussed above—in which case additional detail in the guidance would be required;

  • ensuring adequate human training and assessment;

  • providing appropriate human consideration as part of decisions that pose a high risk to rights or safety; and

  • providing public notice and plain-language documentation through the AI use case inventory—however, this is subject to a large number of caveats (notice must be ‘consistent with applicable law and governmentwide guidance, including those concerning protection of privacy and of sensitive law enforcement, national security, and other protected information’) and more detailed guidance on how to assess these issues would be welcome (if it exists, a cross-reference in the draft policy would be helpful).
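
As a stylised illustration of the monitoring requirement in the first in-use bullet above, the following sketch computes when the next human review would fall due under an ‘at least annually, and after significant modifications’ rule. The function name and parameters are my own invention for illustration; the draft policy prescribes no such mechanics.

```python
# Illustrative sketch only of the 'at least annually, and after significant
# modifications' human review requirement described above.
from datetime import date, timedelta
from typing import Optional

REVIEW_INTERVAL = timedelta(days=365)  # 'at least annually'

def next_human_review(last_review: date,
                      last_significant_change: Optional[date] = None) -> date:
    """Renewed real-world testing and human review are due a year after the
    last review, or immediately upon a significant modification to the AI
    or to the conditions or context in which it is used."""
    if last_significant_change is not None and last_significant_change > last_review:
        return last_significant_change  # the change itself triggers review
    return last_review + REVIEW_INTERVAL

# Example: reviewed on 1 March 2024; model retrained on 1 June 2024
print(next_human_review(date(2024, 3, 1), date(2024, 6, 1)))  # -> 2024-06-01
```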

additional minimum practices for rights-impacting ai

In relation to rights-impacting AI only, the Draft AI in Government Policy would require agencies to take additional measures.

Preventative measures include:

  • taking steps to ensure that the AI will advance equity, dignity, and fairness—including proactively identifying and removing factors contributing to algorithmic discrimination or bias; assessing and mitigating disparate impacts; and using representative data; and

  • consulting and incorporating feedback from affected groups.

In-use measures include:

  • conducting ongoing monitoring and mitigation for AI-enabled discrimination;

  • notifying negatively affected individuals—this is an area where the draft guidance is rather woolly, as it includes a set of complex caveats: individual notice that ‘AI meaningfully influences the outcome of decisions specifically concerning them, such as the denial of benefits’ must only be given ‘[w]here practicable and consistent with applicable law and governmentwide guidance’. Moreover, the draft only indicates that ‘Agencies are also strongly encouraged to provide explanations for such decisions and actions’, but does not require them to do so. In my view, this touches on two of the most important implications for individuals of Government use of AI: the possibility to understand why decisions are made (reason-giving duties), and the burden of challenging automated decisions, which is increased if there is a lack of transparency about the automation. Therefore, on this point, the guidance seems too tepid—especially bearing in mind that this requirement only applies to ‘AI whose output serves as a basis for decision or action that has a legal, material, or similarly significant effect on an individual’s’ civil rights, civil liberties, or privacy; equal opportunities; or access to critical resources or services. In these cases, it seems clear that notice and explainability requirements need to go further.

  • maintaining human consideration and remedy processes—including ‘potential remedy to the use of the AI by a fallback and escalation system in the event that an impacted individual would like to appeal or contest the AI’s negative impacts on them. In developing appropriate remedies, agencies should follow OMB guidance on calculating administrative burden and the remedy process should not place unnecessary burden on the impacted individual. When law or governmentwide guidance precludes disclosure of the use of AI or an opportunity for an individual appeal, agencies must create appropriate mechanisms for human oversight of rights-impacting AI’. This is another crucial area, concerning rights not to be subjected to fully-automated decision-making where there is no meaningful remedy. This is also an area of the guidance that requires more detail, especially as to the adequate balance of burdens where, eg, the agency can automate the undoing of negative effects on individuals identified as a result of challenges by other individuals, or in the context of the broader monitoring of the functioning and effects of the rights-impacting AI. In my view, this would be an opportunity to mandate automation of remediation in a meaningful way.

  • maintaining options to opt-out where practicable.

procurement-related practices

In addition to the need for agencies to be able to meet the above requirements in relation to procured AI—which will in itself create the need to cascade some of the requirements down to contractors, and which will be the object of future guidance on how to ensure that AI contracts align with the requirements—the Draft AI in Government Policy also requires that agencies procuring AI manage risks by:

  • aligning to National Values and Law by ensuring ‘that procured AI exhibits due respect for our Nation’s values, is consistent with the Constitution, and complies with all other applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties’;

  • taking ‘steps to ensure transparency and adequate performance for their procured AI, including by: obtaining adequate documentation of procured AI, such as through the use of model, data, and system cards; regularly evaluating AI-performance claims made by Federal contractors, including in the particular environment where the agency expects to deploy the capability; and considering contracting provisions that incentivize the continuous improvement of procured AI’;

  • taking ‘appropriate steps to ensure that Federal AI procurement practices promote opportunities for competition among contractors and do not improperly entrench incumbents. Such steps may include promoting interoperability and ensuring that vendors do not inappropriately favor their own products at the expense of competitors’ offering’;

  • maximizing the value of data for AI; and

  • responsibly procuring Generative AI.

These high-level requirements are well targeted, and compliance with them would go a long way towards fostering ‘responsible AI procurement’ through adequate risk mitigation, in ways that still allow the procurement mechanism to harness market forces to generate value for money.

However, operationalising these requirements will be complex, and the further OMB guidance will need to be rather detailed and practical. A stylised sketch of what a first operationalisation step could look like follows.
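
Purely for illustration, here is a minimal sketch of such a first step: a simple pre-award checklist derived from the five practices above. The item keys and descriptions are my own paraphrase, not any official OMB structure.

```python
# Illustrative sketch only: one way an agency could start operationalising
# the five procurement-related practices as a pre-award checklist.
PROCURED_AI_CHECKLIST = {
    "values_and_law": "respect for national values; compliance with the Constitution "
                      "and applicable laws (privacy, copyright, civil rights, etc.)",
    "transparency_and_performance": "documentation obtained (model, data, system cards); "
                                    "vendor performance claims evaluated in the deployment "
                                    "environment; continuous-improvement contract provisions",
    "competition": "interoperability promoted; no improper entrenchment of incumbents "
                   "or vendor self-preferencing",
    "data_value": "value of Federal Government data for AI development maximised",
    "generative_ai": "generative AI procured responsibly",
}

def outstanding_items(evidence: dict[str, bool]) -> list[str]:
    """Return the checklist items for which no evidence has yet been recorded."""
    return [item for item in PROCURED_AI_CHECKLIST if not evidence.get(item, False)]

# Example: legal compliance is evidenced, but vendor claims remain untested
print(outstanding_items({"values_and_law": True, "transparency_and_performance": False}))
```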

Final thoughts

In my view, the AI Executive Order and the Draft AI in Government Policy lay the foundations for a significant strengthening of the governance of AI procurement with a view to embedding safeguards in public sector AI use. A crucially important characteristic in the design of these governance mechanisms is that they impose significant duties on the agencies seeking to procure and use AI, and explicitly seek to address risks of commercial capture and commercial determination. Another crucially important characteristic is that, at least in principle, use of AI is made conditional on compliance with a rather comprehensive set of preventative and in-use risk mitigation measures. The general aspects of this governance approach thus offer a very valuable blueprint for other jurisdictions considering how to boost AI procurement governance.

However, as always, the devil is in the details. One of the crucial risks in this approach to AI governance concerns a lack of independence of the entities making the relevant assessments. In the Draft AI in Government Policy, there are some risks of under-inclusion and/or excessive waivers of compliance with the relevant requirements (both explicit and implicit, through protracted processes of decommissioning of non-compliant AI), as well as a risk that ‘practical considerations’ will push compliance with the risk mitigation requirements well past the (ambitious) 1 August 2024 deadline through long or rolling extensions.

To mitigate these risks, the guidance should be much clearer on the role of OMB in extension, waiver and decommissioning decisions, as well as on the specific criteria and limits that should form part of those decisions. Only by ensuring adequate OMB intervention can a system of governance that still does not entirely (organisationally) separate procurement, use and oversight decisions reach the levels of independent verification required not only to neutralise commercial determination, but also operational dependency and the ‘policy irresistibility’ of digital technologies.

Thoughts on the AI Safety Summit from a public sector procurement & use of AI perspective

The UK Government hosted an AI Safety Summit on 1-2 November 2023. A summary of the targeted discussions in a set of 8 roundtables has been published for Day 1, as well as a set of Chair’s statements for Day 2, including considerations around safety testing, the state of the science, and a general summary of discussions. There is also, of course, the (flagship?) Bletchley Declaration, and an introduction to the announced AI Safety Institute (UK AISI).

In this post, I collect some of my thoughts on these outputs of the AI Safety Summit from the perspective of public sector procurement and use of AI.

What was said at the AI Safety Summit?

Although the summit was narrowly targeted to discussion of ‘frontier AI’ as particularly advanced AI systems, some of the discussions seem to have involved issues also applicable to less advanced (ie currently in existence) AI systems, and even to non-AI algorithms used by the public sector. As the general summary reflects, ‘There was also substantive discussion of the impact of AI upon wider societal issues, and suggestions that such risks may themselves pose an urgent threat to democracy, human rights, and equality. Participants expressed a range of views as to which risks should be prioritised, noting that addressing frontier risks is not mutually exclusive from addressing existing AI risks and harms.’ Crucially, ‘participants across both days noted a range of current AI risks and harmful impacts, and reiterated the need for them to be tackled with the same energy, cross-disciplinary expertise, and urgency as risks at the frontier.’ Hopefully, then, some of the rather far-fetched discussions of future existential risks can be conducive to taking action on current harms and risks arising from the procurement and use of less advanced systems.

There seemed to be some recognition of the need for more State intervention through regulation, for more regulatory control of standard-setting, and for more attention to be paid to testing and evaluation in the procurement context. For example, the summary of Day 1 discussions indicates that participants agreed that

  • ‘We should invest in basic research, including in governments’ own systems. Public procurement is an opportunity to put into practice how we will evaluate and use technology.’ (Roundtable 4)

  • ‘Company policies are just the baseline and don’t replace the need for governments to set standards and regulate. In particular, standardised benchmarks will be required from trusted external third parties such as the recently announced UK and US AI Safety Institutes.’ (Roundtable 5)

In Day 2, in the context of safety testing, participants agreed that

  • Governments have a responsibility for the overall framework for AI in their countries, including in relation to standard setting. Governments recognise their increasing role for seeing that external evaluations are undertaken for frontier AI models developed within their countries in accordance with their locally applicable legal frameworks, working in collaboration with other governments with aligned interests and relevant capabilities as appropriate, and taking into account, where possible, any established international standards.

  • Governments plan, depending on their circumstances, to invest in public sector capability for testing and other safety research, including advancing the science of evaluating frontier AI models, and to work in partnership with the private sector and other relevant sectors, and other governments as appropriate to this end.

  • Governments will plan to collaborate with one another and promote consistent approaches in this effort, and to share the outcomes of these evaluations, where sharing can be done safely, securely and appropriately, with other countries where the frontier AI model will be deployed.

This could be a basis on which to build an international consensus on the need for more robust and decisive regulation of AI development and testing, as well as a consensus of the sets of considerations and constraints that should be applicable to the procurement and use of AI by the public sector in a way that is compliant with individual (human) rights and social interests. The general summary reflects that ‘Participants welcomed the exchange of ideas and evidence on current and upcoming initiatives, including individual countries’ efforts to utilise AI in public service delivery and elsewhere to improve human wellbeing. They also affirmed the need for the benefits of AI to be made widely available’.

However, some statements seem at first sight contradictory or problematic. While the excerpt above stresses that ‘Governments have a responsibility for the overall framework for AI in their countries, including in relation to standard setting’ (emphasis added), the general summary also stresses that ‘The UK and others recognised the importance of a global digital standards ecosystem which is open, transparent, multi-stakeholder and consensus-based and many standards bodies were noted, including the International Standards Organisation (ISO), International Electrotechnical Commission (IEC), Institute of Electrical and Electronics Engineers (IEEE) and relevant study groups of the International Telecommunication Union (ITU).’ Quite how State responsibility for standard setting fits with industry-led standard setting by such organisations is not only difficult to fathom, but also one of the potentially most problematic issues due to the risk of regulatory tunnelling that delegation of standard setting without a verification or certification mechanism entails.

Moreover, there seemed to be insufficient agreement around crucial issues, which are summarised as ‘a set of more ambitious policies to be returned to in future sessions’, including:

‘1. Multiple participants suggested that existing voluntary commitments would need to be put on a legal or regulatory footing in due course. There was agreement about the need to set common international standards for safety, which should be scientifically measurable.

2. It was suggested that there might be certain circumstances in which governments should apply the principle that models must be proven to be safe before they are deployed, with a presumption that they are otherwise dangerous. This principle could be applied to the current generation of models, or applied when certain capability thresholds were met. This would create certain ‘gates’ that a model had to pass through before it could be deployed.

3. It was suggested that governments should have a role in testing models not just pre- and post-deployment, but earlier in the lifecycle of the model, including early in training runs. There was a discussion about the ability of governments and companies to develop new tools to forecast the capabilities of models before they are trained.

4. The approach to safety should also consider the propensity for accidents and mistakes; governments could set standards relating to how often the machine could be allowed to fail or surprise, measured in an observable and reproducible way.

5. There was a discussion about the need for safety testing not just in the development of models, but in their deployment, since some risks would be contextual. For example, any AI used in critical infrastructure, or equivalent use cases, should have an infallible off-switch.

8. Finally, the participants also discussed the question of equity, and the need to make sure that the broadest spectrum was able to benefit from AI and was shielded from its harms.’

All of these are crucial considerations in relation to the regulation of AI development, (procurement) and use. A lack of consensus around these issues already indicates that there was a generic agreement that some regulation is necessary, but much more limited agreement on what regulation is necessary. This is clearly reflected in what was actually agreed at the summit.

What was agreed at the AI Safety Summit?

Despite all the discussions, little was actually agreed at the AI Safety Summit. The Bletchley Declaration includes a lengthy (but rather uncontroversial?) description of the potential benefits and actual risks of (frontier) AI, some rather generic agreement that ‘something needs to be done’ (eg welcoming ‘the recognition that the protection of human rights, transparency and explainability, fairness, accountability, regulation, safety, appropriate human oversight, ethics, bias mitigation, privacy and data protection needs to be addressed’), and very limited and unspecific commitments.

Indeed, signatories only ‘committed’ to a joint agenda, comprising:

  • ‘identifying AI safety risks of shared concern, building a shared scientific and evidence-based understanding of these risks, and sustaining that understanding as capabilities continue to increase, in the context of a wider global approach to understanding the impact of AI in our societies.

  • building respective risk-based policies across our countries to ensure safety in light of such risks, collaborating as appropriate while recognising our approaches may differ based on national circumstances and applicable legal frameworks. This includes, alongside increased transparency by private actors developing frontier AI capabilities, appropriate evaluation metrics, tools for safety testing, and developing relevant public sector capability and scientific research’ (emphases added).

This does not amount to much that would not happen anyway and, given that one of the UK Government’s objectives for the Summit was to create mechanisms for global collaboration (‘a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks’), an agreement for each jurisdiction to do things as it sees fit in accordance with its own circumstances, and to collaborate ‘as appropriate’ in view of those, seems like a very poor ‘win’.

In reality, there seems to be little coming out of the Summit other than a plan to continue the conversations in 2024. Given what had been said in one of the roundtables (num 5) about the need to put adequate safeguards in place (‘this work is urgent, and must be put in place in months, not years’), it looks like the ‘to be continued’ approach won’t do or, at least, cannot be claimed to have made much of a difference.

What did the UK Government promise at the AI Summit?

A more specific development announced on the occasion of the Summit (and overshadowed by the earlier US announcement) is that the UK will create the AI Safety Institute (UK AISI), a ‘state-backed organisation focused on advanced AI safety for the public interest. Its mission is to minimise surprise to the UK and humanity from rapid and unexpected advances in AI. It will work towards this by developing the sociotechnical infrastructure needed to understand the risks of advanced AI and enable its governance.’

Crucially, ‘The Institute will focus on the most advanced current AI capabilities and any future developments, aiming to ensure that the UK and the world are not caught off guard by progress at the frontier of AI in a field that is highly uncertain. It will consider open-source systems as well as those deployed with various forms of access controls. Both AI safety and security are in scope’ (emphasis added). This seems to carry forward the extremely narrow focus on ‘frontier AI’ and catastrophic risks that augured a failure of the Summit. It is also in clear contrast with the much more sensible and repeated assertions at the Summit that other types of AI cause very significant risks, and that participants ‘noted a range of current AI risks and harmful impacts, and reiterated the need for them to be tackled with the same energy, cross-disciplinary expertise, and urgency as risks at the frontier’.

Also crucially, UK AISI ‘is not a regulator and will not determine government regulation. It will collaborate with existing organisations within government, academia, civil society, and the private sector to avoid duplication, ensuring that activity is both informing and complementing the UK’s regulatory approach to AI as set out in the AI Regulation white paper’.

According to initial plans, UK AISI ‘will initially perform 3 core functions:

  • Develop and conduct evaluations on advanced AI systems, aiming to characterise safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts

  • Drive foundational AI safety research, including through launching a range of exploratory research projects and convening external researchers

  • Facilitate information exchange, including by establishing – on a voluntary basis and subject to existing privacy and data regulation – clear information-sharing channels between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society, and the broader public’

It is also stated that ‘We see a key role for government in providing external evaluations independent of commercial pressures and supporting greater standardisation and promotion of best practice in evaluation more broadly.’ However, the extent to which UK AISI will be able to do that will hinge on issues that are not currently clear (or publicly disclosed), such as the membership of UK AISI or its institutional set up (as ‘state-backed organisation’ does not say much about this).

On that very point, it is somewhat problematic that the UK AISI ‘is an evolution of the UK’s Frontier AI Taskforce. The Frontier AI Taskforce was announced by the Prime Minister and Technology Secretary in April 2023’ (ahem, as ‘Foundation Model Taskforce’—so this is the second rebranding of the same initiative in half a year). It is equally problematic that UK AISI ‘will continue the Taskforce’s safety research and evaluations. The other core parts of the Taskforce’s mission will remain in [the Department for Science, Innovation and Technology] as policy functions: identifying new uses for AI in the public sector; and strengthening the UK’s capabilities in AI.’ I find the retention within government of the analysis pertaining to public sector AI use problematic, and a clear indication of the UK Government’s unwillingness to put meaningful mechanisms in place to monitor the process of public sector digitalisation. UK AISI very much sounds like a research institute with a focus on a very narrow set of AI systems, and with a remit that will hardly translate into relevant policymaking in areas in dire need of regulation. Finally, it is also very problematic that funding is not locked in: ‘The Institute will be backed with a continuation of the Taskforce’s 2024 to 2025 funding as an annual amount for the rest of this decade, subject to it demonstrating the continued requirement for that level of public funds.’ In reality, this means that the Institute’s continued existence will depend on the Government’s satisfaction with its work and the direction of travel of its activities and outputs. This is not at all conducive to independence, in my view.

So, all in all, there is very little new in the announcement of the creation of the UK AISI and, while there is a (theoretical) possibility for the Institute to make a positive contribution to regulating AI procurement and use (in the public sector), this seems extremely remote and potentially undermined by the Institute’s institutional set up. This is probably in stark contrast with the US approach the UK is trying to mimic (though more on the US approach in a future entry).

European Commission wants to see more AI procurement. Ok, but priorities need reordering

The European Commission recently published its 2023 State of the Digital Decade report. One of its key takeaways is that the Commission recommends that Member States step up innovation procurement investment in the digital sector.

The Commission has identified that ‘While the roll-out of digital public services is progressing steadily, investment in public procurement of innovative digital solutions (e.g. based on AI or big data) is insufficient and would need to increase substantially from EUR 188 billion to EUR 295 billion in order to reach full speed adoption of innovative digital solutions in public services’ (para 4.2, original emphasis). That is, an increase of some EUR 107 billion, or around 57% over current levels.

The Commission has thus recommended that ‘Member States should step up investment and regulatory measures to develop and make available secure, sovereign and interoperable digital solutions for online public and government services’; and that ‘Member States should develop action plans in support of innovation procurement and step up efforts to increase public procurement investments in developing, testing and deploying innovative digital solutions’.

Tucked away in a different part of the report (which, frankly, has a rather odd structure), the Commission also recommends that ‘Member States should foster the availability of legal and technical support to procure and implement trustworthy and sovereign AI solutions across sectors.’

To my mind, the priorities for investment of public money need to be further clarified. Without a significant investment in an ambitious plan to quickly expand the public sector’s digital skills and capabilities, there can be no hope that increased procurement expenditure on digital technologies will bring adequate public sector digitalisation or foster the public interest more broadly.

Without a sophisticated public buyer that can adequately cut through the process of technological innovation, there is no hope that ‘throwing money at the problem’ will bring meaningful change. In my view, the focus and priority should be on upskilling the public sector before anything else—including ahead of the also recommended mobilisation of ‘public policies, including innovative procurement to foster the scaling up of start-ups, to facilitate the creation of spinoffs from universities and research centres, and to monitor progress in this area’ (para 3.2.3). Perhaps a substantial fraction of the 100+ billion EUR the Commission expects Member States to put into public sector digitalisation could go to building up the required capability… too much to ask?