Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law

Now that the (more than likely) final text of the EU AI Act is available, and building on the analysis in my now officially published monograph Digital Technologies and Public Procurement (OUP 2024), I have put together my assessment of its impact on the procurement of AI under EU law and uploaded the new paper on SSRN: ‘Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law’. The abstract is as follows:

EU Member States are increasingly experimenting with Artificial Intelligence (AI), but the acquisition and deployment of AI by the public sector is currently largely unregulated. This puts public procurement in the awkward position of a regulatory gatekeeper—a role it cannot effectively carry out. This article provides an overview of recent EU developments on the public procurement of AI. It reflects on the narrow scope of application and questionable effectiveness of tools linked to the EU AI Act, such as technical standards or model contractual clauses, and highlights broader challenges in the use of procurement law and practice to regulate the adoption and use of ‘trustworthy’ AI by the public sector. The paper stresses the need for an alternative regulatory approach.

The paper can be freely downloaded: A Sanchez-Graells, ‘Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law’ (January 25, 2024). To be published in LTZ (Legal Tech Journal) 2/2024: https://ssrn.com/abstract=4706400.

As this will be an area of contention and continuous development, comments are most welcome!


More model contractual AI clauses -- some comments on the SCL AI Clauses

Following the launch of the final version of the model contractual AI clauses sponsored by the European Commission earlier this month, the topic of how to develop and how to use contractual model clauses for AI procurement is getting hotter. As part of its AI Action Plan, New York City has announced that it is starting work to develop its own model clauses for AI procurement (to be completed in 2025). We can expect to see a proliferation of model AI clauses as more ‘AI legislation’ imposes constraints on contractual freedom and compliance obligations, and as different model clauses are revised to (hopefully) capture the learning from current experimentation in AI procurement.

Although not (closely) focused on procurement, a new set of interesting AI contractual clauses has been released by the Society for Computers & Law (SCL) AI Group (thanks to Gisele Waters for bringing them to my attention on LinkedIn!). In this post, I reflect on some aspects of the SCL AI clauses and try to answer Gisele’s question/challenge (below).

SCL AI Clauses

The SCL AI clauses have a clear commercial orientation and are meant as a starting point for supplier-customer negotiations, which is reflected in the fact that the proposed clauses contain two options: (1) a ‘pro-supplier’ drafting based on off-the-shelf provision, and (2) a ‘pro-customer’ drafting based on a bespoke arrangement. Following that commercial logic, most of the SCL AI clauses focus on the allocation of obligations (and thus costs and liability) between the parties (eg in relation to compliance with legal requirements).

The clauses include a few substantive requirements implicit in the allocation of the respective obligations (eg on data or third party licences), but mostly refer to detailed schedules for which there is no default proposal, or to industry standards (and thus share this limitation with eg the EU’s model AI clauses). The SCL AI clauses do contain some drafting notes that would help identify issues needing specific regulation in the relevant schedules, although this guidance necessarily remains rather abstract or generic.

This pro-supplier/pro-customer orientation prompted Gisele’s question/challenge: whether ‘there is EVER an opportunity for government (customer-buyer) to be better able to negotiate the final language with clauses like these in order to weigh the trade offs between interests?’, especially bearing in mind that the outcome of the negotiations could be strongly pro-supplier, strongly pro-customer, balanced, or anything in between. I think that answering this question requires exploring what pro-supplier or pro-customer may mean in this specific context.

From a substantive regulation perspective, the SCL AI clauses include a few interesting elements, such as an obligation to establish a circuit-breaker capable of stopping the AI (aka an ‘off button’) and a roll-back obligation (to an earlier, non-faulty version of the AI solution) where the AI is malfunctioning or where this is necessary to comply with applicable law. However, most of the substantive obligations are established by reference to ‘Good Industry Practice’, which requires some further unpacking.

SCL AI Clauses and ‘Good Industry Practice’

Most of the crucial proposed clauses refer to the benchmark of ‘Good Industry Practice’ as a primary qualifier for the relevant obligations. The proposed clause on explainability is a good example. The SCL AI clause (C1.15) reads as follows:

C1.15 The Supplier will ensure that the AI System is designed, developed and tested in a way which ensures that its operation is sufficiently transparent to enable the Customer to understand and use the AI System appropriately. In particular, the Supplier will produce to the Customer, on request, information which allows the Customer to understand:

C1.15.1 the logic behind an individual output from the AI System; and

C1.15.2 in respect of the AI System or any specific part thereof, which features contributed most to the output of the AI System, in each case, in accordance with Good Industry Practice.

A first observation is that the SCL AI clauses seem to presume that off-the-shelf AI solutions would not be (necessarily) explainable, as they include no clause under the ‘pro-supplier’ version.

Second, the ‘pro-customer’ version both limits the types of explanation that would be contractually owed (to a model-level or global explanation under C1.15.2 and a limited decision-level or local explanation under C1.15.1, which leaves out eg counterfactual explanations, and it sets no specific requirements on how the explanation needs to be produced, eg whether a ‘post hoc’ explanation is acceptable and, if so, how it should be produced) and qualifies the obligation in two important ways: (1) the overall requirement is that the AI system’s operation should be ‘sufficiently transparent’, with ‘sufficient’ creating plenty of potential issues; and (2) the reference to ‘Good Industry Practice’ (more on this below).

The issue of transparency is similarly problematic in its more general treatment under another specific clause (C4.6), which also only has a ‘pro-customer’ version:

C4.6 The Supplier warrants that, so far as is possible [to achieve the intended use of the AI System / comply with the Specification], the AI System is transparent and interpretable [such that its output can be traced back to the input data].

The qualifier ‘so far as is possible’ is again potentially quite problematic here, as are the open-ended references to transparency and interpretability of the system (with a potential conflict between interpretability for the purposes of this clause and explainability under C1.15).

What I find interesting about this clause is that the drafting notes explain that:

… the purpose of this provision is to ensure that the Supplier has not used an overly-complex algorithm if this is unnecessary for the intended use of the AI System or to comply with its Specification. That said, effectiveness and accuracy are often trade-offs for transparency in AI models.

From this perspective, I think the clause should be retitled and entirely redrafted to make explicit that its purpose is to establish a principle of ‘AI minimisation’, in the sense of the supplier guaranteeing that the AI system is the least complex one that can provide the desired functionality. This, of course, raises the tricky issues of working around the trade-off and of establishing the desired functionality in the first place (issues which, in a procurement context, would have been dealt with pre-contract, eg in technical specifications and/or tender evaluation). Interestingly, this is another issue where reference could be made to ‘Good Industry Practice’, if one accepted that it should be best practice to always use the most explainable/interpretable and simplest model available for a given task.

As mentioned, reference to ‘Good Industry Practice’ is used extensively in the SCL AI clauses, including for crucial issues such as explainability (above), user manual/user training, preventing unlawful discrimination, security (which is inclusive of cybersecurity and some aspects of data protection/privacy), and quality standards. The drafting notes are clear that

… while parties often refer to ‘best practice’ or ‘good industry practice’, these standards can be difficult to apply in developing industry. Accordingly a clear Specification is required, …

This is why the SCL AI clauses foresee that ‘Good Industry Practice’ will be a defined contract term, whereby the parties will specify the relevant requirements and obligations. And here lies the catch.

Defining ‘Good Industry Practice’?

In the SCL AI clauses, all references to ‘Good Industry Practice’ are used as qualifiers in the pro-customer version of the clauses. It is possible that the same term would be of relevance to establishing whether the supplier had discharged its reasonable duties/best efforts under the pro-supplier version (where the term would be defined but not explicitly used). In both cases, the need to define ‘Good Industry Practice’ is the Achilles heel of the model clauses, as well as a potential Trojan horse for customers seeking a seemingly pro-customer contractual design.

The fact is that the extent of the substantive obligations arising from the contract will entirely depend on how the concept of ‘Good Industry Practice’ is defined and specified. This leaves even seemingly strongly ‘pro-customer’ contracts exposed to weak substantive protections. The biggest challenge for buyers/procurers of AI will be that (1) it will be hard to know how to define the term and which standards to refer to, and (2) it will be difficult to monitor compliance with those standards, especially where they establish eg mechanisms of self-assessment by the tech supplier as the primary or sole quality control mechanism.

So, my answer to Gisele’s question/challenge would be that the SCL AI clauses, much like the EU’s, do not (and cannot?) go far enough in ensuring that the contract for the procurement/purchase of AI embeds adequate substantive requirements. The model clauses are helpful in understanding who needs to do what, and when, and thus who shoulders the relevant cost and risk. But they do not address the all-important question of how it needs to be done. And that is the crucial issue that will determine whether the contract (and the AI solution) really is in the public buyer’s interest and, ultimately, in the public interest.

In a context where tech providers (almost always) have the upper hand in negotiations, this foundational weakness is all-important, as suppliers could well ‘agree to pro-customer drafting’ and then immediately deactivate it through the more challenging and technical definition (and implementation) of ‘Good Industry Practice’.

That is why I think we need to address this regulatory tunnelling risk and this foundational shortcoming of ‘AI regulation by contract’ by creating clear and binding requirements on the how (ie the ‘good (industry) practice’ or technical standards). To me, the emergence of model AI contract clauses makes it clear that the most efficient contract design needs to refer to external benchmarks. Establishing adequate protections and an adequate balance of risks and benefits (from a social perspective) hinges on this. The contract can then deal with the apportionment of the burdens, obligations, costs and risks stemming from the requirements already set.

So I would suggest that the focus needs to be squarely on developing the regulatory architecture that will lead us to the development of such mandatory requirements and standards for the procurement and use of AI by the public sector — which may then become adequate good industry practice for strictly commercial or private contracts. My proposal in that regard is sketched out here.