More model contractual AI clauses -- some comments on the SCL AI Clauses

Following the launch of the final version of the model contractual AI clauses sponsored by the European Commission earlier this month, the topic of how to develop and how to use contractual model clauses for AI procurement is getting hotter. As part of its AI Action Plan, New York City has announced that it is starting work to develop its own model clauses for AI procurement (to be completed in 2025). We can expect to see a proliferation of model AI clauses as more ‘AI legislation’ imposes constraints on contractual freedom and compliance obligations, and as different model clauses are revised to (hopefully) capture the learning from current experimentation in AI procurement.

Although not (closely) focused on procurement, a new set of interesting AI contractual clauses has been released by the Society for Computers & Law (SCL) AI Group (thanks to Gisele Waters for bringing them to my attention on LinkedIn!). In this post, I reflect on some aspects of the SCL AI clauses and try to answer Gisele’s question/challenge (below).

SCL AI Clauses

The SCL AI clauses have a clear commercial orientation and are meant as a starting point for supplier-customer negotiations, which is reflected in the fact that the proposed clauses contain two options: (1) a ‘pro-supplier’ drafting based on the provision of an off-the-shelf solution, and (2) a ‘pro-customer’ drafting based on a bespoke arrangement. Following that commercial logic, most of the SCL AI clauses focus on the allocation of obligations (and thus costs and liability) between the parties (eg in relation to compliance with legal requirements).

The clauses include a few substantive requirements implicit in the allocation of the respective obligations (eg on data or third party licences), but mostly refer to detailed schedules for which there is no default proposal, or to industry standards (and thus share this limitation with eg the EU’s model AI clauses). The SCL AI clauses do contain some drafting notes that help identify issues needing specific regulation in the relevant schedules, although this guidance necessarily remains rather abstract or generic.

This pro-supplier/pro-customer orientation prompted Gisele’s question/challenge, which is whether ‘there is EVER an opportunity for government (customer-buyer) to be better able to negotiate the final language with clauses like these in order to weigh the trade offs between interests?’, especially bearing in mind that the outcome of the negotiations could be strongly pro-supplier, strongly pro-customer, or balanced (or anywhere in between). I think that answering this question requires exploring what pro-supplier or pro-customer may mean in this specific context.

From a substantive regulation perspective, the SCL AI clauses include a few interesting elements, such as an obligation to establish a circuit-breaker capable of stopping the AI (aka an ‘off button’) and a roll-back obligation (to an earlier, non-faulty version of the AI solution) where the AI is malfunctioning or this is necessary to comply with applicable law. However, most of the substantive obligations are established by reference to ‘Good Industry Practice’, which requires some further unpacking.
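Before unpacking that benchmark, it may help to make the circuit-breaker and roll-back obligations concrete. The following is a minimal sketch, in Python, of one way a supplier might implement them; every name, threshold and design choice here is my own assumption for illustration, not anything prescribed by the SCL AI clauses.

```python
# A purely illustrative sketch of a circuit-breaker with a roll-back path.
# All names and the fault threshold are assumptions, not contractual terms.

class CircuitBreaker:
    """Wraps an AI system with an 'off button' and a roll-back path."""

    def __init__(self, model, previous_versions, max_faults=3):
        self.model = model
        self.previous_versions = list(previous_versions)  # oldest first
        self.max_faults = max_faults  # assumed threshold; a schedule would fix this
        self.fault_count = 0
        self.tripped = False

    def predict(self, x):
        if self.tripped:
            raise RuntimeError("AI System halted; awaiting human review.")
        try:
            return self.model.predict(x)
        except Exception:
            self.fault_count += 1
            if self.fault_count >= self.max_faults:
                self.trip()  # the contractual 'off button'
            raise

    def trip(self):
        """Stop serving outputs immediately (the 'off button')."""
        self.tripped = True

    def roll_back(self):
        """Revert to an earlier, non-faulty version, as the clause contemplates."""
        if self.previous_versions:
            self.model = self.previous_versions.pop()  # last known good version
            self.fault_count = 0
            self.tripped = False
```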

SCL AI Clauses and ‘Good Industry Practice’

Most of the crucial proposed clauses refer to the benchmark of ‘Good Industry Practice’ as a primary qualifier for the relevant obligations. The proposed clause on explainability is a good example. The SCL AI clause (C1.15) reads as follows:

C1.15 The Supplier will ensure that the AI System is designed, developed and tested in a way which ensures that its operation is sufficiently transparent to enable the Customer to understand and use the AI System appropriately. In particular, the Supplier will produce to the Customer, on request, information which allows the Customer to understand:

C1.15.1 the logic behind an individual output from the AI System; and

C1.15.2 in respect of the AI System or any specific part thereof, which features contributed most to the output of the AI System, in each case, in accordance with Good Industry Practice.

A first observation is that the SCL AI clauses seem to presume that off-the-shelf AI solutions would not (necessarily) be explainable, as they include no equivalent clause in the ‘pro-supplier’ version.

Second, the ‘pro-customer’ version both limits the types of explanation that would be contractually owed and qualifies them in two important ways. The clause only covers a model-level or global explanation (C1.15.2) and a limited decision-level or local explanation (C1.15.1), which leaves out eg a counterfactual explanation, and it sets no specific requirements on how the explanation needs to be produced (eg is a ‘post hoc’ explanation acceptable and, if so, how should it be produced?). The two qualifications are: (1) the overall requirement is only that the AI system’s operation be ‘sufficiently transparent’, with ‘sufficient’ creating a lot of potential issues here; and (2) the reference to ‘Good Industry Practice’ [more on this below].
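To illustrate the distinction the clause draws, the sketch below uses scikit-learn to produce a rough model-level (global) explanation and a decision-level (local) explanation for a linear classifier. It is a toy example resting on my own assumptions (a linear model, where coefficients and per-feature contributions are meaningful), not a statement of what ‘Good Industry Practice’ requires; note that it yields nothing like a counterfactual explanation, which is precisely one of the gaps flagged above.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
X, y, feature_names = data.data, data.target, data.feature_names

pipeline = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)).fit(X, y)
clf = pipeline.named_steps["logisticregression"]
X_scaled = pipeline.named_steps["standardscaler"].transform(X)

# Global (model-level) explanation, roughly C1.15.2: which features matter
# most across the model as a whole (absolute coefficient size on scaled data).
global_rank = np.argsort(np.abs(clf.coef_[0]))[::-1]
print("Most influential features overall:", list(feature_names[global_rank[:5]]))

# Local (decision-level) explanation, roughly C1.15.1: for one individual
# output, each feature's additive contribution to the log-odds.
i = 0  # a single, specific decision
local_contrib = clf.coef_[0] * X_scaled[i]
local_rank = np.argsort(np.abs(local_contrib))[::-1]
print("Features driving this individual output:", list(feature_names[local_rank[:5]]))
```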

The issue of transparency is similarly problematic in its more general treatment under another specific clause (C4.6), which also only has a ‘pro-customer’ version:

C4.6 The Supplier warrants that, so far as is possible [to achieve the intended use of the AI System / comply with the Specification], the AI System is transparent and interpretable [such that its output can be traced back to the input data].

The qualifier ‘so far as is possible’ is again potentially quite problematic here, as are the open-ended references to transparency and interpretability of the system (with a potential conflict between interpretability for the purposes of this clause and explainability under C1.15).

What I find interesting about this clause is that the drafting notes explain that:

… the purpose of this provision is to ensure that the Supplier has not used an overly-complex algorithm if this is unnecessary for the intended use of the AI System or to comply with its Specification. That said, effectiveness and accuracy are often trade-offs for transparency in AI models.

From this perspective, I think the clause should be retitled and entirely redrafted to make explicit that its purpose is to establish a principle of ‘AI minimisation’, in the sense of the supplier guaranteeing that the AI system is the least complex system capable of providing the desired functionality. This, of course, runs into the tricky issues of the trade-off itself and of establishing the desired functionality, which in a procurement context would have been dealt with pre-contract, eg in the technical specifications and/or tender evaluation. Interestingly, this is another issue where reference could be made to ‘Good Industry Practice’, if one accepted that best practice should always be to use the simplest and most explainable/interpretable model available for a given task.
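By way of illustration only, such an ‘AI minimisation’ duty could be operationalised along the following lines: rank candidate models from least to most complex and accept the simplest one whose cross-validated performance falls within a tolerance of the best performer. In this sketch, both the complexity ordering and the tolerance are my own assumptions; fixing them is exactly where the trade-off (and the need for a standard) bites.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
TOLERANCE = 0.01  # assumed: how much performance we trade for simplicity

# Candidates ordered from least to most complex (an assumed ordering that the
# contract or standard would itself have to fix).
candidates = [
    ("logistic regression", make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))),
    ("decision tree", DecisionTreeClassifier(max_depth=4, random_state=0)),
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
]

scores = {name: cross_val_score(model, X, y, cv=5).mean() for name, model in candidates}
best = max(scores.values())

# The 'minimised' choice: the first (simplest) candidate within tolerance of the best.
chosen = next(name for name, _ in candidates if scores[name] >= best - TOLERANCE)
print({name: round(score, 3) for name, score in scores.items()})
print("Least complex acceptable model:", chosen)
```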

As mentioned, reference to ‘Good Industry Practice’ is used extensively in the SCL AI clauses, including for crucial issues such as: explainability (above), the user manual/user training, preventing unlawful discrimination, security (which is inclusive of cybersecurity and some aspects of data protection/privacy), and quality standards. The drafting notes are clear that

… while parties often refer to ‘best practice’ or ‘good industry practice’, these standards can be difficult to apply in developing industry. Accordingly a clear Specification is required, …

This is the reason why the SCL AI clauses foresee that ‘Good Industry Practice’ will be a defined contract term, whereby the parties will specify the relevant requirements and obligations. And here lies the catch.

Defining ‘Good Industry Practice’?

In the SCL AI clauses, all references to ‘Good Industry Practice’ are used as qualifiers in the pro-customer version of the clauses. It is possible that the same term would be of relevance to establishing whether the supplier had discharged its reasonable duties/best efforts under the pro-supplier version (where the term would be defined but not explicitly used). In both cases, the need to define ‘Good Industry Practice’ is the Achilles heel of the model clauses, as well as a potential Trojan horse for customers seeking a seemingly pro-customer contractual design.

The fact is that the extent of the substantive obligations arising from the contract will entirely depend on how the concept of ‘Good Industry Practice’ is defined and specified. This leaves even seemingly strongly ‘pro-customer’ contracts exposed to weak substantive protections. The biggest challenge for buyers/procurers of AI will be that (1) it will be hard to know how to define the term and which standards to refer to, and (2) it will be difficult to monitor compliance with those standards, especially where they establish eg mechanisms of self-assessment by the tech supplier as the primary or sole quality control mechanism.

So, my answer to Gisele’s question/challenge would be that the SCL AI clauses, much like the EU’s, do not (and cannot?) go far enough in ensuring that the contract for the procurement/purchase of AI embeds adequate substantive requirements. The model clauses are helpful in understanding who needs to do what and when, and thus who shoulders the relevant cost and risk. But they do not address the all-important question of how it needs to be done. And that is the crucial issue that will determine whether the contract (and the AI solution) really is in the public buyer’s interest and, ultimately, in the public interest.

In a context where tech providers (almost always) have the upper hand in negotiations, this foundational weakness is all-important, as suppliers could well ‘agree to pro-customer drafting’ and then immediately deactivate it through the more challenging and technical definition (and implementation) of ‘Good Industry Practice’.

That is why I think we need to address this regulatory tunnelling risk and this foundational shortcoming of ‘AI regulation by contract’ by creating clear and binding requirements on the how (ie the ‘good (industry) practice’ or technical standards). To me, the emergence of model AI contract clauses makes it clear that the most efficient contract design is one that refers to external benchmarks. Establishing adequate protections and an adequate balance of risks and benefits (from a social perspective) hinges on this. The contract can then deal with the apportionment of the burdens, obligations, costs and risks stemming from the already-set requirements.

So I would suggest that the focus needs to be squarely on developing the regulatory architecture that will lead us to the development of such mandatory requirements and standards for the procurement and use of AI by the public sector — which may then become adequate good industry practice for strictly commercial or private contracts. My proposal in that regard is sketched out here.

Recording of webinar on 'Digitalization and AI decision-making in administrative law proceedings'

The Centre for Global Law and Innovation of the University of Bristol Law School and the Faculty of Law at Universidade Católica Portuguesa co-organised an online workshop to discuss emerging issues in digitalization and AI decision-making in administrative law proceedings. I had the great pleasure of chairing it and I think quite a few important issues for further discussion and research were identified. The speakers kindly agreed to share a recording of the session (available here), of which details follow:

Digitalization and AI decision-making in administrative law proceedings

This is a hot area of legal and policy development that has seen an acceleration in the context of the covid-19 pandemic. Emerging research finds points of friction in the simple transposition of administrative law and existing procedures to the AI context, as well as challenges and shortcomings in the judicial review of decisions supported by (or delegated to) an AI.

While more and more attention is paid to the use of AI by the public sector, key regulatory proposals such as the European Commission’s Proposal for an Artificial Intelligence Act would largely leave this area to (self)regulation via codes of practice, with the exception of public assistance benefits and services. Self-regulation is also largely the approach taken by the UK in its Guide to using artificial intelligence in the public sector, and the UK courts seem reluctant to engage with the technology underpinning automated decision-making. It is thus arguable that a regulatory gap is increasingly visible and that new solutions and regulatory approaches are required.

The panellists in this workshop covered a range of topics concerning transparency, data protection, automation of decision-making, and judicial review. The panel included (in order of participation):

• Dr Marta Vaz Canavarro Portocarrero de Carvalho, Assistant Professor at the Faculty of Law of Universidade Católica Portuguesa, specialising in administrative law, and member of the Centro de Arbitragem Administrativa (Portuguese Administrative Law Arbitration Centre).

• Dr Filipa Calvão, President of the Comissão Nacional de Proteção de Dados (Portuguese Data Protection Authority) since 2012, and Associate Professor at the Faculty of Law of Universidade Católica Portuguesa.

• Dr Pedro Cerqueira Gomes, Assistant Professor at Universidade Católica Portuguesa and Lawyer at Cerqueira Gomes & Associados, RL, specialising in administrative law and public procurement, and author of EU Public Procurement and Innovation - the innovation partnership procedure and harmonization challenges (Edward Elgar 2021).

• Mr Kit Fotheringham, Teaching Associate and postgraduate research student at the University of Bristol Law School. His doctoral thesis is on administrative law, specifically relating to the use of algorithms, machine learning and other artificial intelligence technologies by public bodies in automated decision-making procedures.

Where does the proposed EU AI Act place procurement?

Thinking about some of the issues raised in the earlier post ‘Can the robot procure for you?,’ I have now taken a close look at the European Commission’s Proposal for an Artificial Intelligence Act (AIA) to see how it approaches the use of AI in procurement procedures. It may (not) come as a surprise that the AI Act takes an extremely light-touch approach to the regulation of AI uses in procurement and simply subjects them to (yet to be developed) voluntary codes of conduct. I will detail my analysis of why this is the case in this post, as well as some reasons why I do not find it satisfactory.

Before getting to the details, it is worth stressing that this is reflective of a broader feature of the AIA: its heavy private sector orientation. When it comes to AI uses by the public sector, other than prohibiting certain forms of mass surveillance by the State (both for law enforcement and to generate a system of social scoring) and classifying as high-risk the most obvious AI uses by law enforcement and judicial authorities (all of which are important, of course), the AIA remains silent on the use of AI in most administrative procedures, with the sole exception of those concerning social benefits.

This approach could be generally justified by the limits to EU competence and, in particular, those derived from the principle of administrative self-organisation of the Member States. However, given the very broad approach taken by the Commission on the interpretation and use of Article 114 TFEU (which is the legal basis for the AIA, more below), this is not entirely consistent. It could rather be that the specific uses of AI by the public sector covered in the proposal reflect the increasingly well-known problematic uses of (biased) AI solutions in narrow aspects of public sector activity, rather than a broader reflection on the (still unknown, or still unimplemented) uses that could be problematic.

While the AIA is ‘future-proofed’ by including criteria for the inclusion of further use cases in its ‘high-risk’ category (which determines the bulk of compliance obligations), it is difficult to see how those criteria are suited to a significant expansion of the regulatory constraints on AI uses by the public sector, including in procurement. Therefore, as a broader point, I submit that the proposed AIA needs some revision to make it more suited to the potential deployment of AI by the public sector. To reflect on that, I am co-organising a webinar on ’Digitalization and AI decision-making in administrative law proceedings’, which will take place on 15 Nov 2021, 1pm UK (save the date, registration and more details here). All welcome.

Background on the AIA

Summarising the AIA is difficult and has, in any case, already been done (see eg this quick explainer from the Centre for Data Innovation and, for an accessible overview of the rationale and regulatory architecture of the AIA, this master class by Prof Christiane Wendehorst). So, I will just highlight here a few issues linked to the analysis of procurement’s position within its regulatory framework.

The AIA seeks to establish a proportionate approach to the regulation of AI deployment and use. While its primary concern is with the consolidation of the EU Single Digital Market and the avoidance of regulatory barriers to the circulation of AI solutions, its preamble also points to the need to ensure the effectiveness of EU values and, crucially, the fundamental rights in the Charter of Fundamental Rights of the EU.

Importantly for the purposes of our discussion, recital (28) AIA stresses that ‘The extent of the adverse impact caused by the AI system on the fundamental rights protected by the Charter is of particular relevance when classifying an AI system as high-risk. Those rights include ... right to an effective remedy and to a fair trial [Art 47 Charter] … [and] right to good administration [Art 41 Charter]’.

The AIA seeks to create such a proportionate approach to the regulation of AI by establishing four categories of AI uses: prohibited, high-risk, limited risk requiring transparency measures, and minimal risk. The two categories that carry regulatory constraints or compliance obligations are those concerning high-risk (Arts 8-15 AIA), and limited risk requiring transparency measures (Art 52 AIA, which also applies to some high-risk AI). Minimal risk AI uses are left unregulated, although the AIA (Art 69) seeks to promote the development of codes of conduct intended to foster voluntary compliance with the requirements applicable to high-risk AI systems.

Procurement within the AIA

Procurement AI practices could not be classified as prohibited uses (Art 5 AIA), except in the difficult-to-imagine circumstances in which they deployed subliminal techniques. It is also difficult to see how they could fall under the regime applicable to uses requiring special transparency (Art 52), because it only applies to AI systems intended to interact with natural persons, which must be ‘designed and developed in such a way that natural persons are informed that they are interacting with an AI system, unless this is obvious from the circumstances and the context of use.’ It would not be difficult for public buyers using external-facing AI solutions (eg chatbots seeking to guide tenderers through their e-submissions) to make it clear that the tenderers are interacting with an AI solution. And, even if not, the transparency obligations are rather minimal.
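To illustrate how low that bar sits, the disclosure could be as simple as prefixing every chatbot exchange with a notice. The snippet below is a hypothetical sketch (the wording and function names are my own, not drawn from the AIA):

```python
AI_DISCLOSURE = (
    "You are interacting with an automated AI assistant, "
    "not a human procurement officer."
)

def respond(user_message: str) -> str:
    # Hypothetical stand-in for whatever answer the chatbot would generate.
    answer = f"Thanks for your question about '{user_message}'. [generated guidance]"
    # Disclose the AI nature of the system up front, in the style of Art 52 AIA.
    return f"{AI_DISCLOSURE}\n\n{answer}"

print(respond("e-submission deadlines"))
```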

So, the crux of the issue rests on whether procurement-related AI uses could be classified as high-risk. This is regulated in Art 6 AIA, which cross-refers to Annex III AIA. The Annex contains a numerus clausus of high-risk AI uses, which is however susceptible of amendment under the conditions specified in Art 7 AIA. Art 6/Annex III do not contain any procurement-related AI uses. The only type of AI use linked to administrative procedures concerns ‘AI systems intended to be used by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for public assistance benefits and services, as well as to grant, reduce, revoke, or reclaim such benefits and services’ (Annex III(5)(a) AIA).

Clearly, then, procurement-related AI uses are currently left to the default category of those with minimal risk and, thus, subjected only to voluntary self-regulation via codes of conduct.

Could this change in the future?

Art 7 AIA establishes the following two cumulative criteria: (a) the AI systems are intended to be used in any of the areas listed in points 1 to 8 of Annex III; and (b) the AI systems pose a risk of harm to the health and safety, or a risk of adverse impact on fundamental rights, that is, in respect of its severity and probability of occurrence, equivalent to or greater than the risk of harm or of adverse impact posed by the high-risk AI systems already referred to in Annex III.

The first hurdle to getting procurement-related AI uses included in Annex III in the future is formal and concerns the interpretation of the categories listed therein. There are only two potential options: nesting them under uses related to ‘Access to and enjoyment of essential private services and public services and benefits’, or under uses related to ‘Administration of justice and democratic processes’. It could (theoretically) be possible to squeeze them into one of these categories (perhaps more easily into the latter than the former), but this is by no means straightforward and, given the existing AI uses in each of the two categories, I would personally be disinclined to engage in such a broad interpretation.

Even if that hurdle were cleared, the second hurdle is also challenging. Art 7(2) AIA establishes the criteria for assessing whether an AI use poses a sufficient ‘risk of adverse impact on fundamental rights’. Of those criteria, there are several that in my view would make it very difficult to classify procurement-related AI uses as high-risk. Those criteria require the European Commission to consider:

(c) the extent to which the use of an AI system has already caused … adverse impact on the fundamental rights or has given rise to significant concerns in relation to the materialisation of such … adverse impact, as demonstrated by reports or documented allegations submitted to national competent authorities;

(d) the potential extent of such harm or such adverse impact, in particular in terms of its intensity and its ability to affect a plurality of persons;

(e) the extent to which potentially harmed or adversely impacted persons are dependent on the outcome produced with an AI system, in particular because for practical or legal reasons it is not reasonably possible to opt-out from that outcome;

(g) the extent to which the outcome produced with an AI system is easily reversible …;

Meeting these criteria would require the relevant AI systems to be making essentially independent or fully automated decisions (eg on the award of a contract, or the exclusion of tenderers), so that their decisions would be seen to affect the effectiveness of the rights under Arts 41 and 47 of the Charter, as well as a (practical) understanding that those decisions cannot be easily reversed. Otherwise, the regulatory threshold is so high that most procurement-related AI uses (screening, recommender systems, support to human decision-making (eg automated evaluation of tenders), etc) are unlikely to be considered to pose a sufficient ‘risk of adverse impact on fundamental rights’.

Could Member States go further?

As mentioned above, one of the potential explanations for the almost absolute silence on the use of AI in administrative procedures in the AIA could be that the Commission considers that this aspect of AI regulation belongs to each of the Member States. If that were true, then Member States could go further than the code of conduct self-regulatory approach resulting from the AIA regulatory architecture. An easy approach would be, eg, to legally mandate compliance with the AIA obligations for high-risk AI systems.

However, given the internal market justification of the AIA, to be honest, I have my doubts that such a regulatory intervention would withstand challenges on the basis of general EU internal market law.

The thrust of the AIA’s competence justification (under Art 114 TFEU, see point 2.1 of the Explanatory memorandum) is that

The primary objective of this proposal is to ensure the proper functioning of the internal market by setting harmonised rules in particular on the development, placing on the Union market and the use of products and services making use of AI technologies or provided as stand-alone AI systems. Some Member States are already considering national rules to ensure that AI is safe and is developed and used in compliance with fundamental rights obligations. This will likely lead to two main problems: i) a fragmentation of the internal market on essential elements regarding in particular the requirements for the AI products and services, their marketing, their use, the liability and the supervision by public authorities, and ii) the substantial diminishment of legal certainty for both providers and users of AI systems on how existing and new rules will apply to those systems in the Union.

All of those issues would arise if each Member State adopted its own rules constraining the use of AI for administrative procedures not covered by the AIA (whether related to procurement or not). A challenge to that decentralised approach on grounds of internal market law, eg by providers of procurement-related AI solutions capable of deployment in all Member States but burdened with uneven regulatory requirements, thus seems quite straightforward (if controversial), especially given the high level of homogeneity in public procurement regulation resulting from the 2014 Public Procurement Package. Not to mention the possibility of challenging those domestic obligations on the grounds that they go further than the AIA in breach of Art 16 Charter (freedom to conduct a business), even if this could face some issues resulting from the interpretation of Art 51 thereof.

Repositioning procurement (and other aspects of administrative law) in the AIA

In my view, there is a case to be made for the repositioning of procurement-related AI uses within the AIA, and its logic can apply to other areas of administrative law/activity with similar market effects.

The key issue is that the development of AI solutions to support decision-making in the public sector not only concerns the rights of those directly involved in or affected by those decisions, but also society at large. In the case of procurement, eg, the development of biased procurement evaluation or procurement recommender systems can have negative social effects via their impact on the market (eg on value for money, to mention the most obvious) that are difficult to identify in any single procurement decision.

Moreover, the public administration seems well-placed to comply with the requirements of the AIA for high-risk AI systems as a matter of routine procedure, and the arguments on the need to take a proportionate approach to the regulation of AI so as not to stifle innovation lose steam, and barely have any punch, when it comes to imposing those requirements on public sector users. Further, to a large extent, the AIA requirements seem to me mostly aligned with the requirements for running a proper (and challenge-proof) eProcurement system, and they would also facilitate compliance with duties of good administration when specific decisions are challenged.

Therefore, on balance, I see no good reason not to expand the list in Annex III AIA to include the use of AI systems in all administrative procedures, and in particular in public procurement and in other regulatory sectors where ex post interventions to correct market distortions resulting from biased AI implementations can be practically impossible. I submit that this should be done before the AIA’s adoption.