Final EU model contractual AI Clauses available -- some thoughts on regulatory tunnelling

Source: https://tinyurl.com/mrx9sbz8.

The European Commission has published the final version of the EU model contractual AI clauses for piloting in the procurement of AI. The clauses have been ‘developed for pilot use in the procurement of AI with the aim to establish responsibilities for trustworthy, transparent, and accountable development of AI technologies between the supplier and the public organisation.’

The model AI clauses have been developed by reference to the (future) obligations arising from the EU AI Act, currently at an advanced stage of negotiation. This regulatory technique simply seeks to allow public buyers to ensure compliance with the EU AI Act by cascading the relevant obligations and requirements down to tech providers (largely on a back-to-back basis). By the same regulatory logic, this technique will also act as a conveyor belt for the shortcomings of the EU AI Act, which will be embedded in public contracts using the clauses. It is thus important to understand the shortcomings inherent to this approach and to the model AI clauses before assuming that their use will actually ensure the ‘trustworthy, transparent, and accountable development [and deployment] of AI technologies’. Much more is needed than mere reliance on the model AI clauses.

Two sets of model AI clauses

The EU AI Act will not be applicable to all types of AI use. Remarkably, most requirements will be limited to ‘high-risk AI uses’ as defined in its Article 6. This immediately translates into two sets of model AI clauses: one for ‘high-risk’ AI procurement, which embeds the requirements expected to arise from the EU AI Act once finalised, and another ‘light version’ for non-high-risk AI procurement, which would support the voluntary extension of some of those requirements to the procurement of AI for other uses, or even to the use of other types of algorithmic solutions not meeting the regulatory definition of AI.

A first observation is that the controversy surrounding the definition of ‘high-risk’ in the EU AI Act immediately carries over to the model AI clauses and to the choice between the ‘demanding’ and the light version. While the original proposal of the EU AI Act contained a numerus clausus of high-risk uses (which was already arguably too limited, see here), the trilogue negotiations could well end up suppressing a pre-defined classification and leaving it to AI providers to (self-)assess whether the use would be ‘high-risk’.

This has been heavily criticised in a recent open letter. If the final version of the EU AI Act ended up embedding such a self-assessment of which uses are bound to be high-risk, there would be clear risks of providers gaming the self-assessment to avoid compliance with the heightened obligations under the Act (and it is unclear that the system of oversight and potential fines foreseen in the EU AI Act would suffice to prevent this). This would directly translate into a risk of gaming (or strategic opportunism) by public buyers as well, in the choice between the ‘demanding’ and the light version of the model AI clauses.

As things stand today, it seems that most procurement of AI will be subject to the light version of the model AI clauses, where contracting authorities will need to decide which clauses to use and which standards to refer to. Importantly, the light version does not include default options in relation to quality management, conformity assessments, corrective actions, inscription in an AI register, or compliance and audit (some of which are also optional under the ‘demanding’ model). This means that, unless public buyers are familiar with both sets of model AI clauses, taking the light version as a starting point already generates a risk of under-inclusiveness and under-regulation.

Limitations in the model AI clauses

The model AI clauses come with some additional ‘caveat emptor’ warnings. As the Commission has stressed in the press release accompanying the model AI clauses:

The EU model contractual AI clauses contain provisions specific to AI Systems and on matters covered by the proposed AI Act, thus excluding other obligations or requirements that may arise under relevant applicable legislation such as the General Data Protection Regulation. Furthermore, these EU model contractual AI clauses do not comprise a full contractual arrangement. They need to be customized to each specific contractual context. For example, EU model contractual AI clauses do not contain any conditions concerning intellectual property, acceptance, payment, delivery times, applicable law or liability. The EU model contractual AI clauses are drafted in such a way that they can be attached as a schedule to an agreement in which such matters have already been laid down.

This is an important warning, as the remit of the model AI clauses is limited to the EU AI Act and, in the case of the light version, covers it only partially.

The link between model AI clauses and standards

However, the most significant shortcoming of the model AI clauses is that, by design, they do not include any substantive or material constraints or requirements on the development and use of AI. All substantive obligations are meant to be incorporated by reference to the (harmonised) standards to be developed under the EU AI Act, other sets of standards or, more generally, the state of the art. Plainly, there is no definition or requirement in the model AI clauses that establishes the meaning of eg trustworthiness—and there is thus no baseline safety net ensuring it. Similarly, most requirements are offloaded to (yet-to-emerge) standards or to the technical and organisational measures devised by the parties. For example:

  • Obligations on record-keeping (Art 5 high-risk model) refer to capabilities conforming ‘to state of the art and, if available, recognised standards or common specifications. <Optional: add, if available, a specific standard>’.

  • Measures to ensure transparency (Art 6 high-risk model) are highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way that the operation of the AI System is sufficiently transparent to enable the Public Organisation to reasonably understand the system’s functioning’. Moreover, the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is left entirely undefined in the relevant Annex (E) — thus leaving the option open for referral to emerging transparency standards.

  • Measures on human oversight (Art 7 high-risk model) are also highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way, including with appropriate human-machine interface tools, that it can be effectively overseen by natural persons as proportionate to the risks associated with the system’. Although there is some useful description of what ‘human oversight’ should mean as a minimum (Art 7(2)), the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is also left entirely undefined in the relevant Annex (F) — thus leaving the option open for referral to emerging ‘human on the loop’ standards.

  • Measures on accuracy, robustness and cybersecurity (Art 8 high-risk model) follow the same pattern. Annexes G and H on levels of accuracy and on measures to ensure an appropriate level of robustness, safety and cybersecurity are also blank. While there can be mandatory obligations stemming from other sources of EU law (eg the NIS 2 Directive), only partial aspects of cybersecurity will be covered, and not in all cases.

  • Measures on the ‘explainability’ of the AI (Art 13 high-risk model) fall short of imposing an absolute requirement of intelligibility of the AI outputs, as the focus is on a technical explanation, rather than a contextual or intuitive explanation.

All in all, the model AI clauses are primarily an empty regulatory shell. Operationalising them will require reliance on (harmonised) standards—eg on transparency, human oversight, accuracy, explainability …—or, most likely (at least until such standards are in place), significant additional concretisation by the public buyer seeking to rely on the model AI clauses.

For the reasons identified in my previous research, I think this is likely to generate regulatory tunnelling and to give the upper hand to AI providers in making sure they can comfortably live with requirements in any specific contract. The regulatory tunnelling stems from the fact that all meaningful requirements and constraints are offloaded to the (harmonised) standards to be developed. And it is no secret that the governance of the standardisation process falls well short of ensuring that the resulting standards will embed high levels of protection of the desired regulatory goals — some of which are very hard to define in ways that can be translated into procurement or contractual requirements anyway.

Moreover, public buyers with limited capabilities will struggle to use the model AI clauses in ways that meaningfully ‘establish responsibilities for trustworthy, transparent, and accountable development [and deployment] of AI technologies’—other than in relation to those standards. My intuition is that the content of the all too relevant schedules in the model AI clauses will either simply refer to emerging standards or, where there is no standard or the standard is for whatever reason considered inadequate, be left for negotiation with tech providers or be made part of the evaluation (eg tenderers will be required to detail how they propose to address eg accuracy). Whichever way this goes, it puts the public buyer in the position of rule-taker.

Only very few, well-resourced, highly skilled public buyers (if any) would be able to meaningfully flesh out a comprehensive set of requirements in the relevant annexes to give the model AI clauses sufficient bite. And they would not benefit much from the model AI clauses, as buyers of such sophistication are likely to have already come up with similar solutions. Therefore, at best, the contribution of the model AI clauses is rather marginal and, at worst, it comes with a significant risk of regulatory complacency.

Final thoughts

Indeed, given all of this, it is clear that the model AI clauses generate a risk if (non-sophisticated/most) public buyers think that relying on them will deal with the many and complex challenges inherent to the acquisition of AI. And an even bigger risk if we collectively think that the existence of such model AI clauses is all the regulation of AI procurement we need. This is not a criticism of the clauses in themselves, but rather of the technique of ‘regulation by contract’ that underlies them and of the broader approach followed by the European Commission and other regulators (including the UK’s)!

I have demonstrated how this is a flawed regulatory strategy in my forthcoming book Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance (OUP) and in many working papers resulting from the project with the same title. In my view, we need to do a lot more if we want to make sure that the public sector only procures and uses trustworthy AI technologies. We need to create a regulatory system that assigns to an independent authority both the permissioning of the procurement of AI and the certification of the standards underpinning such procurement. In the absence of such regulatory developments, we cannot meaningfully claim that the procurement of AI will be in line with the values and goals to be expected from ‘responsible’ AI use.

I will further explore these issues in a public lecture on 23 November 2023 at University College London. All welcome: ‘Responsibly Buying Artificial Intelligence: A Regulatory Hallucination?’ (hybrid), UCL Faculty of Laws.

Source: https://public-buyers-community.ec.europa....