G7 Guiding Principles and Code of Conduct on Artificial Intelligence -- some comments from a UK perspective

On 30 October 2023, G7 leaders published the Hiroshima Process International Guiding Principles for Advanced AI system (the G7 AI Principles), a non-exhaustive list of guiding principles formulated as a living document that builds on the OECD AI Principles to take account of recent developments in advanced AI systems. The G7 stresses that these principles should apply to all AI actors, when and as applicable, to cover the design, development, deployment and use of advanced AI systems.

The G7 AI Principles are supported by a voluntary Code of Conduct for Advanced AI Systems (the G7 AI Code of Conduct), which is meant to provide guidance to help seize the benefits and address the risks and challenges brought by these technologies.

The G7 AI Principles and Code of Conduct came just two days before the start of the UK’s AI Safety Summit 2023. Given that the UK is part of the G7 and has endorsed the G7 Hiroshima Process and its outcomes, the interaction between the G7’s documents, the UK Government’s March 2023 ‘pro-innovation’ approach to AI and its aspirations for the AI Safety Summit deserves some comment.

G7 AI Principles and Code of Conduct

The G7 AI Principles aim ‘to promote safe, secure, and trustworthy AI worldwide and will provide guidance for organizations developing and using the most advanced AI systems, including the most advanced foundation models and generative AI systems.’ The principles are meant to be cross-cutting, as they target ‘among others, entities from academia, civil society, the private sector, and the public sector.’ Importantly, also, the G7 AI Principles are meant to be a stop gap solution, as G7 leaders ‘call on organizations in consultation with other relevant stakeholders to follow these [principles], in line with a risk-based approach, while governments develop more enduring and/or detailed governance and regulatory approaches.’

The principles include the reminder that ‘[w]hile harnessing the opportunities of innovation, organizations should respect the rule of law, human rights, due process, diversity, fairness and non-discrimination, democracy, and human-centricity, in the design, development and deployment of advanced AI system’, as well as a reminder that organizations developing and deploying AI should not undermine democratic values, harm individuals or communities, ‘facilitate terrorism, enable criminal misuse, or pose substantial risks to safety, security, and human rights’. States (as AI users) are reminded of their ‘obligations under international human rights law to promote that human rights are fully respected and protected’ and private sector actors are called on to align their activities ‘with international frameworks such as the United Nations Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises’.

These are all very high-level declarations and aspirations that do not go much beyond pre-existing commitments and (soft) law norms, if at all.

The G7 AI Principles comprise a non-exhaustive list of 11 high-level regulatory goals that organizations should abide by ‘commensurate to the risks’—ie following the already mentioned risk-based approach—which introduces a first element of uncertainty, because the document does not establish any methodology or explanation on how risks should be assessed and tiered (one of the primary, and debated, features of the proposed EU AI Act). The principles are the following, prefaced by my own labelling between square brackets:

  1. [risk identification, evaluation and mitigation] Take appropriate measures throughout the development of advanced AI systems, including prior to and throughout their deployment and placement on the market, to identify, evaluate, and mitigate risks across the AI lifecycle;

  2. [misuse monitoring] Identify and mitigate vulnerabilities, and, where appropriate, incidents and patterns of misuse, after deployment including placement on the market;

  3. [transparency and accountability] Publicly report advanced AI systems’ capabilities, limitations and domains of appropriate and inappropriate use, to support ensuring sufficient transparency, thereby contributing to increase accountability.

  4. [incident intelligence exchange] Work towards responsible information sharing and reporting of incidents among organizations developing advanced AI systems including with industry, governments, civil society, and academia.

  5. [risk management governance] Develop, implement and disclose AI governance and risk management policies, grounded in a risk-based approach – including privacy policies, and mitigation measures, in particular for organizations developing advanced AI systems.

  6. [(cyber) security] Invest in and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle.

  7. [content authentication and watermarking] Develop and deploy reliable content authentication and provenance mechanisms, where technically feasible, such as watermarking or other techniques to enable users to identify AI-generated content.

  8. [risk mitigation priority] Prioritize research to mitigate societal, safety and security risks and prioritize investment in effective mitigation measures.

  9. [grand challenges priority] Prioritize the development of advanced AI systems to address the world’s greatest challenges, notably but not limited to the climate crisis, global health and education.

  10. [technical standardisation] Advance the development of and, where appropriate, adoption of international technical standards.

  11. [personal data and IP safeguards] Implement appropriate data input measures and protections for personal data and intellectual property.

Each of the principles is accompanied by additional guidance or precision, where possible, and this is further developed in the G7 Code of Conduct.

In my view, the list is a bit of a mixed bag.

There are some very general aspirations or steers that can hardly be considered principles of AI regulation, for example principle 9 setting a grand challenges priority and, possibly, principle 8 setting a risk mitigation priority beyond the ‘requirements’ of principle 1 on risk identification, evaluation and mitigation—which thus seems to boil down to the more specific steer in the G7 Code of Conduct for (private) organisations to ‘share research and best practices on risk mitigation’.

Quite how these principles could be complied with by current major AI developers seems rather difficult to foresee, especially in relation to principle 9. Most developers of generative AI or other AI applications linked to eg social media platforms will have a hard time demonstrating their engagement with this principle, unless we accept a general justification of ‘general purpose application’ or ‘dual use application’—which to me seems quite unpalatable. What is the purpose of this principle if eg it pushes organisations away from engaging with the rest of the G7 AI Principles? Or if organisations are allowed to gloss over it in any (future) disclosures linked to an eventual mechanism of commitment, chartering, or labelling associated with the principles? It seems like the sort of purely political aspiration that may have been better left aside.

Some other principles seem to push at an open door, such as principle 10 on the development of international technical standards. Again, the only meaningful detail seems to be in the G7 Code of Conduct, which specifies that ‘In particular, organizations also are encouraged to work to develop interoperable international technical standards and frameworks to help users distinguish content generated by AI from non-AI generated content.’ However, this is closely linked to principle 7 on content authentication and watermarking, so it is not clear how much it adds. Moreover, this further embeds the role of industry-led technical standards as a foundational element of AI regulation, with all the potential problems that arise from it (for some discussion from the perspective of regulatory tunnelling, see here and here).

Yet other principles present, as relatively soft requirements or ‘noble’ commitments, issues that are in reality legal requirements already binding on entities and States and that, in my view, should have been placed as hard obligations backed by a renewed commitment from G7 States to enforce them. These include principle 11 on personal data and IP safeguards, where the G7 Code of Conduct includes as an apparent afterthought that ‘Organizations should also comply with applicable legal frameworks’. In my view, this should be the starting point.

This reduces the list of AI Principles ‘proper’. But, even then, they can be further grouped and synthesised, in my view. For example, principles 1 and 5 are both about risk management, with the (outward-looking) governance layer of principle 5 seeking to give transparency to the (inward-looking) governance layer in principle 1. Principle 2 seems to simply seek to extend the need to engage with risk-based management post-market placement, which is also closely connected to the (inward-looking) governance layer in principle 1. All of them focus on the (undefined) risk-based approach to development and deployment of AI underpinning the G7’s AI Principles and Code of Conduct.

Some aspects of the incident intelligence exchange also relate to principle 1, while some other aspects relate to (cyber) security issues encapsulated in principle 6. However, given that this principle may be a placeholder for the development of some specific mechanisms of collaboration—either based on cyber security collaboration or other approaches, such as the much touted aviation industry’s—it may be treated separately.

Perhaps, then, the ‘core’ AI Principles arising from the G7 document could be trimmed down to:

  • Life-cycle risk-based management and governance, inclusive of principles 1, 2, and 5.

  • Transparency and accountability, principle 3.

  • Incident intelligence exchange, principle 4.

  • (Cyber) security, principle 6.

  • Content authentication and watermarking, principle 7 (though perhaps narrowly targeted to generative AI).

Most of the value in the G7 AI Principles and Code of Conduct thus arises from the pointers for collaboration, the more detailed self-regulatory measures, and the more specific potential commitments included in the latter. For example, in relation to the potential AI risks that are identified as potential targets for the risk assessments expected of AI developers (under guidance related to principle 1), or the desirable content of AI-related disclosures (under guidance related to principle 3).

It is however unclear how these principles will evolve when adopted at the national level, and to what extent they offer a sufficient blueprint to ensure international coherence in the development of the ‘more enduring and/or detailed governance and regulatory approaches’ envisaged by G7 leaders. It seems for example striking that both the EU and the UK have supported these principles, given that they have relatively opposing approaches to AI regulation—with the EU seeking to finalise the legislative negotiations on the first ‘golden standard’ of AI regulation and the UK taking an entirely deregulatory approach. Perhaps this is in itself an indication that, even at the level of detail achieved in the G7 AI Code of Conduct, the regulatory leeway is quite broad and still necessitates significant further concretisation for it to be meaningful in operational terms—as evidenced eg by the US President’s ‘Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence’, which calls for that concretisation and provides a good example of the many areas for detailed work required to translate high level principles into actionable requirements (even if it leaves enforcement still undefined).

How do the G7 Principles compare to the UK’s ‘pro-innovation’ ones?

In March 2023, the UK Government published its white paper ‘A pro-innovation approach to AI regulation’ (the ‘UK AI White Paper’; for a critique, see here). The UK AI White Paper indicated (at para 10) that its ‘framework is underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy:

  • Safety, security and robustness

  • Appropriate transparency and explainability

  • Fairness

  • Accountability and governance

  • Contestability and redress’.

A comparison of the UK and the G7 principles can show a few things.

First, that there are some areas where there seems to be a clear correlation—in particular concerning (cyber) security as a self-standing challenge requiring a direct regulatory focus.

Second, that it is hard to decide at which level to place incommensurable aspects of AI regulation. Notably, the G7 principles do not directly refer to fairness—while the UK does. However, the G7 Principles do spend some time in the preamble addressing the issue of fairness and unacceptable AI use (though in a woolly manner). Whether placing this type of ‘requirement’ at a level or other makes a difference (at all) is highly debatable.

Third, that there are different ways of ‘packaging’ principles or (soft) obligations. Just like some of the G7 principles are closely connected or fold into each other (as above), so do the UK’s principles in relation to the G7’s. For example, the G7 packaged together transparency and accountability (principle 3), while the UK had them separated. While the UK explicitly mentioned the issue of AI explainability, this remains implicit in the G7 principles (also in principle 3).

Finally, in line with the considerations above, that distinct regulatory approaches only emerge or become clear once the ‘principles’ become specific (so they arguably stop being principles). For example, it seems clear that the G7 Principles aspire to higher levels of incident intelligence governance and set a more specific target of generative AI watermarking than the UK’s. However, whether the G7 or the UK principles are equally or more demanding on any other dimension of AI regulation is close to impossible to establish. In my view, this further supports the need for a much more detailed AI regulatory framework—else, technical standards will entirely occupy that regulatory space.

What do the G7 AI Principles tell us about the UK’s AI Safety Summit?

The Hiroshima Process that has led to the adoption of the G7 AI Principles and Code of Conduct emerged from the Ministerial Declaration of The G7 Digital and Tech Ministers’ Meeting of 30 April 2023, which explicitly stated that:

‘Given that generative AI technologies are increasingly prominent across countries and sectors, we recognise the need to take stock in the near term of the opportunities and challenges of these technologies and to continue promoting safety and trust as these technologies develop. We plan to convene future G7 discussions on generative AI which could include topics such as governance, how to safeguard intellectual property rights including copyright, promote transparency, address disinformation, including foreign information manipulation, and how to responsibly utilise these technologies’ (at para 47).

The UK Government’s ambitions for the AI Safety Summit largely focus on those same issues, albeit within the very narrow confines of ‘frontier AI’, which it has defined as ‘highly capable general-purpose AI models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models’. While the UK Government has published specific reports to focus discussion on (1) Capabilities and risks from frontier AI and (2) Emerging Processes for Frontier AI Safety, it is unclear how the level of detail of such a narrow approach could translate into broader international commitments.

The G7 AI Principles already claim to tackle ‘the most advanced AI systems, including the most advanced foundation models and generative AI systems (henceforth "advanced AI systems")’ within their scope. It seems unclear that such an approach would be based on a lack of knowledge or understanding of the detail the UK has condensed in those reports. It rather seems that the G7 was not ready to move quickly to a level of detail beyond that included in the G7 AI Code of Conduct. It is thus hard to fathom that significant further developments beyond the G7 AI Principles and Code of Conduct can be expected just two days after they were published.

Moreover, although the UK Government is downplaying the fact that eg Chinese participation in the AI Safety Summit is unclear and potentially rather marginal, it seems that, at best, the UK AI Safety Summit will be an opportunity for a continued conversation between G7 countries and a few others. It is also unclear whether significant progress will be made in a forum that seems rather clearly tilted towards industry voice and influence.

Let’s wait and see what the outcomes are, but I am not optimistic for significant progress other than, worryingly, a risk of further displacement of regulatory decision-making towards industry and industry-led (future) standards.

More model contractual AI clauses -- some comments on the SCL AI Clauses

Following the launch of the final version of the model contractual AI clauses sponsored by the European Commission earlier this month, the topic of how to develop and how to use contractual model clauses for AI procurement is getting hotter. As part of its AI Action Plan, New York City has announced that it is starting work to develop its own model clauses for AI procurement (to be completed in 2025). We can expect to see a proliferation of model AI clauses as more ‘AI legislation’ imposes constraints on contractual freedom and compliance obligations, and as different model clauses are revised to (hopefully) capture the learning from current experimentation in AI procurement.

Although not (closely) focused on procurement, a new set of interesting AI contractual clauses has been released by the Society for Computers & Law (SCL) AI Group (thanks to Gisele Waters for bringing them to my attention on LinkedIn!). In this post, I reflect on some aspects of the SCL AI clauses and try to answer Gisele’s question/challenge (below).

SCL AI Clauses

The SCL AI clauses have a clear commercial orientation and are meant as a starting point for supplier-customer negotiations, which is reflected in the fact that the proposed clauses contain two options: (1) a ‘pro-supplier’ drafting based on off-the-shelf provision, and (2) a ‘pro-customer’ drafting based on a bespoke arrangement. Following that commercial logic, most of the SCL AI clauses focus on an allocation of obligations (and thus costs and liability) between the parties (eg in relation to compliance with legal requirements).

The clauses include a few substantive requirements implicit in the allocation of the respective obligations (eg on data or third party licences), but mostly refer to detailed schedules for which there is no default proposal, or to industry standards (and thus have this limitation in common with eg the EU’s model AI clauses). The SCL AI clauses do contain some drafting notes that would help identify issues needing specific regulation in the relevant schedules, although this guidance necessarily remains rather abstract or generic.

This pro-supplier/pro-customer orientation prompted Gisele’s question/challenge, which is whether ‘there is EVER an opportunity for government (customer-buyer) to be better able to negotiate the final language with clauses like these in order to weigh the trade offs between interests?’, especially bearing in mind that the outcome of the negotiations could be strongly pro-supplier, strongly pro-customer, or balanced (or anything in between). I think that answering this question requires exploring what pro-supplier or pro-customer may mean in this specific context.

From a substantive regulation perspective, the SCL AI clauses include a few interesting elements, such as an obligation to establish a circuit-breaker capable of stopping the AI (aka an ‘off button’) and a roll-back obligation (to an earlier, non-faulty version of the AI solution) where the AI is malfunctioning or this is necessary to comply with applicable law. However, most of the substantive obligations are established by reference to ‘Good Industry Practice’, which requires some further unpacking.

SCL AI Clauses and ‘Good Industry Practice’

Most of the crucial proposed clauses refer to the benchmark of ‘Good Industry Practice’ as a primary qualifier for the relevant obligations. The proposed clause on explainability is a good example. The SCL AI clause (C1.15) reads as follows:

C1.15 The Supplier will ensure that the AI System is designed, developed and tested in a way which ensures that its operation is sufficiently transparent to enable the Customer to understand and use the AI System appropriately. In particular, the Supplier will produce to the Customer, on request, information which allows the Customer to understand:

C1.15.1 the logic behind an individual output from the AI System; and

C1.15.2 in respect of the AI System or any specific part thereof, which features contributed most to the output of the AI System, in each case, in accordance with Good Industry Practice.

A first observation is that the SCL AI clauses seem to presume that off-the-shelf AI solutions would not be (necessarily) explainable, as they include no clause under the ‘pro-supplier’ version.

Second, the ‘pro-customer’ version both limits the types of explanation that would be contractually owed (to a model-level or global explanation under C1.15.2 and a limited decision-level or local explanation under C1.15.1 — which leaves out eg a counterfactual explanation, and does not set any specific requirements on how the explanation needs to be produced, eg whether a ‘post hoc’ explanation is acceptable and, if so, how it should be produced) and qualifies those obligations in two important ways: (1) the overall requirement is that the AI system’s operation should be ‘sufficiently transparent’, with ‘sufficient’ creating a lot of potential issues here; and (2) the reference to ‘Good Industry Practice’ [more on this below].
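To make the distinction between the two types of explanation owed under C1.15 more concrete, the following minimal sketch is my own illustration (not drawn from the SCL AI clauses), using a hypothetical linear credit-scoring model with made-up feature names and coefficients, of what a ‘global’ (model-level) explanation under C1.15.2 and a ‘local’ (decision-level) explanation under C1.15.1 could look like in practice:

```python
# Illustrative sketch only (my own example, not drawn from the SCL AI clauses):
# the difference between a 'global' (model-level) explanation under C1.15.2 and
# a 'local' (decision-level) explanation under C1.15.1, shown for a hypothetical
# linear credit-scoring model with made-up coefficients.
import numpy as np

feature_names = ["income", "existing_debt", "years_at_address"]
coefficients = np.array([0.8, -1.2, 0.3])   # hypothetical trained weights
intercept = 0.1

def global_explanation():
    """Model-level: which features matter most overall, ranked by |weight|."""
    return sorted(zip(feature_names, coefficients),
                  key=lambda kv: abs(kv[1]), reverse=True)

def local_explanation(x: np.ndarray):
    """Decision-level: how each feature contributed to this individual output."""
    contributions = coefficients * x
    output = float(contributions.sum() + intercept)
    return dict(zip(feature_names, contributions)), output

print(global_explanation())                          # overall feature ranking
print(local_explanation(np.array([1.5, 0.4, 2.0])))  # contributions for one applicant
```

Even for such a trivially interpretable model, the clause leaves open whether this sort of output would count as ‘sufficiently transparent’; for complex models, where both types of explanation have to be approximated post hoc, the open questions are far harder.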

The issue of transparency is similarly problematic in its more general treatment under another specific clause (C4.6), which also only has a ‘pro-customer’ version:

C4.6 The Supplier warrants that, so far as is possible [to achieve the intended use of the AI System / comply with the Specification], the AI System is transparent and interpretable [such that its output can be traced back to the input data] .

The qualifier ‘so far as is possible’ is again potentially quite problematic here, as are the open-ended references to transparency and interpretability of the system (with a potential conflict between interpretability for the purposes of this clause and explainability under C1.15).

What I find interesting about this clause is that the drafting notes explain that:

… the purpose of this provision is to ensure that the Supplier has not used an overly-complex algorithm if this is unnecessary for the intended use of the AI System or to comply with its Specification. That said, effectiveness and accuracy are often trade-offs for transparency in AI models.

From this perspective, I think the clause should be retitled and entirely redrafted to make explicit that the purpose is to establish a principle of ‘AI minimisation’, in the sense of the supplier guaranteeing that the AI system is the least complex that can provide the desired functionality — which, of course, has the tricky issues of trade-offs and of establishing the desired functionality itself to work around (and which in a procurement context would have been dealt with pre-contract, eg in the context of technical specifications and/or tender evaluation). Interestingly, this is another issue where reference could be made to ‘Good Industry Practice’, if one accepted that it should be best practice to always use the most explainable/interpretable and simplest model available for a given task.
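Operationally, such an ‘AI minimisation’ principle could boil down to a simple selection rule: among the candidate models that meet the agreed functional threshold, pick the least complex one. The sketch below is my own illustration of that rule under assumed names and figures (CandidateModel, complexity_rank, the accuracy threshold are all hypothetical); nothing in it comes from the SCL AI clauses or any existing standard:

```python
# Illustrative sketch only: my own assumption of how an 'AI minimisation'
# clause could be operationalised; none of these names or figures come from
# the SCL AI clauses.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class CandidateModel:
    name: str
    complexity_rank: int    # 1 = simplest (eg linear model); higher = more complex
    test_accuracy: float    # measured on an agreed held-out dataset

def select_minimal_model(candidates: List[CandidateModel],
                         required_accuracy: float) -> Optional[CandidateModel]:
    """Return the simplest candidate that meets the agreed functional threshold."""
    adequate = [m for m in candidates if m.test_accuracy >= required_accuracy]
    if not adequate:
        return None  # no candidate meets the Specification: renegotiate or retest
    return min(adequate, key=lambda m: m.complexity_rank)

# Hypothetical figures for illustration:
candidates = [
    CandidateModel("logistic_regression", complexity_rank=1, test_accuracy=0.91),
    CandidateModel("gradient_boosting", complexity_rank=2, test_accuracy=0.93),
    CandidateModel("deep_neural_network", complexity_rank=3, test_accuracy=0.935),
]
print(select_minimal_model(candidates, required_accuracy=0.90).name)
# -> logistic_regression: the simplest model already meets the agreed threshold
```

The hard part, of course, is not the selection rule but agreeing in the Specification what the functional threshold is and how complexity is to be measured — which is precisely where ‘Good Industry Practice’ would be invoked.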

As mentioned, reference to ‘Good Industry Practice’ is used extensively in the SCL AI clauses, including on crucial issues such as: explainability (above), user manual/user training, preventing unlawful discrimination, security (which is inclusive of cyber security and some aspects of data protection/privacy), or quality standards. The drafting notes are clear that

… while parties often refer to ‘best practice’ or ‘good industry practice’, these standards can be difficult to apply in developing industry. Accordingly a clear Specification is required, …

Which is the reason why the SCL AI clauses foresee that ‘Good Industry Practice’ will be a defined contract term, whereby the parties will specify the relevant requirements and obligations. And here lies the catch.

Defining ‘Good Industry Practice’?

In the SCL AI clauses, all references to ‘Good Industry Practice’ are used as qualifiers in the pro-customer version of the clauses. It is possible that the same term would be of relevance to establishing whether the supplier had discharged its reasonable duties/best efforts under the pro-supplier version (where the term would be defined but not explicitly used). In both cases, the need to define ‘Good Industry Practice’ is the Achilles heel of the model clauses, as well as a potential Trojan horse for customers seeking a seemingly pro-customer contractual design.

The fact is that the extent of the substantive obligations arising from the contract will entirely depend on how the concept of ‘Good Industry Practice’ is defined and specified. This leaves even seemingly strongly ‘pro-customer’ contracts exposed to weak substantive protections. The biggest challenge for buyers/procurers of AI will be that (1) it will be hard to know how to define the term and what standards to refer to, and (2) it will be difficult to monitor compliance with those standards, especially where they establish eg mechanisms of self-assessment by the tech supplier as the primary or sole quality control mechanism.

So, my answer to Gisele’s question/challenge would be that the SCL AI clauses, much like the EU’s, do not (and cannot?) go far enough in ensuring that the contract for the procurement/purchase of AI embeds adequate substantive requirements. The model clauses are helpful in understanding who needs to do what and when, and thus who shoulders the relevant cost and risk. But they do not address the all-important question of how it needs to be done. And that is the crucial issue that will determine whether the contract (and the AI solution) really is in the public buyer’s interest and, ultimately, in the public interest.

In a context where tech providers (almost always) have the upper hand in negotiations, this foundational weakness is all important, as suppliers could well ‘agree to pro-customer drafting’ and then immediately deactivate it through the more challenging and technical definition (and implementation) of ‘Good Industry Practices’.

That is why I think we need to counter this regulatory tunnelling risk and this foundational shortcoming of ‘AI regulation by contract’ by creating clear and binding requirements on the how (ie the ‘good (industry) practice’ or technical standards). The emergence of model AI contract clauses makes it clear to me that the most efficient contract design needs to refer to external benchmarks. Establishing adequate protections and an adequate balance of risks and benefits (from a social perspective) hinges on this. The contract can then deal with the apportionment of the burdens, obligations, costs and risks stemming from the already set requirements.

So I would suggest that the focus needs to be squarely on developing the regulatory architecture that will lead us to the development of such mandatory requirements and standards for the procurement and use of AI by the public sector — which may then become adequate good industry practice for strictly commercial or private contracts. My proposal in that regard is sketched out here.

Final EU model contractual AI Clauses available -- some thoughts on regulatory tunnelling

Source: https://tinyurl.com/mrx9sbz8.

The European Commission has published the final version of the EU model contractual AI clauses to pilot in procurements of AI, which have been ‘developed for pilot use in the procurement of AI with the aim to establish responsibilities for trustworthy, transparent, and accountable development of AI technologies between the supplier and the public organisation.’

The model AI clauses have been developed by reference to the (future) obligations arising from the EU AI Act, currently under advanced stages of negotiation. This regulatory technique simply seeks to allow public buyers to ensure compliance with the EU AI Act by cascading the relevant obligations and requirements down to tech providers (largely on a back-to-back basis). By the same regulatory logic, this technique will be a conveyor belt for the shortcomings of the EU AI Act, which will be embedded in public contracts using the clauses. It is thus important to understand the shortcomings inherent to this approach and to the model AI clauses, before assuming that their use will actually ensure the ‘trustworthy, transparent, and accountable development [and deployment] of AI technologies’. Much more is needed than mere reliance on the model AI clauses.

Two sets of model AI clauses

The EU AI Act will not be applicable to all types of AI use. Remarkably, most requirements will be limited to ‘high-risk AI uses’ as defined in its Article 6. This immediately translates into the generation of two sets of model AI clauses: one for ‘high-risk’ AI procurement, which embeds the requirements expected to arise from the EU AI Act once finalised, and another ‘light version’ for non-high-risk AI procurement, which would support the voluntary extension of some of those requirements to the procurement of AI for other uses, or even to the use of other types of algorithmic solutions not meeting the regulatory definition of AI.

A first observation is that the controversy surrounding the definition of ‘high-risk’ in the EU AI Act immediately carries over to the model AI clauses and to the choice between the ‘demanding’ and light versions. While the original proposal of the EU AI Act contained a numerus clausus of high-risk uses (which was already arguably too limited, see here), the trilogue negotiations could well end up suppressing a pre-defined classification and leaving it to AI providers to (self)assess whether the use would be ‘high-risk’.

This has been heavily criticised in a recent open letter. If the final version of the EU AI Act ended up embedding such a self-assessment of what uses are bound to be high-risk, there would be clear risks of gaming of the self-assessment to avoid compliance with the heightened obligations under the Act (and it is unclear that the system of oversight and potential fines foreseen in the EU AI Act would suffice to prevent this). This would directly translate into a risk of gaming (or strategic opportunism) in the choice between ‘demanding’ vs light version of the model AI clauses by public buyers as well.

As things stand today, it seems that most procurement of AI will be subject to the light version of the model AI clauses, where contracting authorities will need to decide which clauses to use and which standards to refer to. Importantly, the light version does not include default options in relation to quality management, conformity assessments, corrective actions, inscription in an AI register, or compliance and audit (some of which are also optional under the ‘demanding’ model). This means that, unless public buyers are familiar with both sets of model AI clauses, taking the light version as a starting point already generates a risk of under-inclusiveness and under-regulation.

Limitations in the model AI clauses

The model AI clauses come with some additional ‘caveat emptor’ warnings. As the Commission has stressed in the press release accompanying the model AI clauses:

The EU model contractual AI clauses contain provisions specific to AI Systems and on matters covered by the proposed AI Act, thus excluding other obligations or requirements that may arise under relevant applicable legislation such as the General Data Protection Regulation. Furthermore, these EU model contractual AI clauses do not comprise a full contractual arrangement. They need to be customized to each specific contractual context. For example, EU model contractual AI clauses do not contain any conditions concerning intellectual property, acceptance, payment, delivery times, applicable law or liability. The EU model contractual AI clauses are drafted in such a way that they can be attached as a schedule to an agreement in which such matters have already been laid down.

This is an important warning, as the sole remit of the model AI clauses links back to the EU AI Act and, in the case of the light version, only partially.

The link between model AI clauses and standards

However, the most significant shortcoming of the model AI clauses is that, by design, they do not include any substantive or material constraints or requirements on the development and use of AI. All substantive obligations are meant to be incorporated by reference to the (harmonised) standards to be developed under the EU AI Act, other sets of standards or, more generally, the state-of-the-art. Plainly, there is no definition or requirement in the model AI clauses that establishes the meaning of eg trustworthiness—and there is thus no baseline safety net ensuring it. Similarly, most requirements are offloaded to (yet to emerge) standards or the technical and organisational measures devised by the parties. For example,

  • Obligations on record-keeping (Art 5 high-risk model) refer to capabilities conforming ‘to state of the art and, if available, recognised standards or common specifications. <Optional: add, if available, a specific standard>’.

  • Measures to ensure transparency (Art 6 high-risk model) are highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way that the operation of the AI System is sufficiently transparent to enable the Public Organisation to reasonably understand the system’s functioning’. Moreover, the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is left entirely undefined in the relevant Annex (E) — thus leaving the option open for referral to emerging transparency standards.

  • Measures on human oversight (Art 7 high-risk model) are also highly qualified: ‘The Supplier ensures that the AI System has been and shall be designed and developed in such a way, including with appropriate human-machine interface tools, that it can be effectively overseen by natural persons as proportionate to the risks associated with the system’. Although there is some useful description of what ‘human oversight’ should mean as a minimum (Art 7(2)), the detail of the technical and organisational measures that need to be implemented to reach those (qualified) goals is also left entirely undefined in the relevant Annex (F) — thus leaving the option open for referral to emerging ‘human on the loop’ standards.

  • Measures on accuracy, robustness and cybersecurity (Art 8 high-risk model) follow the same pattern. Annexes G and H on levels of accuracy and on measures to ensure an appropriate level of robustness, safety and cybersecurity are also blank. While there can be mandatory obligations stemming from other sources of EU law (eg the NIS 2 Directive), only partial aspects of cybersecurity will be covered, and not in all cases.

  • Measures on the ‘explainability’ of the AI (Art 13 high-risk model) fall short of imposing an absolute requirement of intelligibility of the AI outputs, as the focus is on a technical explanation, rather than a contextual or intuitive explanation.

All in all, the model AI clauses are primarily an empty regulatory shell. Operationalising them will require reliance on (harmonised) standards—eg on transparency, human oversight, accuracy, explainability…—or, most likely (at least until such standards are in place), significant additional concretisation by the public buyer seeking to rely on the model AI clauses.

For the reasons identified in my previous research, I think this is likely to generate regulatory tunnelling and to give the upper hand to AI providers in making sure they can comfortably live with requirements in any specific contract. The regulatory tunnelling stems from the fact that all meaningful requirements and constraints are offloaded to the (harmonised) standards to be developed. And it is no secret that the governance of the standardisation process falls well short of ensuring that the resulting standards will embed high levels of protection of the desired regulatory goals — some of which are very hard to define in ways that can be translated into procurement or contractual requirements anyway.

Moreover, public buyers with limited capabilities will struggle to use the model AI clauses in ways that meaningfully ‘establish responsibilities for trustworthy, transparent, and accountable development [and deployment] of AI technologies’—other than in relation to those standards. My intuition is that the content of the all too relevant schedules in the model AI clauses will either simply refer to emerging standards or, where there is no standard or the standard is for whatever reason considered inadequate, be left for negotiation with tech providers, or be part of the evaluation (eg tenderers will be required to detail how they propose to regulate eg accuracy). Whichever way this goes, it puts the public buyer in a position of rule-taker.

Only very few, well-resourced, highly skilled public buyers (if any) would be able to meaningfully flesh out a comprehensive set of requirements in the relevant annexes to give the model AI clauses sufficient bite. And they would not benefit much from the model AI clauses, as buyers of that sophistication are likely to have already come up with similar solutions. Therefore, at best, the contribution of the model AI clauses is rather marginal and, at worst, it comes with a significant risk of regulatory complacency.

Final thoughts

Indeed, given all of this, it is clear that the model AI clauses generate a risk if (non-sophisticated/most) public buyers think that relying on them will deal with the many and complex challenges inherent to the acquisition of AI. And an even bigger risk if we collectively think that the existence of such model AI clauses is all the regulation of AI procurement we need. This is not a criticism of the clauses in themselves, but rather of the technique of ‘regulation by contract’ that underlies them and of the broader approach followed by the European Commission and other regulators (including the UK’s)!

I have demonstrated how this is a flawed regulatory strategy in my forthcoming book Digital Technologies and Public Procurement. Gatekeeping and Experimentation in Digital Public Governance (OUP) and in many working papers resulting from the project with the same title. In my view, we need to do a lot more if we want to make sure that the public sector only procures and uses trustworthy AI technologies. We need to create a regulatory system that assigns to an independent authority both the permissioning of the procurement of AI and the certification of the standards underpinning such procurement. In the absence of such regulatory developments, we cannot meaningfully claim that the procurement of AI will be in line with the values and goals to be expected from ‘responsible’ AI use.

I will further explore these issues in a public lecture on 23 November 2023 at University College London. All welcome: Hybrid | Responsibly Buying Artificial Intelligence: A Regulatory Hallucination? | UCL Faculty of Laws - UCL – University College London.


Some thoughts on the need to rethink the right to good administration in the digital context

Colleagues at The Digital Constitutionalist have put together a really thought-provoking symposium on ‘Safeguarding the Right to Good Administration in the Age of AI’. I had the pleasure of contributing my own views on the need to extend and broaden good administration guarantees in the context of AI-assisted decision-making. I thoroughly recommend reading all contributions to the symposium, as this is an area of likely development in the EU Administrative Law space.

Purchasing uncertain or indefinite requirements – guest post by Șerban Filipon

I am delighted to present to How to Crack a Nut readers an outline of my recently published book Framework Agreements, Supplier Lists and Other Public Procurement Tools: Purchasing Uncertain or Indefinite Requirements (Hart Publishing, 2023). It is the result of years of doctoral research in public procurement law and policy at the University of Nottingham, and it incorporates my practical experience as well. After the end of the PhD, I updated and further developed the research for publication as a monograph.

Framework agreements, supplier lists, ID/IQ contracts, dynamic purchasing systems, and other tools of this kind are very widely used throughout the world; and tend to be quite complex too in some respects. Paradoxically, the subject has so far received rather limited attention, particularly when it comes to analysing the phenomenon systematically, across a variety of (very) different public procurement systems and/or international instruments. The book covers this gap, mainly through legal contextual analysis with comparative perspectives.

If in your professional or academic activity you come across questions involving matters of the kind presented below, then you are very likely to benefit from reading and studying this book.

Topics covered in the book

Given the complexity and multiple dimensions of the subject, I have structured some examples of possible questions/matters into a few categories, for illustration purposes, but please take an open and flexible view when going through them (as there is much more covered in the book!).

(A) Regulation and policy

  • aspects to consider when regulating (or seeking to improve the regulation of and policy regarding) tools for procurement of recurrent, uncertain, or indefinite requirements, in order to support the wider objectives of your relevant public procurement system, including (where applicable) how such regulation should be integrated with other existing regulation, for instance, with regulation mainly focused on ‘one-off’ purchases;

  •  what can be learnt from various procurement systems or international instruments, and how can (certain) approaches or elements in those systems become relevant when regulating your system, including through adaptation and conceptual streamlining;

  •  addressing legal review in relation to tools for procurement of recurrent, uncertain, or indefinite requirements.

(B) Regulatory interpretation and application

  • how can existing regulation on framework agreements, supplier lists, etc, be interpreted/applied in relation to areas where such regulation is contradictory, inconsistent, or silent;

  • to what extent the general procurement regulation (usually relating to ‘one-off’ procurements) is, or should be, applicable to framework agreements, supplier lists, etc, and how to address ‘grey’ or ambiguous areas in this interaction.

(C) Practice and operations

  • designing and planning the type of procurement arrangement (tool) that could be appropriate for specific circumstances, i.e., framework agreement or supplier list, and downstream, the sub-type/configuration of framework or supplier list, and its relevant features (choosing between various possible options), thus supporting procurement portfolio planning and implementation at the purchaser’s level; considerations on operating the designed arrangement;

  • what criteria and procedures could be used for awarding call-offs under a framework agreement without reopening competition at call-off stage (and using these in a balanced and appropriate way, depending on circumstances);

  • to what extent a call-off under, say, a framework arrangement could consist of a (secondary) framework, the conditions that should be taken into consideration for this approach, and the circumstances in which it can be useful.

(D) Research, education, and training 

  • conceptual realignment, redefining, and adjustment to facilitate understanding of the phenomenon across various public procurement systems that regulate, address and classify (very) differently the arrangements/tools for procurement of uncertain or indefinite requirements;

  • taxonomies of potential arrangements, and identifying potential arrangements currently not expressly provided for in regulation;

  • conceptual framework for analysing procurement of uncertain or indefinite requirements – across various procurement systems or international instruments, or using a 360-degree perspective concerning a specific system or tool, rather than a perspective confined to a specific procurement system.

Scope of the research

These types of questions give a flavour of what the book does and its approach; certainly the book covers much more and offers an in-depth appreciation of this vital topic across public procurement systems and legal instruments.

To achieve this, and to be of wide relevance throughout the world, this monograph analyses in-depth seven different public procurement systems, using the same structure of analysis. The choice of systems and/or international legal instruments was carefully made to support such relevance, by taking into account a mixture of: legal and administrative traditions; experience with public procurement and public procurement regulation; specific experience in regulating and using procurement tools for recurrent, uncertain, or indefinite requirements; and of objectives pursued through public procurement regulation.

The book thus looks specifically and in context at: the UNCITRAL Model Law on public procurement, the World Bank’s procurement rules and policy for investment project financing, the US federal procurement system, the EU public procurement law and policy, and its transposition in two current EU member states – France and Romania – and the UK pre- and post-Brexit.

Systematic approach

By using the same structure for analysis both vertically (into each relevant tool under each procurement system or legal instrument investigated), as well as transversally, across all tools, systems and legal instruments investigated, the book discovers and reveals a whole ‘universe’ of approaches (current and potential) towards procurement of recurrent, uncertain, or indefinite requirements. The book presents this ‘universe’ in a clear and orderly fashion that is meaningful for readers anywhere in the world, and, on this basis the book articulates a discipline (a conceptual framework) for analysing and addressing the regulation of and policy on procurement of recurrent, uncertain, or indefinite requirements.

The purpose of this newly articulated discipline is both to offer an understanding of the overall phenomenon investigated (within and across the systems and legal instruments analysed in the book), and to enable the design and development of bespoke solutions concerning the regulation, policy, and practice of procurement of recurrent, uncertain, or indefinite requirements. By bespoke solutions in this context, I mean solutions that are relevant to and respond to the specific features and objectives of the procurement system in question or of the specific procurement exercise in question. From this perspective, I consider the book is of interest both for readers working in the procurement systems specifically analysed in the monograph and for readers in many, many other procurement systems worldwide.

Main arguments and findings

With the vast coverage, complexity and variety of systems analysed, the arguments of the book (as well as findings) are multi-dimensional. The main ones are outlined here.

Firstly, I argue that whilst significant developments have occurred in this area of procurement of recurrent, uncertain, or indefinite requirements during the last decades, regulation in all systems / legal instruments analysed continues to be, in various ways and to various degrees, work in progress. To unleash the potential that these arrangements have for enhanced efficiency and effectiveness in public procurement, more balanced regulation is needed, and more work is needed on regulatory, policy, and implementation matters.

The systems and legal instruments researched by the monograph tend to leave aside various potential configurations of arrangements, either by way of prohibiting them or by not expressly providing for them. Thus, a second main argument I make is that wider categories/configurations (or ranges) of potential arrangements should be expressly permitted in regulation, but subject to further – specifically tailored – regulatory controls and conditions concerning their use. These include procedural and transparency measures (which can be facilitated nowadays thanks to electronic means), as well as legal review and oversight mechanisms designed (and provided for in the relevant legal instrument) to address the specific matters that may arise in preparing and operating arrangements for procurement of uncertain or indefinite requirements.

Certainly, any such expansion of coverage, as well as the specific safeguards referred to above, would be different (and differently approached) from system to system, so as to fit and respond to the relevant procurement context.

With a couple of notable exceptions, a trend in many of the systems or international instruments investigated in the book has reflected reluctance toward recognising and permitting the general use of supplier-list type arrangements (like qualification systems in the EU utilities sector). The third argument I make here is that this approach is unjustified and, in fact, precludes purchasers from using a tool that can be particularly useful in certain situations if it is made subject to appropriate procedural, transparency, and legal review measures, as discussed above.

Conversely – with the notable exception of the UNCITRAL Model Law on public procurement, which can be regarded as a benchmark in many respects concerning the regulation of framework arrangements – a rather lax approach seems to govern framework-type arrangements. Regulation in many of the systems investigated in the monograph tends to permit a rather liberal use of framework arrangements, with insufficient conditions and/or controls in various respects, which can affect their beneficial use and/or may foster abuse. However, in other respects, the regulation could be too rigid. So, in addition to the need for more balanced regulation, my fourth argument relates to encouraging the use of framework arrangements for security of supply, and for planning for and responding to crises (catastrophic events), rather than mainly (just) for aggregation of (recurrent) demand, economies of scale, and administrative convenience.

Finally, I argue that all the above can be significantly supported by developing a specific area of public procurement regulation, to address – expressly, systematically, and directly – the complexities and features of procuring uncertain or indefinite requirements. In contrast, so far, the procurement systems / legal instruments analysed tend to address many issues arising from procurement of recurrent, uncertain, or indefinite requirements, indirectly through the lenses of ‘one-off’ procurements, by way of exception – or by implication – from the rules on ‘one-off’ procurements.

In my view, fundamental changes in approaching regulation and policy of framework arrangements and supplier lists in public procurement are strongly needed, as explained above. The sooner they occur, the better the chances for improvement in efficiency and effectiveness in and through public procurement.

For those wishing to deepen their understanding of this area, I am very pleased to attach here a voucher that provides a 20% discount on the book price. The book can be ordered using this link (and inserting the relevant discount code shown in the voucher).

I wish you an enjoyable and, most importantly, useful reading!

Șerban Filipon

With over 20 years of international experience in public procurement professional consulting services, including procurement reform, capacity building, and implementation, as well as in procurement management and research, Șerban Filipon (MCIPS) holds a PhD in public procurement law from the University of Nottingham, UK (2018), and an MSc in Procurement Management awarded with distinction by the University of Strathclyde, UK (2006).

Șerban Filipon is a senior procurement consultant.

Digital technologies and public procurement -- new monograph available for pre-order

My forthcoming monograph Digital Technologies and Public Procurement: Gatekeeping and Experimentation in Digital Public Governance is now advertised and ready for pre-order from OUP’s website.

Here is the book description, in case of interest:

The digital transformation of the public sector has accelerated. States are experimenting with technology, seeking more streamlined and efficient digital government and public services. However, there are significant concerns about the risks and harms to individual and collective rights under new modes of digital public governance. Several jurisdictions are attempting to regulate digital technologies, especially artificial intelligence; however, regulatory effort primarily concentrates on technology use by companies, not by governments. The regulatory gap underpinning public sector digitalisation is growing.

As it controls the acquisition of digital technologies, public procurement has emerged as a 'regulatory fix' to govern public sector digitalisation. It seeks to ensure through its contracts that public sector digitalisation is trustworthy, ethical, responsible, transparent, fair, and (cyber) safe.

However, in Digital Technologies and Public Procurement: Gatekeeping and Experimentation in Digital Public Governance, Albert Sanchez-Graells argues that procurement cannot perform this gatekeeping role effectively. Through a detailed case study of procurement digitalisation as a site of unregulated technological experimentation, he demonstrates that relying on 'regulation by contract' creates a false sense of security in governing the transition towards digital public governance. This leaves the public sector exposed to the 'policy irresistibility' that surrounds hyped digital technologies.

Bringing together insights from political economy, public policy, science, technology, and legal scholarship, this thought-provoking book proposes an alternative regulatory approach and contributes to broader debates of digital constitutionalism and digital technology regulation.

AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?

The recording and slides of the public lecture on ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ I gave at the University of Bristol Law School on 4 July 2023 are now available. As always, any further comments most warmly received at: a.sanchez-graells@bristol.ac.uk.

This lecture brought my research project to an end. I will now focus on finalising the manuscript and sending it off to the publisher, and then take a break for the rest of the summer. I will share details of the forthcoming monograph in a few months. I hope to restart blogging in September. In the meantime, I wish all HTCaN friends all the best. Albert

Two policy briefings on digital technologies and procurement

Now that my research project ‘Digital technologies and public procurement. Gatekeeping and experimentation in digital public governance’ nears its end, some outputs start to emerge. In this post, I would like to highlight two policy briefings summarising some of my top-level policy recommendations, and providing links to more detailed analysis. All materials are available in the ‘Digital Procurement Governance’ tab.

Policy Briefing 1: ‘Guaranteeing public sector adoption of trustworthy AI - a task that should not be left to procurement’

What's the rush -- some thoughts on the UK's Foundation Model Taskforce and regulation by Twitter

I have been closely following developments on AI regulation in the UK, as part of the background research for the joint submission to the public consultation closing on Wednesday (see here and here). Perhaps surprisingly, the biggest developments do not concern the regulation of AI under the devolved model described in the ‘pro-innovation’ white paper, but its displacement outside existing regulatory regimes—both in terms of funding, and practical power.

Most of the activity and investment is not channelled towards existing resource-strained regulators to support them in their task of issuing guidance on how to deal with AI risks and harms—which stems from the white paper—but into digital industrial policy and R&D projects, including a new major research centre on responsible and trustworthy AI and a Foundation Model Taskforce. A first observation is that this type of investment can be worthwhile, but not at the expense of adequately resourcing regulators facing the tall order of AI regulation.

The UK’s Prime Minister is clearly making a move to use ‘world-leadership in AI safety’ as a major plank of his re-election bid in the coming Fall. I am not only sceptical about this move and its international reception, but also increasingly concerned about a tendency to ‘regulate by Twitter’ and to take bullish approaches to regulatory and legal compliance that could well result in squandering a good part of the £100m set aside for the Taskforce.

In this blog, I offer some preliminary thoughts. Comments welcome!

Twitter announcements vs white paper?

During the preparation of our response to the AI public consultation, we had a moment of confusion. The Government published the white paper and an impact assessment supporting it, which primarily amount to doing nothing and maintaining the status quo (aka the AI regulatory gap) in the UK. However, there were increasing reports of the Prime Minister’s change of heart after the emergence of a ‘doomer’ narrative peddled by OpenAI’s CEO and others. At some point, the PM sent out a tweet that made us wonder whether the Government was changing policy and abandoning the approach of the white paper even before the end of the public consultation. This was the tweet.

We could not locate any document describing the ‘Safe strategy of AI’, so the only conclusion we could reach was that the ‘strategy’ was the short Twitter thread that followed that first tweet.

It was not only surprising that there was no detail, but also that there was no reference to the white paper or to any other official policy document. We were probably not the only ones confused about it (or so we hope!), as it is in general very confusing to have social media messaging pointing towards regulatory interventions completely outside the existing frameworks—including live public consultations by the government!

It is also confusing to see multiple different documents make reference to different things, and later documents somehow reframing what previous documents mean.

For example, the announcement of the Foundation Model Taskforce came only a few weeks after the publication of the white paper, but there was no mention of it in the white paper itself. Is it possible that the Government had put together a significant funding package and related policy in under a month? Leaving aside whether it is possible, the questions are why do things in this way, and how mature was the thinking behind the Taskforce?

For example, the initial announcement indicated that

The investment will build the UK’s ‘sovereign’ national capabilities so our public services can benefit from the transformational impact of this type of AI. The Taskforce will focus on opportunities to establish the UK as a world leader in foundation models and their applications across the economy, and acting as a global standard bearer for AI safety.

The funding will be invested by the Foundation Model Taskforce in foundation model infrastructure and public service procurement, to create opportunities for domestic innovation. The first pilots targeting public services are expected to launch in the next six months.

Less than two months later, the announcement of the appointment of the Taskforce chair (below) indicated that

… a key focus for the Taskforce in the coming months will be taking forward cutting-edge safety research in the run up to the first global summit on AI safety to be hosted in the UK later this year.

Bringing together expertise from government, industry and academia, the Taskforce will look at the risks surrounding AI. It will carry out research on AI safety and inform broader work on the development of international guardrails, such as shared safety and security standards and infrastructure, that could be put in place to address the risks.

Is it then a Taskforce and pot of money seeking to develop sovereign capabilities and to pilot public sector AI use, or a Taskforce seeking to develop R&D in AI safety? Can it be both? Is there money for both? Also, why steer the £100m Taskforce in this direction and simultaneously spend £31m on funding an academic-led research centre on ethical and trustworthy AI? Does the latter not encompass issues of AI safety? How will all of these investments and initiatives be coordinated to avoid duplication of effort or replication of regulatory gaps in the disparate consideration of regulatory issues?

Funding and collaboration opportunities announced via Twitter?

Things can get even more confusing or worrying (for me). Yesterday, the Government put out an official announcement and heavy Twitter-based PR to announce the appointment of the Chair of the Foundation Model Taskforce. This announcement raises a few questions. Why on Sunday? What was the rush? Also, what was the process used to select the Chair, if there was one? I have no questions on the profile and suitability of the appointed Chair (I have also not looked at them in detail), but I wonder … even if it is legally compliant to proceed without a formal process and an open call for expressions of interest, is this appropriate? Is the Government stretching the parallelism with the Vaccines Taskforce too far?

Relatedly, there has been no (or I have been unable to locate) official call for expressions of interest from those seeking to get involved with the Taskforce. However, once more, Twitter seems to have been the (pragmatic?) medium used by the newly appointed Chair of the Taskforce. On Sunday itself, this Twitter thread went out:

I find the last bit particularly shocking. A call for expressions of interest in participating in a project capable of spending up to £100m via Google Forms! At the time of writing, the form is here and its content is as follows:

I find this approach to AI regulation rather concerning and can also see quite a few ways in which the emerging work approach can lead to breaches of procurement law, subsidy controls, or recruitment processes (depending on whether expressions of interest are corporate or individual). I also wonder what sort of record-keeping will accompany all of this, so that there is adequate accountability for this expenditure. What is the rush?

Or rather, I know that the rush is simply politically driven and that this is another way in which public funds are put at risk for the wrong reasons. But for the entirely arbitrary deadline of the ‘world AI safety summit’ the PM wants to host in the UK in the Fall — preferably ahead of any general election, I would think — it is almost impossible to justify the change of gear between the ‘do nothing’ AI white paper and the ‘rush everything’ approach driving the Taskforce. I hope we will not end up with another set of enquiries and reports, such as those stemming from the PPE procurement scandal or the ventilator challenge, but it is hard to see how this can all be done in a legally compliant manner, and with the serenity, clarity of view and long-term thinking required of regulatory design. Even in the field of AI. Unavoidably, more to follow.

Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"

Together with colleagues at the Centre for Global Law and Innovation of the University of Bristol Law School, I submitted a response to the UK Government’s public consultation on its ‘pro-innovation’ approach to AI regulation. For an earlier assessment, see here.

The full submission is available at https://ssrn.com/abstract=4477368, and this is the executive summary:

The white paper ‘A pro-innovation approach to AI regulation’ (the ‘AI WP’) claims to advance a ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ model that leverages the capabilities and skills of existing regulators to foster AI innovation. This model, we are told, would be underpinned by a set of principles providing a clear, unified, and flexible framework improving upon the current ‘complex patchwork of legal requirements’ and striking ‘the right balance between responding to risks and maximising opportunities.’

In this submission, we challenge such claims in the AI WP. We argue that:

  • The AI WP does not advance a balanced and proportionate approach to AI regulation, but rather, an “innovation first” approach that caters to industry and sidelines the public. The AI WP primarily serves a digital industrial policy goal ‘to make the UK one of the top places in the world to build foundational AI companies’. The public interest is downgraded and building public trust is approached instrumentally as a mechanism to promote AI uptake. Such an approach risks breaching the UK’s international obligations to create a legal framework that effectively protects fundamental rights in the face of AI risks. Additionally, in the context of public administration, poorly regulated AI could breach due process rules, putting public funds at risk.

  • The AI WP does not embrace an agile regulatory approach, but active deregulation. The AI WP stresses that the UK ‘must act quickly to remove existing barriers to innovation’ without explaining how any of the existing safeguards are no longer required in view of identified heightened AI risks. Coupled with the “innovation first” mandate, this deregulatory approach risks eroding regulatory independence and the effectiveness of the regulatory regimes the AI WP claims to seek to leverage. A more nuanced regulatory approach that builds on, rather than threatens, regulatory independence is required.

  • The AI WP builds on shaky foundations, including the absence of a mapping of current regulatory remits and powers. This makes it near impossible to assess the effectiveness and comprehensiveness of the proposed approach, although there are clear indications that regulatory gaps will remain. The AI WP also presumes continuity in the legal framework, which ignores reforms currently promoted by Government and further reforms of the overarching legal regime repeatedly floated. It seems clear that some regulatory regimes will soon see their scope or stringency limited. The AI WP does not provide clear mechanisms to address these issues, which undermine its core claim that leveraging existing regulatory regimes suffices to address potential AI harms. This is perhaps particularly evident in the context of AI use for policing, which is affected by both the existence of regulatory gaps and limitations in existing legal safeguards.

  • The AI WP does not describe a full, workable regulatory model. Lack of detail on the institutional design to support the central function is a crucial omission. Crucial tasks are assigned to such central function without clarifying its institutional embedding, resourcing, accountability mechanisms, etc.

  • The AI WP foresees a government-dominated approach that further risks eroding regulatory independence, in particular given the “innovation first” criteria to be used in assessing the effectiveness of the proposed regime.

  • The principles-based approach to AI regulation suggested in the AI WP is undeliverable due to lack of detail on the meaning and regulatory implications of the principles, barriers to translation into enforceable requirements, and tensions with existing regulatory frameworks. The minimalistic legislative intervention entertained in the AI WP would not equip regulators to effectively enforce the general principles. Following the AI WP would also result in regulatory fragmentation and uncertainty and not resolve the identified problem of a ‘complex patchwork of legal requirements’.

  • The AI WP does not provide any route towards sufficiently addressing the digital capabilities gap, or towards mitigating new risks to capabilities, such as deskilling—which create significant constraints on the likely effectiveness of the proposed approach.

Full citation: A Charlesworth, K Fotheringham, C Gavaghan, A Sanchez-Graells and C Torrible, ‘Response to the UK’s March 2023 White Paper "A pro-innovation approach to AI regulation"’ (June 19, 2023). Available at SSRN: https://ssrn.com/abstract=4477368.

The challenges of researching digital technology regulation -- some quick thoughts (or a rant)

Keeping up with developments in digital technologies and their regulation is exhausting.

Whenever a technology becomes mainstream (looking at you, ChatGPT, but also looking at blockchain in the rear mirror, and web 2.0 slightly behind… etc) there is a (seemingly) steep learning curve for researchers interested in regulation to climb — sometimes to find little novelty in the regulatory challenges they pose.

It recently seems like those curves are coming closer and closer together, whichever route one takes to exploring tech regulation.

Yet, it is not only that debates and regulatory interventions shift rather quickly, but also that these are issues of such social importance that the (academic) literature around them has exploded. Any automated search will trigger daily alerts to new pieces of scholarship and analysis (of mixed quality and relevance). Not to mention news items, policy reports, etc. Sifting through them beyond a cursory look at the abstracts is a job in itself …

These elements of ‘moving tech targets’ and ‘exponentially available analysis’ make researching these areas rather challenging. And sometimes I wonder if it is even possible to do it (well).

Perhaps I am just a bit overwhelmed.

I am in the process of finalising the explanation of the methodology I have used for my monograph on procurement and digital technologies—as it is clear that I cannot simply get away with stating that it is a ‘technology-centred interdisciplinary legal method’ (which it is, though).

Chatting to colleagues about it (and with the UK’s REF-fuelled obsession with ‘4* rigour’ in the background), the question keeps coming up: how does one make sure to cover the relevant field, and how is anyone’s choice of sources not *ahem* capricious or random? In other words, how do you make sure you have not missed anything important? (I’ll try to sleep tonight, cheers!).

I am not sure I have a persuasive, good answer. I am also not sure that ‘comprehensiveness’ is a reasonable expectation of a well done literature review or piece of academic analysis (any more). If it is, barring automated and highly formalised approaches to ‘scoping the field’, I fear we may quickly presume there is no possible method in the madness. But that does not sit right with me. And I also do not think it is a case of invoking the ‘qualitative research’ label as a defence, as that means something different (and rigorous).

The challenge of expressing (and implementing) a defensible legal method in the face of such ‘moving tech targets’ and ‘exponentially available analysis’ is not minor.

And, on the other side of the coin, there is a lurking worry that whichever output results from this research will be lost in such an ocean of (electronic) academic papers and books—for, if everyone is struggling to sift through the materials and has ever-growing (Russian-doll-style) ‘to read’ folders as I do, will eyes ever be set on the research?

Perhaps method does not matter that much after all? (Not comforting, I know!).

Rant over.

"Can Procurement Be Used to Effectively Regulate AI?" [recording]

The recording and slides for yesterday’s webinar on ‘Can Procurement Be Used to Effectively Regulate AI?’ co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme are now available for catch up if you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Dr Aris Georgopoulos (Nottingham), Elizabeth "Liz" Chirico (Acquisition Innovation Lead at Office of the Deputy Assistant Secretary of the Army - Procurement) and Scott Simpson (Digital Transformation Lead, Department of Homeland Security Office of the Chief Procurement Officer - Procurement Innovation Lab) for really interesting discussion, and to all participants for their questions. Comments most welcome, as always.

Testing the limits of ChatGPT’s procurement knowledge (and stubbornness) – guest post by Džeina Gaile

Following up on the discussion of whether public sector use of ChatGPT should be banned, in this post Džeina Gaile* shares an interesting (and at points unnerving) personal experiment with the tool. Džeina asked a few easy questions on the topic of her PhD research (tender clarifications).

The answers – and the ‘hallucinations’, that is, the glaring mistakes – and the tone are worth paying attention to. I find the bit of the conversation on the very existence of Article 56 and the content of Article 56(3) Directive 2014/24/EU particularly (ahem) illuminating. Happy reading!

PS. If you take Džeina up on her provocation and run your own procurement experiment on ChatGPT (or equivalent), I will be delighted to publish it here as well.

Liar, liar, pants on fire – what ChatGPT did not teach me
about my own PhD research topic

DISCLAIMER: The views provided here are just the result of an experiment by some random procurement expert who is not a specialist in IT law or any other AI-related law field.

If we consider law as a form of art, as lawyers, words are our main instrument. Therefore, we have a special respect for language as well as the facts that our words represent. We know the liability that comes with the use of the wrong words. One problem with ChatGPT is - it doesn't. 

This brings us to an experiment that could be performed by anyone with at least basic knowledge of the internet and some in-depth knowledge of a specific field, or at least an idea of the information that could be tested on the web. What can you do? Ask ChatGPT (or equivalent) some questions you already know the answers to. It would be nice if the (expected) answers include some facts, numbers, or people you can find on Google. Just remember to double-check everything. And see how it goes.
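[Editor’s note: for readers who would rather script this kind of known-answer test than type questions into the chat window, here is a minimal sketch in Python. It assumes the OpenAI Python client (openai>=1.0) and an API key in the OPENAI_API_KEY environment variable; the model name and the questions are illustrative placeholders, and every answer still requires the manual verification against the actual legal texts that Džeina describes.]

# A minimal sketch of the known-answer test described above, assuming the
# OpenAI Python client (openai>=1.0) and an API key in the OPENAI_API_KEY
# environment variable. Model name and questions are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Questions whose correct answers are already known and can be checked against EUR-Lex.
questions = [
    "What does Article 56(3) of Directive 2014/24/EU allow contracting authorities to request?",
    "Which principles must such requests comply with under Directive 2014/24/EU?",
]

for question in questions:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat model could be used
        messages=[{"role": "user", "content": question}],
    )
    answer = response.choices[0].message.content
    # Print question and answer side by side; the verification step remains
    # manual, which is the whole point of the experiment.
    print(f"Q: {question}\nA: {answer}\n" + "-" * 40)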

My experiment was performed on May 3rd, 4th and 17th, 2023, mostly in the midst of yet another evening spent trying to do something PhD-related. (As you may know, student status upgrades your procrastination skills to a level you never even knew before, despite your age. That is how this article came about.)

I asked ChatGPT a few questions on my research topic for fun and possible insights. At the end of this article, you can see quite long excerpts from our conversation, where you will find that maybe you can get the right information (after being very persuasive with your questions!), but not always, as in the case of the May 4th and 17th interactions. And you can get a great many apologies along the way (if you are into that).[1]

However, such persuasion ought not to be necessary if the information is relatively easy to find, since, well, we have all used Google and it already knows how to find things. Also, you could call the answers given on May 4th and 17th misleading, or even pure lies. This, consequently, casts doubt on any information provided by this tool (at least at this moment), if we follow the human logic that simpler things (such as finding the right article or paragraph in law) are easier to do than complex things (such as giving an opinion on difficult legal issues). As can be seen from the chat, we don’t even know what ChatGPT’s true sources are or how it actually works when it tells you something that is not true (while still presenting it as a fact).

Maybe some magic words like “as far as I know” or “prima facie” in the answers would have earned my chatty friend a bit more empathy. The total certainty with which the information is provided also gives further reason for concern. What if I am a normal human being and don’t know the real answer, have forgotten or not noticed the disclaimer at the bottom of the chat (as happens with small-print texts), or don’t have the persistence to check the info? I may include the answers in my homework, essay, or even in my views on an issue at work—since, as you know, we are short of time and need everything done by yesterday. The path of least resistance is one of the most tempting. (And in the case of AI we should be aware of a thing inherent to humans called “anthropomorphizing”, i.e., attributing human form or personality to things not human, so we might trust something a bit more, or more easily, than we should.)

The reliability of the information provided by State institutions as well as lawyers has been one of the cornerstones of people’s belief in the justice system. Therefore, it could be concluded that either I had bad luck, or one should be very careful when introducing AI in state institutions. Such use should be limited to cases where only factual information is provided (with the possibility to see and check the sources) until the credibility of AI opinions can be reviewed and verified. At this moment you should believe the disclaimers of its creators and use AI resources with quite (legitimate) mistrust, and treat it somewhat like a child who has done something wrong but will not admit it, no matter how long you interrogate them. And don’t take it for something it is not, even if it sounds like you should listen to it.**

May 3rd, 2023

[Reminder: Article 56(3) of the Directive 2014/24/EU: Where information or documentation to be submitted by economic operators is or appears to be incomplete or erroneous or where specific documents are missing, contracting authorities may, unless otherwise provided by the national law implementing this Directive, request the economic operators concerned to submit, supplement, clarify or complete the relevant information or documentation within an appropriate time limit, provided that such requests are made in full compliance with the principles of equal treatment and transparency.]

[...]

[… a quite lengthy discussion about the discretion of the contracting authority to ask for the information ...]

[The author did not get into a discussion about the opinion of ChatGPT on this issue, because that was not the aim of the chat, however, this could be done in some other conversation.]

[…]

[… long explanation ...]

[...]

May 4th, 2023

[Editor’s note: apologies that some of the screenshots appear in a small font…].

[…]

Both links that ChatGPT gave are correct:

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32014L0024

https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32014L0024&from=EN

However, both citations are wrong.

May 17th, 2023

[As you will see, ChatGPT doesn’t give links anymore, so it could have learned a bit within these few weeks].

[Editor’s note: apologies again that the remainder of the screenshots appear in a small font…].

[...]

[Not to be continued.]

DŽEINA GAILE

My name is Džeina Gaile and I am a doctoral student at the University of Latvia. My research focuses on clarification of a submitted tender, but I am interested in many aspects of public procurement. Therefore, I am supplementing my knowledge as often as I can and have a Master of Laws in Public Procurement Law and Policy with Distinction from the University of Nottingham. I also have been practicing procurement and am working as a lawyer for a contracting authority. In a few words, a bit of a “procurement geek”. In my free time, I enjoy walks with my dog, concerts, and social dancing.

________________

** This article was reviewed by Grammarly. Still, I hope it will not tell anything to ChatGPT… [Editor’s note – the draft was then further reviewed by a human, yours truly].

[1] To be fair, I must stress that at the bottom of the chat page, there is a disclaimer: “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 3 Version” or “Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts. ChatGPT May 12 Version” later. And, when you join the tool, there are several announcements that this is a work in progress.


ChatGPT in the Public Sector -- should it be banned?

In ‘ChatGPT in the Public Sector – overhyped or overlooked?’ (24 Apr 2023), the Analysis and Research Team (ART) of the General Secretariat of the Council of the European Union provides a useful and accessible explanation of how ChatGPT works, as well as an interesting analysis of the risks and pitfalls of rushing to embed generative artificial intelligence (GenAI), and large language models (LLMs) in particular, in the functioning of the public administration.

The analysis stresses the risks stemming from ‘inaccurate, biased, or nonsensical’ GenAI outputs and, in particular, that ‘the key principles of public administration such as accountability, transparency, impartiality, or reliability need to be considered thoroughly in the [GenAI] integration process’.

The paper provides a helpful introduction to how LLMs work and their technical limitations. It then maps potential uses in the public administration, assesses the potential impact of their use on the European principles of public sector administration, and then suggests some measures to mitigate the relevant risks.

This analysis is helpful but, in my view, it is already captured by the presumption that LLMs are here to stay and that what regulators can do is just try to minimise their potential negative impacts—which implies accepting that there will remain unaddressed impacts. By referring to general principles of public administration, rather than eg the right to good administration under the EU Charter of Fundamental Rights, the analysis is also unnecessarily lenient.

I find this type of discourse dangerous and troubling because it facilitates the adoption of digital technologies that cannot meet current legal requirements and guarantees of individual rights. This is clear from the paper itself, although the implications of part of the analysis are not sufficiently explored, in my view.

The paper has a final section where it explicitly recognises that, while some risks might be mitigated by technological advancements, other risks are of a more structural nature and cannot be fully corrected despite best efforts. The paper then lists a very worrying panoply of such structural issues (at 16):

  • ‘This is the case for detecting and removing biases in training data and model outputs. Efforts to sanitize datasets can even worsen biases’.

  • ‘Related to biases is the risk of a perpetuation of the status quo. LLMs mirror the values, habits and attitudes that are present in their training data, which does not leave much space for changing or underrepresented societal views. Relying on LLMs that have been trained with previously produced documents in a public administration severely limits the scope for improvement and innovation and risks leaving the public sector even less flexible than it is already perceived to be’.

  • ‘The ‘black box’ issue, where AI models arrive at conclusions or decisions without revealing the process of how they were reached is also primarily structural’.

  • ‘Regulating new technologies will remain a cat-and-mouse game. Acceleration risk (the emergence of a race to deploy new AI as quickly as possible at the expense of safety standards) is also an area of concern’.

  • ‘Finally […] a major structural risk lies in overreliance, which may be bolstered by rapid technological advances. This could lead to a lack of critical thinking skills needed to adequately assess and oversee the model’s output, especially amongst a younger generation entering a workforce where such models are already being used’.

In my view, beyond the paper’s suggestion that the way forward is to maintain human involvement to monitor the way LLMs (mal)function in the public sector, we should be discussing the imposition of a ban on the adoption of LLMs (and other digital technologies) by the public sector unless it can be positively proven that their deployment will not affect individual rights and more diffuse public interests, and that any residual risks are adequately mitigated.

The current state of affairs is unacceptable in that the lack of regulation allows for a quickly accelerating accumulation of digital deployments that generate risks to social and individual rights and goods. The need to reverse this situation underlies my proposal to permission the adoption of digital technologies by the public sector. Unless we take a robust approach to slowing down and carefully considering the implications of public sector digitalisation, we may be undermining public governance in ways that will be very difficult or impossible to undo. It is not too late, but it may be soon.


Free registration open for two events on procurement and artificial intelligence

Registration is now open for two free events on procurement and artificial intelligence (AI).

First, a webinar where I will be participating in discussions on the role of procurement in contributing to the public sector’s acquisition of trustworthy AI, and the associated challenges, from an EU and US perspective.

Second, a public lecture where I will present the findings of my research project on digital technologies and public procurement.

Please scroll down for details and links to registration pages. All welcome!

1. ‘Can Procurement Be Used to Effectively Regulate AI?’ | Free online webinar
30 May 2023 2pm BST / 3pm CET-SAST / 9am EST (90 mins)
Co-organised by University of Bristol Law School and George Washington University Law School.

Artificial Intelligence (“AI”) regulation and governance is a global challenge that is starting to generate different responses in the EU, US, and other jurisdictions. Such responses are, however, rather tentative and politically contested. A full regulatory system will take time to crystallise and be fully operational. In the meantime, despite this regulatory gap, the public sector is quickly adopting AI solutions for a wide range of activities and public services.

This process of accelerated AI adoption by the public sector places procurement as the (involuntary) gatekeeper, tasked with ‘AI regulation by contract’, at least for now. The procurement function is expected to design tender procedures and contracts capable of attaining goals of AI regulation (such as trustworthiness, explainability, or compliance with data protection and human and fundamental rights) that are so far eluding more general regulation.

This webinar will provide an opportunity to take a hard look at the likely effectiveness of AI regulation by contract through procurement and its implications for the commercialisation of public governance, focusing on key issues such as:

  • The interaction between tender design, technical standards, and negotiations.

  • The challenges of designing, monitoring, and enforcing contractual clauses capable of delivering effective ‘regulation by contract’ in the AI space.

  • The tension between the commercial value of tailored contractual design and the regulatory value of default clauses and standard terms.

  • The role of procurement disputes and litigation in shaping AI regulation by contract.

  • The alternative regulatory option of establishing mandatory prior approval by an independent regulator of projects involving AI adoption by the public sector.

This webinar will be of interest to those working on or researching the digitalisation of the public sector and AI regulation in general, as the discussion around procurement gatekeeping mirrors the main issues arising from broader trends.

I will have the great opportunity of discussing my research with Aris Georgopoulos (Nottingham), Scott Simpson (Digital Transformation Lead at U.S. Department of Homeland Security), and Liz Chirico (Acquisition Innovation Lead at Office of the Deputy Assistant Secretary of the Army). Jessica Tillipman (GW Law) will moderate the discussion and Q&A.

Registration: https://law-gwu-edu.zoom.us/webinar/register/WN_w_V9s_liSiKrLX9N-krrWQ.

2. ‘AI in the public sector: can procurement promote trustworthy AI and avoid commercial capture?’ | Free in-person public lecture
4 July 2023 2pm BST, Reception Room, Wills Memorial Building, University of Bristol
Organised by University of Bristol Law School, Centre for Global Law and Innovation

The public sector is quickly adopting artificial intelligence (AI) to manage its interactions with citizens and in the provision of public services – for example, using chatbots in official websites, automated processes and call-centres, or predictive algorithms.

There are inherent high-stakes risks in this process of public governance digitalisation, such as bias and discrimination, unethical deployment, data and privacy risks, cyber security risks, or risks of technological debt and dependency on proprietary solutions developed by (big) tech companies.

However, as part of the UK Government’s ‘light touch’ ‘pro-innovation’ approach to digital technology regulation, the adoption of AI in the public sector remains largely unregulated. 

In this public lecture, I will present the findings of my research funded by the British Academy, analysing how, in this deregulatory context, the existing rules on public procurement fall short of protecting the public interest.

An alternative approach is required to create mechanisms of external independent oversight and mandatory standards to embed trustworthy AI requirements and to mitigate against commercial capture in the acquisition of AI solutions. 

Registration: https://www.eventbrite.co.uk/e/can-procurement-promote-trustworthy-ai-and-avoid-commercial-capture-tickets-601212712407.

External oversight and mandatory requirements for public sector digital technology adoption


I thought the time would never come, but the last piece of my book project puzzle is now more or less in place. After finding that procurement is not the right regulatory actor and does not have the best tools of ‘digital regulation by contract’, in this last draft chapter, I explore how to discharge procurement of the assigned digital regulation role to increase the likelihood of effective enforcement of desirable goals of public sector digital regulation.

I argue that this should be done through two inter-related regulatory interventions: developing (1) a regulator tasked with the external oversight of the adoption of digital technologies by the public sector, and (2) a suite of mandatory requirements binding both public entities seeking to adopt digital technologies and technology providers, covering both the digital technologies to be adopted by the public sector and the applicable governance framework.

Detailed analysis of these issues would require much more extensive treatment than this draft chapter can offer. The modest goal here is simply to stress the key attributes and functions that each of these two regulatory interventions should have to make a positive contribution to governing the transition towards a new model of public digital governance. In this blog post, I summarise the main arguments.

As ever, I would be most grateful for feedback: a.sanchez-graells@bristol.ac.uk. Especially as I will now turn my attention to seeing how the different pieces of the puzzle fit together, while I edit the manuscript for submission before end of July 2023.

Institutional deficit and risk of capture

In the absence of an alternative institutional architecture (or while it is put in place), procurement is expected to develop a regulatory gatekeeping role in relation to the adoption of digital technologies by the public sector, which is in turn expected to have norm-setting and market-shaping effects across the economy. This could be seen as a way of bypassing or postponing decisions on regulatory architecture.

However, earlier analysis has shown that the procurement function is not the right institution to which to assign a digital regulation role, as it cannot effectively discharge such a duty. This highlights the existence of an institutional deficit in the process of public sector digitalisation, as well as in relation to digital technology regulation more broadly. An alternative approach to institutional design is required, and it can be delivered through the creation of a notional ‘AI in Public Sector Authority’ (AIPSA).

Earlier analysis has also shown that there are pervasive risks of regulatory capture and commercial determination of the process of public sector digitalisation stemming from reliance on standards and benchmarks created by technology vendors or by bodies heavily influenced by the tech industry. AIPSA could safeguard against such risk through controls over the process of standard adoption. AIPSA could also guard against excessive experimentation with digital technologies by creating robust controls to counteract their policy irresistibility.

Overcoming the institutional deficit through AIPSA

The adoption of digital technologies in the process of public sector digitalisation creates regulatory challenges that require external oversight, as procurement is unable to effectively regulate this process. A particularly relevant issue concerns whether such oversight should be entrusted to a new regulator (broad approach), or whether it would suffice to assign new regulatory tasks to existing regulators (narrow approach).

I submit that the narrow approach is inadequate because it perpetuates regulatory fragmentation and can lead to undesirable spillovers or knock-on effects, whether the new regulatory tasks are assigned to data protection authorities, to (quasi)regulators with a ‘sufficiently close’ regulatory remit in relation to information and communications technologies (ICT) (such as eg the Agency for Digital Italy (AgID) or the Dutch Advisory Council on IT assessment (AcICT)), or to newly created centres of expertise in algorithmic regulation (eg the French PEReN). Such an ‘organic’ or ‘incremental’ approach to institutional development could overshadow important design considerations, as well as embed biases due to the institutional drivers of the existing (quasi)regulators.

To avoid these issues, I advocate a broader or more joined up approach in the proposal for AIPSA. AIPSA would be an independent authority with the statutory function of promoting overarching goals of digital regulation, and specifically tasked with regulating the adoption and use of digital technologies by the public sector, whether through in-house development or procurement from technology providers. AIPSA would also absorb regulatory functions in cognate areas, such as the governance of public sector data, and integrate work in areas such as cyber security. It would also serve a coordinating function with the data protection authority.

In the draft chapter, I stress three fundamental aspects of AIPSA’s institutional design: regulatory coherence, independence and expertise. Independence and expertise would be the two most crucial factors. AIPSA would need to be designed in a way that ensured both political and industry independence, with the issue of political independence having particular salience and requiring countervailing accountability mechanisms. Relatedly, the importance of digital capabilities to effectively exercise a digital regulation role cannot be overemphasised. It is not only important in relation to the active aspects of the regulatory role—such as control of standard setting or the permissioning or licensing of digital technology use (below)—but also in relation to the passive aspects of the regulatory role and, in particular, reactive engagement with industry. High levels of digital capability would be essential to allow AIPSA to effectively scrutinise claims from those seeking to influence its operation and decision-making, as well as to reduce AIPSA’s dependence on industry-provided information.

Safeguarding against regulatory capture and policy irresistibility

Regulating the adoption of digital technologies in the process of public sector digitalisation requires establishing the substantive requirements that such technology needs to meet, as well as the governance requirements needed to ensure its proper use. AIPSA’s role in setting mandatory requirements for public sector digitalisation would be twofold.

First, through an approval or certification mechanism, it would control the process of standardisation to neutralise risks of regulatory capture and commercial determination. Where no standards were susceptible of approval or certification, AIPSA would develop them.

Second, through a permissioning or licensing process, AIPSA would ensure that decisions on the adoption of digital technologies by the public sector are not driven by ‘policy irresistibility’, that they are supported by clear governance structures and draw on sufficient resources, and that adherence to the goals of digital regulation is sustained throughout the implementation and use of digital technologies by the public sector and subject to proactive transparency requirements.

The draft chapter provides more details on both issues.

If not AIPSA … then clearly not procurement

There can be many objections to the proposals developed in this draft chapter, which would still require further development. However, most of the objections would likely also apply to the use of procurement as a tool of digital regulation. The functions expected of AIPSA closely match those expected of the procurement function under the approach to ‘digital regulation by contract’. Challenges to AIPSA’s ability to discharge such functions would be applicable to any public buyer seeking to achieve the same goals. Similarly, challenges to the independence or accountability of AIPSA would apply equally to atomised decision-making by public buyers.

While the proposal is necessarily imperfect, I submit that it would improve upon the emerging status quo and that, in discharging procurement of the digital regulation role, it would make a positive contribution to the governance of the transition to a new model of digital public governance.

The draft chapter is available via SSRN: Albert Sanchez-Graells, ‘Discharging procurement of the digital regulation role: external oversight and mandatory requirements for public sector digital technology adoption’.

Procuring AI without understanding it. Way to go?

The UK’s Digital Regulation Cooperation Forum (DRCF) has published a report on Transparency in the procurement of algorithmic systems (for short, the ‘AI procurement report’). Some of DRCF’s findings in the AI procurement report are astonishing, and should attract significant attention. The one finding that should definitely not go unnoticed is that, according to DRCF, ‘Buyers can lack the technical expertise to effectively scrutinise the [algorithmic systems] they are procuring, whilst vendors may limit the information they share with buyers’ (at 9). While this is not surprising, the ‘normality’ with which this finding is reported evidences the simple fact that, at least in the UK, it is accepted that the AI field is dominated by technology providers, that all institutional buyers are ‘AI consumers’, and that regulators do not seem to see a need to intervene to rebalance the situation.

The report is not specifically about public procurement of AI, but its content is relevant to assessing the conditions surrounding the acquisition of AI by the public sector. First, the report covers algorithmic systems other than AI—that is, automation based on simpler statistical techniques—but the issues it raises can only be more acute in relation to AI than in relation to simpler algorithmic systems (as the report itself highlights, at 9). Second, the report does not make explicit whether the mix of buyers from which it draws evidence includes public as well as private buyers. However, given the public sector’s digital skills gap, there is no reason to believe that the limited knowledge and asymmetries of information documented in the AI procurement report are less acute for public buyers than private buyers.

Moreover, the AI procurement report goes as far as to suggest that public sector procurement is somewhat in a better position than private sector procurement of AI because there are multiple guidelines focusing on public procurement (notably, the Guidelines for AI procurement). Given the shortcomings in those guidelines (see here for earlier analysis), this can hardly provide any comfort.

The AI procurement report evidences that UK (public and private) buyers are procuring AI they do not understand and cannot adequately monitor. This is extremely worrying. The AI procurement report presents evidence gathered by DRCF in two workshops with 23 vendors and buyers of algorithmic systems in Autumn 2022. The evidence base is qualitative and draws from a limited sample, so it may need to be approached with caution. However, its findings are sufficiently worrying as to require a much more robust policy intervention than the proposals in the recently released White Paper ‘AI regulation: a pro-innovation approach’ (for discussion, see here). In this blog post, I summarise the findings of the AI procurement report I find most problematic and link this evidence to the failing attempt at using public procurement to regulate the acquisition of AI by the public sector in the UK.

Misinformed buyers with limited knowledge and no ability to oversee

In its report, DRCF stresses that ‘some buyers lacked understanding of [algorithmic systems] and could struggle to recognise where an algorithmic process had been integrated into a system they were procuring’, and that ‘[t]his issue may be compounded where vendors fail to note that a solution includes AI or its subset, [machine learning]’ (at 9). The report goes on to stress that ‘[w]here buyers have insufficient information about the development or testing of an [algorithmic system], there is a risk that buyers could be deploying an [algorithmic system] that is unlawful or unethical. This risk is particularly acute for high-risk applications of [algorithmic systems], for example where an [algorithmic system] determines a person's access to employment or housing or where the application is in a highly regulated sector such as finance’ (at 10). Needless to say, however, this applies to a much larger set of public sector areas of activity, and the problems are not limited to high-risk applications involving individual rights, but also to those that involve high stakes from a public governance perspective.

Similarly, DRCF stresses that while ‘vendors use a range of performance metrics and testing methods … without appropriate technical expertise or scrutiny, these metrics may give buyers an incomplete picture of the effectiveness of an [algorithmic system]’; ‘vendors [can] share performance metrics that overstate the effectiveness of their [algorithmic system], whilst omitting other metrics which indicate lower effectiveness in other areas. Some vendors raised concerns that their competitors choose the most favourable (i.e., the highest) performance metric to win procurement contracts‘, while ‘not all buyers may have the technical knowledge to understand which performance metrics are most relevant to their procurement decision’ (at 10). This demolishes any hope that buyers facing this type of knowledge gap and asymmetry of information can compare algorithmic systems in a meaningful way.

The issue is further compounded by the lack of standards and metrics. The report stresses this issue: ‘common or standard metrics do not yet exist within industry for the evaluation of [algorithmic systems]. For vendors, this can make it more challenging to provide useful information, and for buyers, this lack of consistency can make it difficult to compare different [algorithmic systems]. Buyers also told us that they would find more detail on the performance of the [algorithmic system] being procured helpful - including across a range of metrics. The development of more consistent performance metrics could also help regulators to better understand how accurate an [algorithmic system] is in a specific context’ (at 11).

Finally, the report also stresses that vendors have every incentive to withhold information from buyers, both because ‘sharing too much technical detail or knowledge could allow buyers to re-develop their product’ and because ‘they remain concerned about revealing commercially sensitive information to buyers’ (at 10). In that context, given the limited knowledge and understanding documented above, it can even be difficult for a buyer to ascertain which information it has not been given.

The DRCF AI procurement report then focuses on mechanisms that could alleviate some of the issues it identifies, such as standardisation, certification and audit mechanisms, as well as AI transparency registers. However, these mechanisms raise significant questions, not only in relation to their practical implementation, but also regarding the continued reliance on the AI industry (and thus, AI vendors) for the development of some of their foundational elements—and crucially, standards and metrics. To a large extent, the AI industry would be setting the benchmark against which its own processes, practices and performance are to be measured. Even if a third party is to carry out such benchmarking or compliance analysis in the context of AI audits, the cards can already be stacked against buyers.

Not the way forward for the public sector (in the UK)

The DRCF AI procurement report should give pause to anyone hoping that (public) buyers can drive the process of development and adoption of these technologies. The AI procurement report clearly evidences that buyers with knowledge disadvantages and information asymmetries are at the mercy of technology providers—and/or third-party certifiers (in the future). The evidence in the report clearly suggests that this is a process driven by technology providers and, more worryingly, that (most) buyers are in no position to critically assess and discipline vendor behaviour.

The question arises why any buyer would acquire and deploy a technology it does not understand and is in no position to adequately assess. But the hype and hard-selling surrounding AI, coupled with its abstract potential to generate significant administrative and operational advantages, seem too hard to resist, both for private sector entities seeking to gain an edge (or at least not lag behind competitors) in their markets, and for public sector entities faced with AI’s policy irresistibility.

In the public procurement context, the insights from DRCF’s AI procurement report stress that the fundamental imbalance between buyers and vendors of digital technologies undermines the regulatory role that public procurement is expected to play. Only a buyer that had equal or superior technical ability and that managed to force full disclosure of the relevant information from the technology provider would be in a position to (try to) dictate the terms of the acquisition and deployment of the technology, including through the critical assessment and, if needed, modification of emerging technical standards that could well fall short of the public interest embedded in the process of public sector digitalisation—though it would face significant limitations.

This is an ideal to which most public buyers cannot aspire. In fact, in the UK, the position is the reverse, and the current approach is to try to facilitate experimentation with digital technologies by public buyers with no knowledge or digital capability whatsoever—see the Crown Commercial Service’s Artificial Intelligence Dynamic Purchasing System (CCS AI DPS), which explicitly targets inexperienced and, to put it politely, digitally novice public buyers by stressing that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation’.

Given the evidence in the DRCF AI report, this approach can only inflate the number of public sector buyers at the mercy of technology providers. Especially because, while the CCS AI DPS tries to address some issues, such as ethical risks (though the effectiveness of this can also be queried), it makes clear that ‘quality, price and cultural fit (including social value) can be assessed based on individual customer requirements’. With ‘AI quality’ capturing all the problematic issues mentioned above (and, notably, AI performance), the CCS AI DPS is highly problematic.

If nothing else, the DRCF AI procurement report gives further credence to the need to change regulatory tack. Most importantly, the report evidences that there is a very real risk that public sector entities are currently buying AI they do not understand and are in no position to effectively control post-deployment. This risk needs to be addressed if the UK public is to trust the accelerating process of public sector digitalisation. As formulated elsewhere, this calls for a series of policy and regulatory interventions.

Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens requires new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector, and to address the lack of standards and metrics without reliance on their development by and within the AI industry. Primary legislation would need to be developed through statutory guidance of a much more detailed and actionable nature than eg the current Guidelines for AI procurement. These developed requirements could then be embedded into public contracts by reference, thus protecting public buyers from vendor standard cherry-picking, as well as providing a clear benchmark against which to assess tenders.

Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence. The primary role of AIPSA would be to constrain the process of adoption of AI by the public sector, especially where the public buyer lacks digital capacity and is thus at risk of capture or overpowering by technological vendors.

In that regard, and until sufficient in-house capability is built to ensure adequate understanding of the technologies being procured (especially in the case of complex AI), and adequate ability to manage digital procurement governance requirements independently, AIPSA would have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases, and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification in accordance with benchmarks set by AIPSA, or certification by AIPSA itself.

In parallel, it would also be necessary for the Government to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

None of this features in the recently released White Paper ‘AI regulation: a pro-innovation approach’. However, DRCF’s AI procurement report further evidences that these policy interventions are necessary. Otherwise, the UK will be a jurisdiction where the public sector acquires and deploys technology it does not understand and cannot control. Surely, this is not the way to go.

UK's 'pro-innovation approach' to AI regulation won't do, particularly for public sector digitalisation

Regulating artificial intelligence (AI) has become the challenge of the time. This is a crucial area of regulatory development and there are increasing calls—including from those driving the development of AI—for robust regulatory and governance systems. In this context, more details have now emerged on the UK’s approach to AI regulation.

Swimming against the tide, and seeking to diverge from the EU’s regulatory agenda and the EU AI Act, the UK announced a light-touch ‘pro-innovation approach’ in its July 2022 AI regulation policy paper. In March 2023, the same approach was supported by a Report of the Government Chief Scientific Adviser (the ‘GCSA Report’), and is now further developed in the White Paper ‘AI regulation: a pro-innovation approach’ (the ‘AI WP’). The UK Government has launched a public consultation that will run until 21 June 2023.

Given the relevance of the issue, it can be expected that the public consultation will attract a large volume of submissions, and that the ‘pro-innovation approach’ will be heavily criticised. Indeed, there is an on-going preparatory Parliamentary Inquiry on the Governance of AI that has already collected a wealth of evidence exploring the pros and cons of the regulatory approach outlined there. Moreover, initial reactions eg by the Public Law Project, the Ada Lovelace Institute, or the Royal Statistical Society have been (to different degrees) critical of the lack of regulatory ambition in the AI WP—while, as could be expected, think tanks closely linked to the development of the policy, such as the Alan Turing Institute, have expressed more positive views.

Whether the regulatory approach will shift as a result of the expected pushback is unclear. However, given that the AI WP follows the same deregulatory approach first suggested in 2018 and is strongly entrenched in political and policy terms—for the UK Government has self-assessed this approach as ‘world leading’ and claims it will ‘turbocharge economic growth’—it is doubtful that much will change as a result of the public consultation.

That does not mean we should not engage with the public consultation, quite the opposite. In the face of the UK Government’s dereliction of duty, or lack of ideas, it is more important than ever that there is a robust pushback against the deregulatory approach being pursued. This is especially so in the context of public sector digitalisation and the adoption of AI by the public administration and in the provision of public services, where the Government (unsurprisingly) is unwilling to create regulatory safeguards to protect citizens from its own action.

In this blogpost, I sketch my main areas of concern with the ‘pro-innovation approach’ in the GCSA Report and AI WP, which I will further develop for submission to the public consultation, building on earlier views. Feedback and comments would be gratefully received: a.sanchez-graells@bristol.ac.uk.

The ‘pro-innovation approach’ in the GCSA Report — squaring the circle?

In addition to proposals on the intellectual property (IP) regulation of generative AI, the opening up of public sector data, transport-related, or cyber security interventions, the GCSA Report focuses on ‘core’ regulatory and governance issues. The report stresses that regulatory fragmentation is one of the key challenges, as is the difficulty for the public sector in ‘attracting and retaining individuals with relevant skills and talent in a competitive environment with the private sector, especially those with expertise in AI, data analytics, and responsible data governance‘ (at 5). The report also further hints at the need to boost public sector digital capabilities by stressing that ‘the government and regulators should rapidly build capability and know-how to enable them to positively shape regulatory frameworks at the right time‘ (at 13).

Although the rationale is not very clearly stated, to bridge regulatory fragmentation and facilitate the pooling of digital capabilities from across existing regulators, the report makes a central proposal to create a multi-regulator AI sandbox (at 6-8). The report suggests that it could be convened by the Digital Regulatory Cooperation Forum (DRCF)—which brings together four key regulators (the Information Commissioner’s Office (ICO), Office of Communications (Ofcom), the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA))—and that DRCF should look at ways of ‘bringing in other relevant regulators to encourage join up’ (at 7).

The report recommends that the AI sandbox should operate on the basis of a ‘commitment from the participant regulators to make joined-up decisions on regulations or licences at the end of each sandbox process and a clear feedback loop to inform the design or reform of regulatory frameworks based on the insights gathered. Regulators should also collaborate with standards bodies to consider where standards could act as an alternative or underpin outcome-focused regulation’ (at 7).

Therefore, the AI sandbox would not only be multi-regulator, but also encompass (in some way) standard-setting bodies (presumably UK ones only, though), without issues of public-private interaction in decision-making implying the exercise of regulatory public powers, or issues around regulatory capture and risks of commercial determination, being considered at all. The report in general is extremely industry-orientated, eg in stressing in relation to the overarching pacing problem that ‘for emerging digital technologies, the industry view is clear: there is a greater risk from regulating too early’ (at 5), without this being in any way balanced with clear (non-industry) views that the biggest risk is actually in regulating too late and that we are collectively frog-boiling into a ‘runaway AI’ fiasco.

Moreover, confusingly, despite the fact that the sandbox would be hosted by DRCF (of which the ICO is a leading member), the GCSA Report indicates that the AI sandbox ‘could link closely with the ICO sandbox on personal data applications’ (at 8). The fact that the report is itself unclear as to whether eg AI applications with data protection implications should be subjected to one or two sandboxes, or the extent to which the general AI sandbox would need to be integrated with sectoral sandboxes for non-AI regulatory experimentation, already indicates the complexity and dubious practical viability of the suggested approach.

It is also unclear why multiple sector regulators should be involved in any given iteration of a single AI sandbox where there may be no projects within their regulatory remit and expertise. The alternative approach of having an open or rolling AI sandbox mechanism led by a single AI authority, which would then draw expertise and work in collaboration with the relevant sector regulator as appropriate on a per-project basis, seems preferable. While some DRCF members could be expected to have to participate in a majority of sandbox projects (eg CMA and ICO), others would probably have a much less constant presence (eg Ofcom, or certainly the FCA).

Remarkably, despite this recognition of the functional need for a centralised regulatory approach and a single point of contact (primarily for industry’s convenience), the GCSA Report implicitly supports the 2022 AI regulation policy paper’s approach to not creating an overarching cross-sectoral AI regulator. The GCSA Report tries to create a ‘non-institutionalised centralised regulatory function’, nested under DRCF. In practice, however, implementing the recommendation for a single AI sandbox would create the need for the further development of the governance structures of the DRCF (especially if it was to grow by including many other sectoral regulators), or whichever institution ‘hosted it’, or else risk creating a non-institutional AI regulator with the related difficulties in ensuring accountability. This would add a layer of deregulation to the deregulatory effect that the sandbox itself creates (see eg Ranchordas (2021)).

The GCSA Report seems to try to square the circle of regulatory fragmentation by relying on cooperation as a centralising regulatory device, but it does this solely for the industry’s benefit and convenience, without paying any consideration to the future effectiveness of the regulatory framework. This is hard to understand, given the report’s identification of conflicting regulatory constraints, or in its terminology ‘incentives’: ‘The rewards for regulators to take risks and authorise new and innovative products and applications are not clear-cut, and regulators report that they can struggle to trade off the different objectives covered by their mandates. This can include delivery against safety, competition objectives, or consumer and environmental protection, and can lead to regulator behaviour and decisions that prioritise further minimising risk over supporting innovation and investment. There needs to be an appropriate balance between the assessment of risk and benefit’ (at 5).

This not only frames risk-minimisation as a negative regulatory outcome (and further feeds into the narrative that precautionary regulatory approaches are somehow not legitimate because they run against industry goals—which deserves strong pushback, see eg Kaminski (2022)), but also shows a main gap in the report’s proposal for the single AI sandbox. If each regulator has conflicting constraints, what evidence (if any) is there that collaborative decision-making will reduce, rather than exacerbate, such regulatory clashes? Are decisions meant to be arrived at by majority voting or in any other way expected to deactivate (some or most) regulatory requirements in view of (perceived) gains in relation to other regulatory goals? Why has there been no consideration of eg the problems encountered by concurrency mechanisms in the application of sectoral and competition rules (see eg Dunne (2014), (2020) and (2021)), as an obvious and immediate precedent of the same type of regulatory coordination problems?

The GCSA Report also seems to assume that collaboration through the AI sandbox would be resource neutral for participating regulators, whereas it seems reasonable to presume that this additional layer of regulation (even if not institutionalised) would require further resources. And, in any case, there does not seem to be much consideration of the viability of asking resource-strapped regulators to create an AI sandbox where they can (easily) be out-skilled and over-powered by industry participants.

In my view, the GCSA Report already points at a significant weakness: the resistance to creating any new authorities despite the obvious functional need for centralised regulation. This is also one of the main weaknesses, if not the single biggest weakness, of the AI WP, alongside the lack of strategic planning around public sector digital capabilities, despite well-recognised challenges (see eg Committee of Public Accounts (2021)).

The ‘pro-innovation approach’ in the AI WP — a regulatory black hole, privatisation of AI regulation, or both?

The AI WP envisages an ‘innovative approach to AI regulation [that] uses a principles-based framework for regulators to interpret and apply to AI within their remits’ (para 36). It expects the framework to be ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ (para 37). As will become clear, however, such ‘innovative approach’ solely amounts to the formulation of high-level, broad, open-textured and incommensurable principles to inform a soft law push for the development of regulatory practices aligned with such principles in a highly fragmented and incomplete regulatory landscape.

The regulatory framework would be built on four planks (para 38): [i] an AI definition (paras 39-42); [ii] a context-specific approach (ie a ‘use-based’ approach, rather than a ‘technology-led’ approach, see paras 45-47); [iii] a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (paras 48-54); and [iv] new central functions to support regulators to deliver the AI regulatory framework (paras 70-73). In reality, though, there will only be two ‘pillars’ of the regulatory framework, and they do not involve any new institutions or rules. The AI WP vision thus largely seems to be that AI can be regulated in the UK in a world-leading manner without doing anything much at all.

AI Definition

The UK’s definition of AI will trigger substantive discussions, especially as it seeks to build it around ‘the two characteristics that generate the need for a bespoke regulatory response’: ‘adaptivity’ and ‘autonomy’ (para 39). Discussing the definitional issue is beyond the scope of this post but, on the specific identification of the ‘autonomy’ of AI, it is worth highlighting that this is an arguably flawed regulatory approach to AI (see Soh (2023)).

No new institutions

The AI WP makes clear that the UK Government has no plans to create any new AI regulator, either with a cross-sectoral (eg general AI authority) or sectoral remit (eg an ‘AI in the public sector authority’, as I advocate for). The Ministerial Foreword to the AI WP already stresses that ‘[t]o ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’ (at p2). The AI WP further stresses that ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators’ (para 47). This however seems to presume that a new cross-sector AI regulator would be unable to coordinate with existing regulators, despite the institutional architecture of the regulatory framework foreseen in the AI WP entirely relying on inter-regulator collaboration (!).

No new rules

There will also not be new legislation underpinning regulatory activity, although the Government claims that the AI WP, ‘alongside empowering regulators to take a lead, [is] also setting expectations‘ (at p3). The AI WP claims to develop a regulatory framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: [i] Safety, security and robustness; [ii] Appropriate transparency and explainability; [iii] Fairness; [iv] Accountability and governance; and [v] Contestability and redress (para 10). However, they will not be put on a statutory footing (initially); ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’ (para 11). While there is some detail on the intended meaning of these principles (see para 52 and Annex A), the principles necessarily lack precision and, worse, there is a conflation of the principles with other (existing) regulatory requirements.

For example, it is surprising that the AI WP describes fairness as implying that ‘AI systems should (sic) not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes‘ (emphasis added), and stresses the expectation ‘that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation’ (para 52). This encapsulates the risk that principles-based AI regulation ends up eroding compliance with and enforcement of current statutory obligations. A principle of AI fairness cannot modify or exclude existing legal obligations, and it should not risk doing so either.

Moreover, the AI WP suggests that, even if the principles are supported by a statutory duty for regulators to have regard to them, ‘while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle’ (para 58). This conflates two issues. On the one hand, the need for activity subjected to regulatory supervision to comply with all principles and, on the other, the need for a regulator to take corrective action in relation to any of the principles. It should be clear that regulators have a duty to ensure that all principles are complied with in their regulatory remit, which does not seem to entirely or clearly follow from the weaker duty to have due regard to the principles.

Perpetuating regulatory gaps, in particular regarding public sector digitalisation

As a consequence of the lack of creation of new regulators and the absence of new legislation, it is unclear whether the ‘regulatory strategy’ in the AI WP will have any real world effects within existing regulatory frameworks, especially as the most ambitious intervention is to create ‘a statutory duty on regulators requiring them to have due regard to the principles’ (para 12)—but the Government may decide not to introduce it if ‘monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary‘ (para 59).

However, what is already clear is that there is no new AI regulation on the horizon, despite the fact that the AI WP recognises that ‘some AI risks arise across, or in the gaps between, existing regulatory remits‘ (para 27), that ‘there may be AI-related risks that do not clearly fall within the remits of the UK’s existing regulators’ (para 64), and the obvious and worrying existence of high risks to fundamental rights and values (para 4 and paras 22-25). The AI WP is naïve, to say the least, in setting out that ‘[w]here prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators’ remits, updates to the Regulators’ Code, or additional legislative intervention’ (para 65).

Hoping that such risk identification and gap analysis will take place without assigning specific responsibility for it—and seeking to exempt the Government from such responsibility—seems a bit too much to ask. In fact, this is at odds with the graphic depiction of how the AI WP expects the system to operate. As noted in (1) in the graph below, it is clear that the identification of risks that are cross-cutting or new (unregulated) risks that warrant intervention is assigned to a ‘central risk function’ (more below), not the regulators. Importantly, the AI WP indicates that such central function ‘will be provided from within government’ (para 15 and below). Which then raises two questions: (a) who will have the responsibility to proactively screen for such risks, if anyone, and (b) how has the Government not already taken action to close the gaps it recognises exists in the current legal landscape?

AI WP Figure 2: Central risks function activities.

This perpetuates the current regulatory gaps, in particular in sectors without a regulator or with regulators with very narrow mandates—such as the public sector and, to a large extent, public services. Importantly, this approach does not create any prohibition of impermissible AI uses, nor does it set any (workable) minimum requirements for the deployment of AI in high-risk uses, especially in the public sector. The contrast with the EU AI Act could not be starker and, in this aspect in particular, UK citizens should be very worried that the UK Government is not committing to any safeguards in the way technology can be used in eg determining access to public services, or by the law enforcement and judicial system. More generally, it is very worrying that the AI WP does not foresee any safeguards in relation to the quickly accelerating digitalisation of the public sector.

Loose central coordination leading to AI regulation privatisation

Remarkably, and in a similar functional disconnect as that of the GCSA Report (above), the decision not to create any new regulator/s (para 15) is taken in the same breath as the AI WP recognises that the small coordination layer within the regulatory architecture proposed in the 2022 AI regulation policy paper (ie, largely, the approach underpinning the DRCF) has been heavily criticised (para 13). The AI WP recognises that ‘the DRCF was not created to support the delivery of all the functions we have identified or the implementation of our proposed regulatory framework for AI’ (para 74).

The AI WP also stresses how ‘[w]hile some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators. There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty’ (para 29), which points at a strong functional need for a centralised approach to AI regulation.

To try and mitigate those regulatory risks and shortcomings, the AI WP proposes the creation of ‘a number of central support functions’, such as [i] a central function monitoring the overall regulatory framework’s effectiveness and the implementation of the principles; [ii] central risk monitoring and assessment; [iii] horizon scanning; [iv] supporting testbeds and sandboxes; [v] advocacy, education and awareness-raising initiatives; or [vi] promoting interoperability with international regulatory frameworks (para 14, see also para 73). Cryptically, the AI WP indicates that ‘central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy’ (para 15). Quite how this can be effectively done outwith a clearly defined, adequately resourced and durable institutional framework is anybody’s guess. In fact, the AI WP recognises that this approach ‘needs to evolve’ and that Government needs to understand how ‘existing regulatory forums could be expanded to include the full range of regulators‘, what ‘additional expertise government may need’, and the ‘most effective way to convene input from across industry and consumers to ensure a broad range of opinions‘ (para 77).

While the creation of a regulator seems a rather obvious answer to all these questions, the AI WP has rejected it in unequivocal terms. Is the AI WP a U-turn waiting to happen? Is the mention that ‘[a]s we enter a new phase we will review the role of the AI Council and consider how best to engage expertise to support the implementation of the regulatory framework’ (para 78) a placeholder for an imminent project to rejig the AI Council and turn it into an AI regulator? What is the place and role of the Office for AI and the Centre for Data Ethics and Innovation in all this?

Moreover, the AI WP indicates that the ‘proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools’ (para 16). Relatedly, the AI WP relies on those mechanisms to avoid addressing issues of accountability across the AI life cycle, indicating that ‘[t]ools for trustworthy AI like assurance techniques and technical standards can support supply chain risk management. These tools can also drive the uptake and adoption of AI by building justified trust in these systems, giving users confidence that key AI-related risks have been identified, addressed and mitigated across the supply chain’ (para 84). Those tools are discussed in much more detail in part 4 of the AI WP (paras 106 ff). Annex A also creates a backdoor for technical standards to directly become the operationalisation of the general principles on which the regulatory framework is based, by explicitly identifying standards regulators may want to consider ‘to clarify regulatory guidance and support the implementation of risk treatment measures’.

This approach to the offloading of tricky regulatory issues to the emergence of private-sector led standards is simply an exercise in the transfer of regulatory power to those setting such standards, guidance and assurance techniques and, ultimately, a privatisation of AI regulation.

A different approach to sandboxes and testbeds?

The Government will take forward the GCSA recommendation to establish a regulatory sandbox for AI, which ‘will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies and refine this interaction where necessary’ (p2). This is thus bound to hardwire some of the issues mentioned above in relation to the GCSA proposal, and is also reflective of the general pro-industry approach of the AI WP, which is obvious in the framing that the regulators are expected to ‘support innovators directly and help them get their products to market’. Industrial policy seems to be shoehorned and mainstreamed across all areas of regulatory activity, at least in relation to AI (though it can then easily bleed into non-AI-related regulatory activities).

While the AI WP indicates the commitment to implement the AI sandbox recommended in the GCSA Report, it is by no means clear that the implementation will be in the way proposed in the report (ie a multi-regulator sandbox nested under DRCF, with an expectation that it would develop a crucial coordination and regulatory centralisation effect). The AI WP indicates that the Government still has to explore ‘what service focus would be most useful to industry’ in relation to AI sandboxes (para 96), but it sets out the intention to ‘focus an initial pilot on a single sector, multiple regulator sandbox’ (para 97), which diverges from the approach in the GCSA Report, which would be that of a sandbox for ‘multiple sectors, multiple regulators’. While the public consultation intends to gather feedback on which industry sector is the most appropriate, I would bet that the financial services sector will be chosen and that the ‘regulatory innovation’ will simply result in some closer cooperation between the ICO and FCA.

Regulator capabilities — AI regulation on a shoestring?

The AI WP turns to the issue of regulator capabilities and stresses that ‘While our approach does not currently involve or anticipate extending any regulator’s remit, regulating AI uses effectively will require many of our regulators to acquire new skills and expertise’ (para 102), and that the Government has ‘identified potential capability gaps among many, but not all, regulators’ (para 103).

To try to (start to) address this fundamental issue in the context of a devolved and decentralised regulatory framework, the AI WP indicates that the Government will explore, for example, whether it is ‘appropriate to establish a common pool of expertise that could establish best practice for supporting innovation through regulatory approaches and make it easier for regulators to work with each other on common issues. An alternative approach would be to explore and facilitate collaborative initiatives between regulators – including, where appropriate, further supporting existing initiatives such as the DRCF – to share skills and expertise’ (para 105).

While the creation of ‘common regulatory capacity’ has been advocated by the Alan Turing Institute, and while this (or inter-regulator secondments, for example) could be a short-term fix, this approach tries to address the obvious challenge of adequately resourcing regulatory bodies without a medium- and long-term strategy to build up the digital capability of the public sector, and thus risks perpetuating AI regulation on a shoestring. The governance and organisational implications arising from the creation of a common pool of expertise also need careful consideration, in particular as some of the likely dysfunctionalities are only marginally smaller than the current over-reliance on external consultants, or the ‘salami-slicing’ approach to regulatory and policy interventions that seems to bleed from the ’agile’ management of technological projects into the realm of regulatory activity, which however requires institutional memory and the embedding of knowledge and expertise.

Digital procurement, PPDS and multi-speed datafication -- some thoughts on the March 2023 PPDS Communication

The 2020 European strategy for data earmarked public procurement as a high priority area for the development of common European data spaces for public administrations. The 2020 data strategy stressed that

Public procurement data are essential to improve transparency and accountability of public spending, fighting corruption and improving spending quality. Public procurement data is spread over several systems in the Member States, made available in different formats and is not easily possible to use for policy purposes in real-time. In many cases, the data quality needs to be improved.

To address those issues, the European Commission was planning to ‘Elaborate a data initiative for public procurement data covering both the EU dimension (EU datasets, such as TED) and the national ones’ by the end of 2020, which would be ‘complemented by a procurement data governance framework’ by mid 2021.

With a 2+ year delay, details for the creation of the public procurement data space (PPDS) were disclosed by the European Commission on 16 March 2023 in the PPDS Communication. The procurement data governance framework is now planned to be developed in the second half of 2023.

In this blog post, I offer some thoughts on the PPDS, its functional goals, likely effects, and the quickly closing window of opportunity for Member States to support its feasibility through an ambitious implementation of the new procurement eForms at domestic level (on which see earlier thoughts here).

1. The PPDS Communication and its goals

The PPDS Communication sets some lofty ambitions aligned with those of the closely-related process of procurement digitalisation, which the European Commission in its 2017 Making Procurement Work In and For Europe Communication already saw as not only an opportunity ‘to streamline and simplify the procurement process’, but also ‘to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised … [to seize] a unique chance to reshape the relevant systems and achieve a digital transformation’ (at 11-12).

Following the same rhetoric of transformation, the PPDS Communication now stresses that ‘Integrated data combined with the use of state-of the-art and emerging analytics technologies will not only transform public procurement, but also give new and valuable insights to public buyers, policy-makers, businesses and interested citizens alike‘ (at 2). It goes further to suggest that ‘given the high number of ecosystems concerned by public procurement and the amount of data to be analysed, the impact of AI in this field has a potential that we can only see a glimpse of so far‘ (at 2).

The PPDS Communication claims that this data space ‘will revolutionise the access to and use of public procurement data:

  • It will create a platform at EU level to access for the first time public procurement data scattered so far at EU, national and regional level.

  • It will considerably improve data quality, availability and completeness, through close cooperation between the Commission and Member States and the introduction of the new eForms, which will allow public buyers to provide information in a more structured way.

  • This wealth of data will be combined with an analytics toolset including advanced technologies such as Artificial Intelligence (AI), for example in the form of Machine Learning (ML) and Natural Language Processing (NLP).’

A first comment or observation is that this rhetoric of transformation and revolution not only tends to create excessive expectations on what can realistically be delivered by the PPDS, but can also further fuel the ‘policy irresistibility’ of procurement digitalisation and thus eg generate excessive experimentation or investment into the deployment of digital technologies on the basis of such expectations around data access through PPDS (for discussion, see here). Policy-makers would do well to hold off on any investments and pilot projects seeking to exploit the data presumptively pooled in the PPDS until after its implementation. A closer look at the PPDS and the significant roadblocks towards its full implementation will shed further light on this issue.

2. What is the PPDS?

Put simply, the PPDS is a project to create a single data platform to bring into one place ‘all procurement data’ from across the EU—ie both data on above threshold contracts subjected to mandatory EU-wide publication through TED (via eForms from October 2023), and data on below threshold contracts, the publication of which may be required by the domestic laws of the Member States, or may be entirely voluntary for contracting authorities.

Given that above threshold procurement data is already (in the process of being) captured at EU level, the PPDS is very much about data on procurement not covered by the EU rules—which represents 80% of all public procurement contracts. As the PPDS Communication stresses

To unlock the full potential of public procurement, access to data and the ability to analyse it are essential. However, data from only 20% of all call for tenders as submitted by public buyers is available and searchable for analysis in one place [ie TED]. The remaining 80% are spread, in different formats, at national or regional level and difficult or impossible to re-use for policy, transparency and better spending purposes. In order (sic) words, public procurement is rich in data, but poor in making it work for taxpayers, policy makers and public buyers.

The PPDS thus intends to develop a ‘technical fix’ to gain a view on the below-threshold reality of procurement across the EU, by ‘pulling and pooling’ data from existing (and to be developed) domestic public contract registers and transparency portals. The PPDS is thus a mechanism for the aggregation of procurement data currently not available in (harmonised) machine-readable and structured formats (or at all).

As the PPDS Communication makes clear, it consists of four layers:
(1) a user interface layer (ie a website and/or app), underpinned by
(2) an analytics layer, which in turn is underpinned by
(3) an integration layer that brings together and minimally quality-assures
(4) the data layer, sourced from TED, Member State public contract registers (including those at sub-national level), and data from other sources (eg data on beneficial ownership).

The two top layers condense all potential advantages of the PPDS, with the analytics layer seeking to develop a ‘toolset including emerging technologies (AI, ML and NLP)‘ to extract data insights for a multiplicity of purposes (see below 3), and the top user interface seeking to facilitate differential data access for different types of users and stakeholders (see below 4). The two bottom layers, and in particular the data layer, are the ones doing all the heavy lifting. Unavoidably, without data, the PPDS risks being little more than an empty shell. As always, ‘no data, no fun’ (see below 5).

Importantly, the top three layers are centralised and the European Commission has responsibility (and funding) for developing them, while the bottom data layer is decentralised, with each Member State retaining responsibility for digitalising its public procurement systems and connecting its data sources to the PPDS. Member States are also expected to bear their own costs, although there is EU funding available through different mechanisms. This allocation of responsibilities follows the limited competence of the EU in this area of inter-administrative cooperation, which unfortunately heightens the risks of the PPDS becoming little more than an empty shell, unless Member States really take the implementation of eForms and the collaborative approach to the construction of the PPDS seriously (see below 6).
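To make the division of labour between these layers more concrete, the sketch below traces a notice through the four layers. It is purely illustrative and assumes nothing about the actual PPDS design: the schema, field names and functions (NoticeRecord, integrate, analyse, serve) are hypothetical, and the ‘analytics’ step is a trivial stand-in for the AI/ML/NLP toolset the Communication envisages.

# Purely illustrative sketch of the four-layer structure described above; all names
# (NoticeRecord, integrate, analyse, serve) are hypothetical and no actual PPDS schema
# or interface is reproduced here.
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class NoticeRecord:            # (4) data layer: one record per notice, from TED or a national register
    notice_id: str
    source: str                # eg 'TED' or 'ES-national-register'
    buyer: str
    cpv_code: Optional[str]    # classification of what is being bought
    value_eur: Optional[float]

def integrate(batches: Iterable[Iterable[NoticeRecord]]) -> list[NoticeRecord]:
    """(3) integration layer: pool records from all sources and apply minimal quality checks."""
    pooled = [record for batch in batches for record in batch]
    return [r for r in pooled if r.notice_id and r.buyer]   # drop records failing basic checks

def analyse(records: list[NoticeRecord]) -> dict[str, float]:
    """(2) analytics layer: a trivial stand-in for the envisaged AI/ML/NLP toolset."""
    totals: dict[str, float] = {}
    for r in records:
        if r.value_eur is not None:
            totals[r.source] = totals.get(r.source, 0.0) + r.value_eur
    return totals                # eg total recorded contract value per data source

def serve(insights: dict[str, float]) -> None:
    """(1) user interface layer: expose the insights to users (here, simply printed)."""
    for source, total in insights.items():
        print(f'{source}: {total:,.0f} EUR')

serve(analyse(integrate([
    [NoticeRecord('2023/S 001-000001', 'TED', 'Example Ministry', '72000000', 1_250_000.0)],
    [NoticeRecord('N-0001', 'ES-national-register', 'Example City Council', None, 80_000.0)],
])))

Even in this toy form, the point made in the text should be apparent: the top two layers can only ever be as good as the records that the decentralised data layer actually feeds into them.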

The PPDS Communication foresees a progressive implementation of the PPDS, with the goal of having ‘the basic architecture and analytics toolkit in place and procurement data published at EU level available in the system by mid-2023. By the end of 2024, all participating national publication portals would be connected, historic data published at EU level integrated and the analytics toolkit expanded. As of 2025, the system could establish links with additional external data sources’ (at 2). It will most likely be delayed, but that is not very important in the long run—especially as the already accrued delays are the ones that pose a significant limitation on the adequate rollout of the PPDS (see below 6).

3. PPDS’ expected functionality

The PPDS Communication sets expectations around the functionality that could be extracted from the PPDS by different agents and stakeholders.

For public buyers, in addition to reducing the burden of complying with different types of (EU-mandated) reporting, the PPDS Communication expects that ‘insights gained from the PPDS will make it much easier for public buyers to

  • team up and buy in bulk to obtain better prices and higher quality;

  • generate more bids per call for tenders by making calls more attractive for bidders, especially for SMEs and start-ups;

  • fight collusion and corruption, as well as other criminal acts, by detecting suspicious patterns;

  • benchmark themselves more accurately against their peers and exchange knowledge, for instance with the aim of procuring more green, social and innovative products and services;

  • through the further digitalisation and emerging technologies that it brings about, automate tasks, bringing about considerable operational savings’ (at 2).

This largely maps onto my analysis of likely applications of digital technologies for procurement management, assuming the data is there (see here).

The PPDS Communication also expects that policy-makers will ‘gain a wealth of insights that will enable them to predict future trends‘; that economic operators, and SMEs in particular, ‘will have an easy-to-use portal that gives them access to a much greater number of open call for tenders with better data quality‘, and that ‘Citizens, civil society, taxpayers and other interested stakeholders will have access to much more public procurement data than before, thereby improving transparency and accountability of public spending‘ (at 2).

Of all the expected benefits or functionalities, the most important ones are those attributed to public buyers and, in particular, the possibility of developing ‘category management’ insights (eg potential savings or benchmarking), systems of red flags in relation to corruption and collusion risks, and the automation of some tasks. However, unlocking most of these functionalities is not dependent on the PPDS, but rather on the existence of procurement data at the ‘right’ level.

For example, category management or benchmarking may be more relevant or adequate (as well as more feasible) at national than at supra-national level, and the development of systems of red flags can also take place at below-EU level, as can automation. Importantly, the development of such functionalities using pan-EU data, or data concerning more than one Member State, could bias the tools in a way that makes them less suited, or unsuitable, for deployment at national level (eg if the AI is trained on data concerning solely jurisdictions other than the one where it would be deployed).

In that regard, the expected functionalities arising from PPDS require some further thought and it can well be that, depending on implementation (in particular in relation to multi-speed datafication, as below 5), Member States are better off solely using domestic data than that coming from the PPDS. This is to say that PPDS is not a solid reality and that its enabling character will fluctuate with its implementation.

4. Differential procurement data access through PPDS

As mentioned above, the PPDS Communication stresses that ‘Citizens, civil society, taxpayers and other interested stakeholders will have access to much more public procurement data than before, thereby improving transparency and accountability of public spending’ (at 2). However, this does not mean that the PPDS will be (entirely) open data.

The Communication itself makes clear that ‘Different user categories (e.g. Member States, public buyers, businesses, citizens, NGOs, journalists and researchers) will have different access rights, distinguishing between public and non-public data and between participating Member States that share their data with the PPDS (PPDS members, …) and those that need more time to prepare’ (at 8). Relatedly, ‘PPDS members will have access to data which is available within the PPDS. However, even those Member States that are not yet ready to participate in the PPDS stand to benefit from implementing the principles below, due to their value for operational efficiency and preparing for a more evidence-based policy’ (at 9). This raises two issues.

First, and rightly, the Communication makes clear that the PPDS moves away from a model of ‘fully open’ or ‘open by default’ procurement data, and that access to the PPDS will require differential permissioning. This is the correct approach. Regardless of the future procurement data governance framework, it is clear that the emerging thicket of EU data governance rules ‘requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions’ (see here). This will however raise significant issues for the implementation of the PPDS, as it will generate some constraints or disincentives for an ambitious implementation of eForms at national level (see below 6).
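What such differential permissioning could look like in practice is sketched below in a deliberately simplified form. The tiers, roles and field names are hypothetical illustrations only; the PPDS Communication does not specify how access rights will be technically enforced.

# Hypothetical illustration of multi-tiered access to procurement data; the roles,
# tiers and field names below are invented for the example and do not reflect the
# (still undefined) PPDS access model.
PUBLIC_FIELDS = {'notice_id', 'buyer', 'cpv_code', 'award_date'}
MEMBER_FIELDS = PUBLIC_FIELDS | {'value_eur', 'winning_tenderer'}
FULL_FIELDS = MEMBER_FIELDS | {'price_breakdown'}

ACCESS_TIERS = {
    'citizen': PUBLIC_FIELDS,              # open data only
    'ppds_member_state': MEMBER_FIELDS,    # Member States sharing their data with the PPDS
    'commission': FULL_FIELDS,             # oversight and enforcement uses
}

def view_for(record: dict, role: str) -> dict:
    """Return only the fields that the given role is allowed to see."""
    allowed = ACCESS_TIERS.get(role, PUBLIC_FIELDS)   # default to the public tier
    return {field: value for field, value in record.items() if field in allowed}

notice = {
    'notice_id': '2023/S 001-000001',
    'buyer': 'Example City Council',
    'cpv_code': '72000000',
    'award_date': '2023-03-16',
    'value_eur': 1_250_000,
    'winning_tenderer': 'ACME Ltd',
    'price_breakdown': 'not publicly disclosed',
}

print(view_for(notice, 'citizen'))             # public fields only
print(view_for(notice, 'ppds_member_state'))   # adds contract value and winning tenderer

The point of the sketch is simply that ‘who sees what, and when’ becomes a design decision to be made field by field, which is precisely where the implementation difficulties flagged in the text arise.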

Second, and less clearly, the PPDS Communication evidences that not all Member States will automatically have equal access to PPDS data. The design seems to be such that Member States that do not feed data into PPDS will not have access to it. While this could be conceived as an incentive for all Member States to join PPDS, this outcome is by no means guaranteed. As above (3), it is not clear that Member States will be better off—in terms of their ability to extract data insights or to deploy digital technologies—by having access to pan-EU data. The main benefit resulting from pan-EU data only accrues collectively and, primarily, by means of facilitating oversight and enforcement by the European Commission. From that perspective, the incentives for PPDS participation for any given Member State may be quite warped or internally contradictory.

Moreover, given that plugging into PPDS is not cost-free, a Member State that developed a data architecture not immediately compatible with PPDS may well wonder whether it made sense to shoulder the additional costs and risks. From that perspective, it can only be hoped that the existence of EU funding and technical support will be maximised by the European Commission to offload that burden from the (reluctant) Member States. However, even then, full PPDS participation by all Member States will still not dispel the risk of multi-speed datafication.

5. No data, no fun — and multi-speed datafication

Related to the risk that some EU Member States will become PPDS members and others not, there is a risk (or rather, a reality) that not all PPDS members will equally contribute data—thus creating multi-speed datafication, even within the Member States that opt in to the PPDS.

First, the PPDS Communication makes it clear that ‘Member States will remain in control over which data they wish to share with the PPDS (beyond the data that must be published on TED under the Public Procurement Directives)‘ (at 7). It further specifies that ‘With the eForms, it will be possible for the first time to provide data in notices that should not be published, or not immediately. This is important to give assurance to public buyers that certain data is not made publicly available or not before a certain point in time (e.g. prices)’ (at 7, fn 17).

This means that each Member State will only have to plug whichever data it captures and decides to share into the PPDS. It seems plain to see that this will result in different approaches to data capture, multiple levels of granularity, and varying approaches to restricting access to the data in the different Member States, especially bearing in mind that ‘eForms are not an “off the shelf” product that can be implemented only by IT developers. Instead, before developers start working, procurement policy decision-makers have to make a wide range of policy decisions on how eForms should be implemented’ in the different Member States (see eForms Implementation Handbook, at 9).

Second, the PPDS Communication is clear (in a footnote) that ‘One of the conditions for a successful establishment of the PPDS is that Member States put in place automatic data capture mechanisms, in a first step transmitting data from their national portals and contract registers’ (at 4, fn 10). This implies that Member States may need to move away from manually inputted information and that those seeking to create new mechanisms for automatic procurement data capture can take an incremental approach, which is very much baked into the PPDS design. This relates, for example, to the distinction between pre- and post-award procurement data, with pre-award data subjected to higher demands under EU law. It also relates to above and below threshold data, as only above threshold data is subjected to mandatory eForms compliance.
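As a purely illustrative aside, the combination of both points (automatic data capture, plus per-field control over what is published and when) can be pictured along the following lines. The field names and the 'publish_after' convention are hypothetical; the real eForms schema and any PPDS transmission interface are not reproduced here.

# Hypothetical sketch only: a national portal deciding, field by field, which
# captured data it shares for publication at a given date. The schema is invented.
from datetime import date

def shareable_fields(notice: dict, today: date) -> dict:
    """Keep only the fields that are unrestricted or past their publication date."""
    shared = {}
    for field, spec in notice.items():
        publish_after = spec.get('publish_after')       # None means no restriction
        if publish_after is None or publish_after <= today:
            shared[field] = spec['value']
    return shared

captured_notice = {
    'notice_id': {'value': 'N-0001'},
    'buyer': {'value': 'Example Regional Authority'},
    # the price is captured automatically, but flagged not to be published until 2024
    'price_eur': {'value': 980_000, 'publish_after': date(2024, 1, 1)},
}

print(shareable_fields(captured_notice, date(2023, 10, 25)))   # price withheld
print(shareable_fields(captured_notice, date(2024, 2, 1)))     # price now included

If each Member State makes these capture and publication decisions differently, the records arriving in the data layer will differ in granularity and completeness, which is exactly the multi-speed datafication problem discussed here.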

In the end, the extent to which a (willing) Member State will contribute data to the PPDS depends on its decisions on eForms implementation, which should be well underway given the October 2023 deadline for mandatory use (for above threshold contracts). Crucially, Member States contributing more data may feel let down when no comparable data is contributed to PPDS by other Member States, which can well operate as a disincentive to contribute any further data, rather than as an incentive for the others to match up that data.

6. Ambitious eForms implementation as the PPDS’ Achilles heel

As the analysis above has shown, the viability of the PPDS and its fitness for purpose (especially for EU-level oversight and enforcement purposes) crucially depend on the Member States deciding to take an ambitious approach to the implementation of eForms, not solely by maximising their flexibility for voluntary uses (as discussed here) but, crucially, by extending their mandatory use (under national law) to all below threshold procurement. It is now also clear that there is a need for as much homogeneity as possible in the implementation of eForms in order to guarantee that the information plugged into the PPDS is comparable—an aspect of data quality that the PPDS Communication does not seem to have considered at all.

It seems that, due to competing timings, this poses a bit of a problem for the rollout of the PPDS. While eForms need to be fully implemented domestically by October 2023, the PPDS Communication suggests that the connection of national portals will be a matter for 2024, as the first part of the project will concern the top two layers and data connection will follow (or, at best, be developed in parallel). Somehow, it feels like the PPDS is being built without a strong enough foundation. It would be a shame (to put it mildly) if Member States having completed a transition to eForms by October 2023 were dissuaded from a second transition into a more ambitious eForms implementation in 2024 for the purposes of the PPDS.

Given that the most likely approach to eForms implementation is rather minimalistic, it can well be that the PPDS results in not much more than an empty shell with fancy digital analytics limited to very superficial uses. In that regard, the two-year delay in progressing the PPDS has created a very narrow (and quickly dwindling) window of opportunity for Member States to engage with an ambitious process of eForms implementation.

7. Final thoughts

It seems to me that limited and slow progress will be attained under the PPDS in coming years. Given the undoubted value of harnessing procurement data, I sense that Member States will progress domestically, but primarily in specific settings such as that of their central purchasing bodies (see here). However, whether they will be onboarded into PPDS as enthusiastic members seems less likely.

The scenario seems to resemble limited voluntary cooperation in other areas (eg interoperability; for discussion see here). It may well be that the logic of EU competence allocation required this tentative step as a first move towards a more robust and proactive approach by the Commission in a few years, on grounds that the goal of creating the European data space could not be achieved through this less interventionist approach.

However, given the speed at which digital transformation could take place (and is taking place in some parts of the EU), and the rhetoric of transformation and revolution that keeps being used in this policy area, I can’t but feel let down by the approach in the PPDS Communication, which started with the decision to build the eForms on the existing regulatory framework, rather than more boldly seeking a reform of the EU procurement rules to facilitate their digital fitness.

Should FTAs open and close (or only open) international procurement markets?

I have recently had some interesting discussions on the role of Free Trade Agreements (FTAs) in liberalising procurement-related international trade. The standard analysis is that FTAs serve the purpose of reciprocally opening up procurement markets to the economic operators of the signatory parties, and that the parties negotiate them on the basis of reciprocal concessions so that the market access given by A to economic operators from B roughly equates that given by B to economic operators from A (or is offset by other concessions from B in other chapters of the FTA with an imbalance in A’s favour).

Implicitly, this analysis assumes that A’s and B’s markets are (relatively) closed to third party economic operators, and that they will remain as such. The more interesting question is, though, whether FTAs should also close procurement markets to non-signatory parties in order to (partially) protect the concessions mutually granted, as well as to put pressure for further procurement-related trade liberalisation.

Let’s imagine that A, a party with several existing FTAs with third countries covering procurement, manages to negotiate the first FTA signed by B liberalising the latter’s procurement markets. It could seem that economic operators from A would have preferential access to B’s markets over any other economic operators (other than B’s, of course).

However, it can well be that, in practice, once the protectionist boat has sailed, B decides to entertain tenders coming from economic operators from C, D … Z for, once B’s domestic industries are not protected, B’s public buyers may well want to browse through the entire catalogue offered by the world market—especially if A does not have the most advanced industry for a specific type of good, service or technology (and I have a hunch this may well be a future scenario concerning digital technologies and AI in particular).

A similar issue can well arise where B already has other FTAs covering procurement and this generates a situation where it is difficult or complex for B’s public buyers to assess whether an economic operator from X does or does not have guaranteed market access under the existing FTAs, which can well result in B’s public buyers granting access to economic operators from any origin to avoid legal risks resulting from an incorrect analysis of the existing legal commitments (once open for some, de facto open for all).

I am sure there are more situations where the apparent preferential access given by B to A in the notional FTA can be quickly eroded despite assumptions on how international trade liberalisation operates under FTAs. This thus raises the question of whether A should include in its FTAs a clause binding B (and itself!) to unequal treatment (ie exclusion) of economic operators not covered by FTAs (either existing or future) or multilateral agreements. In that way, the concessions given by B to A may be more meaningful and long-lasting, or at least put pressure on third countries to participate in bilateral (and multilateral — looking at the WTO GPA) procurement-related liberalisation efforts.

In the EU’s specific case, the adoption of such requirements in its FTAs covering procurement would be aligned with the policy underlying the 2019 guidelines on third country access to procurement markets, the International Procurement Instrument, and the Foreign Subsidies Regulation.

It may be counter-intuitive that instruments of trade liberalisation should seek to close (or rather keep closed) some markets, but I think this is an issue where FTAs could be used more effectively not only to bilaterally liberalise trade, but also to generate further dynamics of trade liberalisation—or at least to avoid the erosion of bilateral commitments in situations of regulatory complexity or market dynamics pushing for ‘off-the-books’ liberalisation through the practical acceptance of tenders coming from anywhere.

This is an issue I would like to explore further after my digital tech procurement book, so I would be more than interested in thoughts and comments!