Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law

Now that the (more than likely) final text of the EU AI Act is available, and building on the analysis in my now officially published monograph Digital Technologies and Public Procurement (OUP 2024), I have put together my assessment of its impact on the procurement of AI under EU law and uploaded on SSRN the new paper: ‘Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law’. The abstract is as follows:

EU Member States are increasingly experimenting with Artificial Intelligence (AI), but the acquisition and deployment of AI by the public sector is currently largely unregulated. This puts public procurement in the awkward position of a regulatory gatekeeper—a role it cannot effectively carry out. This article provides an overview of recent EU developments on the public procurement of AI. It reflects on the narrow scope of application and questionable effectiveness of tools linked to the EU AI Act, such as technical standards or model contractual clauses, and highlights broader challenges in the use of procurement law and practice to regulate the adoption and use of ‘trustworthy’ AI by the public sector. The paper stresses the need for an alternative regulatory approach.

The paper can be freely downloaded: A Sanchez-Graells, ‘Public Procurement of Artificial Intelligence: recent developments and remaining challenges in EU law’ (January 25, 2024). To be published in LTZ (Legal Tech Journal) 2/2024: https://ssrn.com/abstract=4706400.

As this will be an area of contention and continuous development, comments are most welcome!


Governing the Assessment and Taking of Risks in Digital Procurement Governance

In a previous blog post, I explored the main governance risks and legal obligations arising from the adoption of digital technologies, which revolve around data governance, algorithmic transparency, technological dependency, technical debt, cybersecurity threats, the risks stemming from the long-term erosion of the skills base in the public sector, and difficult trade-offs due to the uncertainty surrounding immature and still changing technologies within an also evolving regulatory framework. To address such risks and ensure compliance with the relevant governance obligations, I stressed the need to embed a comprehensive mechanism of risk assessment in the process of technological adoption.

In a new draft chapter (num 9) for my book project, I analyse how to embed risk assessments in the initial stages of decision-making processes leading to the adoption of digital solutions for procurement governance, and how to ensure that they are iterated throughout the lifecycle of use of digital technologies. To do so, I critically review the model of AI risk regulation that is emerging in the EU and the UK, which is based on self-regulation and self-assessment. I consider its shortcomings and how to strengthen the model, including the possibility of subjecting the process of technological adoption to external checks. The analysis converges with a broader proposal for institutionalised regulatory checks on the adoption of digital technologies by the public sector that I will develop more fully in another part of the book.

This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Governing the Assessment and Taking of Risks in Digital Procurement Governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4282882.

AI Risk Regulation

The emerging (global) model of AI regulation is risk-based—as opposed to a strict precautionary approach. This implies an assumption that ‘a technology will be adopted despite its harms’. It primarily means accepting that technological solutions may (or will) generate (some) negative impacts on public and private interests, even if it is not known when or how those harms will arise, or how extensive they will be. AI harms are unique, as they are ‘long-term, low probability, systemic, and high impact’, and ‘AI both poses “aggregate risks” across systems and low probability but “catastrophic risks to society”’ [for discussion, see Margot E Kaminski, ‘Regulating the risks of AI’ (2023) 103 Boston University Law Review, forthcoming].

This should thus trigger careful consideration of the ultimate implications of AI risk regulation, and it counsels in favour of a robust regulatory approach—including to the governance of the risk regulation mechanisms put in place, which may well require external controls, potentially by an independent authority. By contrast, the emerging model of AI risk regulation in the context of procurement digitalisation in the EU and the UK leaves the adoption of digital technologies by public buyers largely unregulated and only subject to voluntary measures, or to open-ended obligations in areas without clear impact assessment standards (which reduces the prospect of effective mandatory enforcement).

Governance of Procurement Digitalisation in the EU

Despite the emergence of a quickly expanding set of EU digital law instruments imposing a patchwork of governance obligations on public buyers, whether or not they adopt digital technologies (see here), the primary decision whether to adopt digital technologies is not subject to any specific constraints, and the substantive obligations that follow from the diverse EU law instruments tend to refer to open-ended standards that require advanced technical capabilities to operationalise them. This would not be altered by the proposed EU AI Act.

Procurement-related AI uses are classified as minimal risk under the EU AI Act, which leaves them subject only to voluntary self-regulation via codes of conduct—yet to be developed. Such codes of conduct should encourage voluntary compliance with the requirements applicable to high-risk AI uses—such as risk management systems, data and data governance requirements, technical documentation, record-keeping, transparency, or accuracy, robustness and cybersecurity requirements—‘on the basis of technical specifications and solutions that are appropriate means of ensuring compliance with such requirements in light of the intended purpose of the systems.’ This seems to introduce a further element of proportionality or ‘adaptability’ requirement that could well water down the requirements applicable to minimal risk AI uses.

Importantly, while it is possible for Member States to draw up such codes of conduct, the EU AI Act would pre-empt Member States from going further and mandating compliance with specific obligations (eg by imposing a blanket extension of the governance requirements designed for high-risk AI uses) across their public administrations. The emergent EU model is thus clearly limited to the development of voluntary codes of conduct, and their likely content, while yet unknown, seems unlikely to impose the same standards as those applicable to the adoption of high-risk AI uses.

Governance of Procurement Digitalisation in the UK

Despite its deliberately light-touch approach to AI regulation and its active efforts to deviate from the EU, the UK is relatively advanced in the formulation of voluntary standards to govern procurement digitalisation. Indeed, the UK has adopted guidance for the use of AI in the public sector and for AI procurement, and is currently piloting an algorithmic transparency standard (see here). The UK has also adopted additional guidance in the Digital, Data and Technology Playbook and the Technology Code of Practice. Remarkably, despite acknowledging the need for risk assessments—and even linking their conduct to the spend approvals required for the acquisition of digital technologies by central government organisations—none of these instruments provides clear standards on how to assess (and mitigate) risks related to the adoption of digital technologies.

Thus, despite the proliferation of guidance documents, the substantive assessment of governance risks in digital procurement remains insufficiently addressed and left to undefined risk assessment standards and practices. The only exception concerns cyber security assessments, given the consolidated approach and guidance of the National Cyber Security Centre. This lack of precision in the substantive requirements applicable to data and algorithmic impact assessments clearly constrains the likely effectiveness of the UK’s approach to embedding technology-related impact assessments in the process of adoption of digital technologies for procurement governance (and, more generally, for public governance). In the absence of clear standards, data and algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also increase the need to access specialist expertise to design and carry out the assessments. Developing such standards and creating an effective institutional mechanism to ensure compliance therewith thus remain a challenge.

The Need for Strengthened Digital Procurement Governance

Both in the EU and the UK, the emerging model of AI risk regulation leaves digital procurement governance to compliance with voluntary measures such as (future) codes of conduct or transparency standards, or imposes open-ended obligations in areas without clear standards (which reduces the prospect of effective mandatory enforcement). This follows general trends of AI risk regulation and evidences the emergence of a (sub)model highly dependent on self-regulation and self-assessment. This approach is rather problematic.

Self-Regulation: Outsourcing Impact Assessment Regulation to the Private Sector

The absence of mandatory standards for data and algorithmic impact assessments, as well as the embedded flexibility in the standards for cyber security, are bound to outsource the setting of the substantive requirements for those impact assessments to private vendors offering solutions for digital procurement governance. With limited public sector digital capability preventing a detailed specification of the applicable requirements, it is likely that these will be limited to a general obligation for tenderers to provide an impact assessment plan, perhaps by reference to emerging (international private) standards. This would imply the outsourcing of standard setting for risk assessments to private standard-setting organisations and, in the absence of those standards, to the tenderers themselves. This generates a clear and problematic risk of regulatory capture. Moreover, this process of outsourcing, or excessive reliance on private agents to commercially determine impact assessment requirements, is not sufficiently exposed to scrutiny and contestation.

Self-Assessment: Inadequacy of Mechanisms for Contestability and Accountability

Public buyers will rarely develop the relevant technological solutions themselves, but will rather acquire them from technology providers. In that case, the duty to carry out the self-assessment will (or should) be cascaded down to the technology provider through contractual obligations. This would place the technology provider as ‘first party’ and the public buyer as ‘second party’ in relation to assuring compliance with the applicable obligations. In a setting of limited public sector digital capability, and in part as a result of a lack of clear standards providing an applicable benchmark (as above), the self-assessment of compliance with risk management requirements will either be de facto outsourced to private vendors (through a lack of challenge of their practices), or carried out by public buyers with limited capabilities (eg during the oversight of contract implementation). Even where public buyers have the digital capabilities required to carry out a more thorough analysis, they lack independence. ‘Second party’ assurance models unavoidably raise questions about their integrity due to the conflicting interests of the assurance provider who wants to use the system (ie the public buyer).

This ‘second party’ assurance model does not include adequate challenge mechanisms, despite efforts to disclose (parts of) the relevant self-assessments. Such disclosures are constrained by general problems with ‘comply or explain’ information-based governance mechanisms, with the emerging model showing design features that have proven problematic in other contexts (such as corporate governance and financial market regulation). Moreover, there is no clear mechanism to contest the decisions to adopt digital technologies revealed by the algorithmic disclosures. In many cases, shortcomings in the risk assessments and the related minimisation and mitigation measures will only become observable after the materialisation of the underlying harms. For example, the effects of the adoption of a defective digital solution for decision-making support (eg a recommender system) will only emerge in relation to challengeable decisions in subsequent procurement procedures that rely on that solution. At that point, undoing the effects of the use of the tool may be impossible or excessively costly. In this context, challenges based on procedure-specific harms, such as the possibility to challenge discrete procurement decisions under the general rules on procurement remedies, are inadequate, not least because there can be negative systemic harms that are very hard to capture in the challenge to discrete decisions, or for which no agent with active standing has adequate incentives. To avoid potential harms more effectively, ex ante external controls are needed instead.

Creating External Checks on Procurement Digitalisation

It is thus necessary to consider the creation of external ex ante controls applicable to these decisions, to ensure an adequate embedding of effective risk assessments to inform (and constrain) them. Two models are worth considering: certification schemes and independent oversight.

Certification or Conformity Assessments

While not applicable to procurement uses, the model of conformity assessment in the proposed EU AI Act offers a useful blueprint. The main potential shortcoming of conformity assessment systems is that they largely rely on self-assessments by the technology vendors, and thus on first party assurance. Third-party certification (or algorithmic audits) is possible, but voluntary. Whether there would be sufficient (market) incentives to generate broad (voluntary) use of third-party conformity assessments remains to be seen. And while public buyers could seek to impose the use of certification mechanisms as a condition for participation in tender procedures, this is a less than guaranteed governance strategy given the EU procurement rules’ functional approach to the use of labels and certificates—which systematically requires public buyers to accept alternative means of proof of compliance. This seems to offer limited potential for (voluntary) certification schemes in this specific context.

Relatedly, the conformity assessment system foreseen in the EU AI Act is also weakened by its reliance on vague concepts with non-obvious translation into verifiable criteria in the context of a third-party assurance audit. This can generate significant limitations in the conformity assessment process. This difficulty is intended to be resolved through the development of harmonised standards by European standardisation organisations and, where those do not exist, through the approval by the European Commission of common specifications. However, such harmonised standards will largely create the same risks of commercial regulatory capture mentioned above.

Overall, the possibility of relying on ‘third-party’ certification schemes offers limited advantages over the self-regulatory approach.

Independent External Oversight

Moving beyond the governance limitations of voluntary third-party certification mechanisms and creating effective external checks on the adoption of digital technologies for procurement governance would require external oversight. An option would be to make the envisaged third-party conformity assessments mandatory, but that would perpetuate the risks of regulatory capture and the outsourcing of the assurance system to private parties. A different, preferable option would be to assign the approval of the decisions to adopt digital technologies and the verification of the relevant risks assessments to a centralised authority also tasked with setting the applicable requirements therefor. The regulator would thus be placed as gatekeeper of the process of transition to digital procurement governance, instead of the atomised imposition of this role on public buyers. This would be reflective of the general features of the system of external controls proposed in the US State of Washington’s Bill SB 5116 (for discussion, see here).

The main goal would be to introduce an element of external verification of the assessment of potential AI harms and the related taking of risks in the adoption of digital technologies. It is submitted that the regulator needs to be independent, so that the system fully encapsulates the advantages of third-party assurance mechanisms. It is also submitted that the data protection regulator may not be best placed to take on the role, as its expertise—even if advanced in some aspects of data-intensive digital technologies—primarily relates to issues concerning individual rights and their enforcement. The more diffuse collective interests at stake in the process of transition to a new model of public digital governance (not only in procurement) would require a different set of analyses. While reforming data protection regulators to become AI mega-regulators could be an option, it is not necessarily desirable; an easier-to-implement, incremental approach would involve the creation of a new independent authority to control the adoption of AI in the public sector, including in the specific context of procurement digitalisation.

Conclusion

An analysis of emerging regulatory approaches in the EU and the UK shows that the adoption of digital technologies by public buyers is largely unregulated and only subject to voluntary measures, or to open-ended obligations in areas without clear standards (which reduces the prospect of effective mandatory enforcement). The emerging model of AI risk regulation in the EU and UK follows more general trends and points to the consolidation of a (sub)model of risk-based digital procurement governance that strongly relies on self-regulation and self-assessment.

However, given its limited digital capabilities, the public sector is not best placed to control or influence the process of self-regulation, which results in the outsourcing of crucial regulatory tasks to technology vendors and the consequent risk of regulatory capture and suboptimal design of commercially determined governance mechanisms. These risks are compounded by the emerging ‘second party assurance’ model, as self-assessments by technology vendors would not be adequately scrutinised by public buyers, either due to a lack of digital capabilities or the unavoidable structural conflicts of interest of assurance providers with an interest in the use of the technology, or both. This ‘second party’ assurance model does not include adequate challenge mechanisms despite efforts to disclose (parts of) the relevant self-assessments. Such disclosures are constrained by general problems with ‘comply or explain’ information-based governance mechanisms, with the emerging model showing design features that have proven problematic in other contexts (such as corporate governance and financial market regulation). Moreover, there is no clear mechanism to contest the decisions revealed by the disclosures, including in the context of (delayed) specific uses of the technological solutions.

The analysis also shows how a model of third-party assurance or certification would be affected by the same issues of outsourcing of regulatory decisions to private parties, and ultimately would largely replicate the shortcomings of the self-regulatory and self-assessed model. A certification model would thus only generate a marginal improvement over the emerging model—especially given the functional approach to the use of certification and labels in procurement.

Moving past these shortcomings requires assigning the approval of decisions whether to adopt digital technologies and the verification of the related impact assessments to an independent authority: the ‘AI in the Public Sector Authority’ (AIPSA). I will fully develop a proposal for such authority in coming months.

Emerging risks in digital procurement governance

In a previous blog post, I drew a technology-informed feasibility boundary to assess the realistic potential of digital technologies in the specific context of procurement governance. I suggested that the potential benefits from the adoption of digital technologies within that feasibility boundary had to be assessed against new governance risks and requirements for their mitigation.

In a new draft chapter (num 8) for my book project, I now explore the main governance risks and legal obligations arising from the adoption of digital technologies, which revolve around data governance, algorithmic transparency, technological dependency, technical debt, cybersecurity threats, the risks stemming from the long-term erosion of the skills base in the public sector, and difficult trade-offs due to the uncertainty surrounding immature and still changing technologies within an also evolving regulatory framework.

The analysis is not carried out in a vacuum, but in relation to the increasingly complex framework of EU digital law, including: the Open Data Directive; the Data Governance Act; the proposed Data Act; the NIS 2 Directive on cybersecurity measures, including its interaction with the Cybersecurity Act, and the proposed Directive on the resilience of critical entities and Cyber Resilience Act; as well as some aspects of the proposed EU AI Act.

This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4254931.

Current and imminent digital governance obligations for public buyers

Public buyers already shoulder digital governance obligations, and will very soon face further ones, even if they do not directly engage with digital technologies. These obligations concern both data governance and cybersecurity.

Data governance obligations

The Open Data Directive imposes an obligation to facilitate access to and re-use of procurement data for commercial or non-commercial purposes, and generates the starting position that data held by public buyers needs to be made accessible. Access is however excluded in relation to data subject to third-party rights, such as data protected by intellectual property rights (IPR), or data subject to commercial confidentiality (including business, professional, or company secrets). Moreover, in order to ensure compliance with the EU procurement rules, access should also be excluded to data subject to procurement-related confidentiality (Art 21 Dir 2014/24/EU), and to data whose disclosure should be withheld because it ‘would impede law enforcement or would otherwise be contrary to the public interest … or might prejudice fair competition between economic operators’ (Art 55 Dir 2014/24/EU). Compliance with the Open Data Directive thus cannot result in a system where all procurement data becomes accessible.

The Open Data Directive also falls short of requiring that access be facilitated through open data, as public buyers are under no active obligation to digitalise their information and can simply allow access to the information they hold ‘in any pre-existing format or language’. However, this will change with the entry into force of the rules on eForms (see here). eForms will require public buyers to hold (some) procurement information in digital format, which will trigger the obligation under the Open Data Directive to make that information available for re-use ‘by electronic means, in formats that are open, machine-readable, accessible, findable and re-usable, together with their metadata’. Moreover, procurement data that is captured not by eForms but in other ways (eg within the relevant e-procurement platform) will also be subject to this regime and, where making that information available for re-use by electronic means involves no ‘disproportionate effort, going beyond a simple operation’, it is plausible that the obligation of publication by electronic means will extend to such data too. This could significantly expand the scope of open procurement data obligations, but it will be important to ensure that it does not result in excessive disclosure of third-party data or competition-sensitive data.
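To make this more tangible, the following is a minimal sketch of what a machine-readable open data record derived from an eForms notice could look like. All field names and values are hypothetical illustrations (they do not reproduce the actual eForms schema), and the sketch is only meant to show the difference between holding information ‘in any pre-existing format’ and publishing it in an open, machine-readable format together with its metadata.

import json

# Hypothetical record of a contract award notice; the field names are
# illustrative assumptions and do not reproduce the actual eForms schema.
award_notice = {
    "notice_id": "2024-XYZ-001",
    "buyer": "Example Contracting Authority",
    "procedure_type": "open",
    "award_value_eur": 250000,
    "metadata": {
        "format": "application/json",   # open, machine-readable format
        "licence": "CC-BY-4.0",         # illustrative re-use licence
        "last_updated": "2024-01-25",
    },
}

# Publication 'by electronic means' could then be as simple as serialising
# the record for bulk download or API-based access.
print(json.dumps(award_notice, indent=2))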

Some public buyers may want to go further in facilitating (controlled) access to procurement data not susceptible to publication as open data. In that case, they will have to comply with the requirements of the Data Governance Act (and the Data Act, if adopted), and will need to ensure that, despite authorising access to the data, ‘the protected nature of data is preserved’. In the case of commercially confidential information, including trade secrets or content protected by IPR, this can require ensuring that the data has been ‘modified, aggregated or treated by any other method of disclosure control’. Where ‘anonymising’ the information is not possible, access can only be given with the permission of the third party, and in compliance with the applicable IPR, if any. The Data Governance Act explicitly imposes liability on the public buyer if it breaches the duty not to disclose third-party data, and it also explicitly requires that data access complies with EU competition law.
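By way of illustration only, a minimal sketch of one such disclosure control step follows, showing how bidder-level data could be aggregated before release so that ‘the protected nature of data is preserved’. The data and the pooling threshold are hypothetical assumptions, not a statement of what the Data Governance Act requires in any concrete case.

from statistics import mean

# Hypothetical bidder-level data (commercially confidential).
bids = {"Vendor A": 240000, "Vendor B": 255000, "Vendor C": 262000}

# Release only aggregates, and only if enough bidders are pooled to avoid
# re-identification of any individual bid (the threshold is an assumption).
MIN_POOL_SIZE = 3
if len(bids) >= MIN_POOL_SIZE:
    released = {
        "number_of_bids": len(bids),
        "average_bid_eur": round(mean(bids.values())),
    }
else:
    released = {"number_of_bids": len(bids)}  # suppress value statistics

print(released)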

All of this shows that public buyers have an inescapable data governance role that generates tensions in the design of open procurement data mechanisms. It is simply not possible to create a system that makes all procurement data open. Data governance requires the careful management of a system of multi-tiered access to different types of information at different times, by different stakeholders and under different conditions (as I already proposed a few years ago, see here). While the need to balance procurement transparency and the protection of data subject to the rights of others and of competition-sensitive data is not a new governance challenge, the digital management of this information creates heightened risks to the extent that data management solutions tend to be implemented as open access by default. Moreover, the assessment of the potential competition impact of data disclosure can be a moving target. The risk of distortions of competition is heightened by the possibility that the availability of data enables technology-supported forms of collusive (as well as corrupt) behaviour.
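A minimal sketch of such a multi-tiered access system follows. The stakeholder categories, data fields and embargo periods are hypothetical assumptions, intended only to illustrate how access to different types of information, at different times, by different stakeholders could be operationalised.

# Hypothetical tiers of access to procurement data; the stakeholder
# categories, fields and embargo periods are illustrative assumptions.
ACCESS_TIERS = {
    "public": {
        "fields": ["notice_id", "buyer", "procedure_type", "award_value_eur"],
        "embargo_days": 0,        # open data, available immediately
    },
    "oversight_body": {
        "fields": ["notice_id", "buyer", "procedure_type", "award_value_eur",
                   "all_bids"],   # includes competition-sensitive data
        "embargo_days": 0,
    },
    "researchers": {
        "fields": ["procedure_type", "award_value_eur"],  # aggregated data only
        "embargo_days": 365,      # delayed release to limit competitive harm
    },
}

def accessible_fields(stakeholder: str, days_since_award: int) -> list[str]:
    """Return the fields a stakeholder may access at a given point in time."""
    tier = ACCESS_TIERS.get(stakeholder)
    if tier is None or days_since_award < tier["embargo_days"]:
        return []
    return tier["fields"]

print(accessible_fields("researchers", days_since_award=400))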

Cybersecurity obligations

Most public buyers will face increased cybersecurity obligations once the NIS 2 Directive enters into force. The core substantive obligation will be a mandate to ‘take appropriate and proportionate technical, operational and organisational measures to manage the risks posed to the security of network and information systems which those entities use for their operations or for the provision of their services, and to prevent or minimise the impact of incidents on recipients of their services and on other services’. This will require a detailed assessment of what is proportionate to the cybersecurity exposure of a public buyer.

In that analysis, the public buyer will be able to take into account ‘the state of the art and, where applicable, relevant European and international standards, as well as the cost of implementation’, and in ‘assessing the proportionality of those measures, due account shall be taken of the degree of the entity’s exposure to risks, its size, the likelihood of occurrence of incidents and their severity, including their societal and economic impact’.

Public buyers may not have the ability to carry out such an assessment with internal capabilities, which immediately creates a risk of outsourcing of the cybersecurity risk assessment, as well as other measures to comply with the related substantive obligations. This can generate further organisational dependency on outside capability, which can itself be a cybersecurity risk. As discussed below, imminent cybersecurity obligations heighten the need to close the current gaps in digital capability.

Increased governance obligations for public buyers ‘going digital’

Public buyers that are ‘going digital’ and experimenting with or deploying digital solutions face increased digital governance obligations. Given the proportionality of the cybersecurity requirements under the NIS 2 Directive (above), public buyers that use digital technologies can expect to face more stringent substantive obligations. Moreover, the adoption of digital solutions generates new or increased risks of technological dependency, of two main types. The first type refers to vendor lock-in and interoperability, and primarily concerns the increasing need to develop advanced strategies to manage IPR, algorithmic transparency, and technical debt—which could largely be side-stepped by an ‘open source by default’ approach. The second concerns the erosion of the skills base of the public buyer as technology replaces the current workforce, which generates intellectual debt and operational dependency.

Open Source by Default?

The problem of technological lock-in is well understood, even if generally inadequately or insufficiently managed. However, the deployment of Artificial Intelligence (AI), and Machine Learning (ML) in particular, raises the additional issue of managing algorithmic transparency in a context of technological dependency. This generates specific challenges in relation to the administration of public contracts and the obligation to create competition in their (re)tendering. Without access to the algorithm’s source code, it is nigh on impossible to ensure a level playing field in the tender of related services, as well as in the re-tendering of the original contract for the specific ML or AI solution. This was recognised by the CJEU in a software procurement case (see here), which implies that, under EU law, public buyers are under an obligation to ensure that they have access and dissemination rights over the source code. This goes beyond emerging standards on algorithmic transparency, such as the UK’s, or what would be required if the EU AI Act were applicable, as reflected in the draft contract clauses for AI procurement. It creates a significant governance risk that requires explicit and careful consideration by public buyers, and points to the need to embed algorithmic transparency requirements as a pillar of the technological governance of procurement digitalisation.

Moreover, the development of digital technologies also creates a new wave of lock-in risks, as digital solutions are rarely off-the-shelf and can require a high level of customisation or co-creation between the technology provider and the public buyer. This creates the need for careful consideration of the governance of IPR allocation, and some of the guidance that promotes leaving IPR with the vendor needs careful reconsideration. A nuanced approach is required, as well as coordination with other legal regimes (eg State aid) where IPR is left with the contractor. Following some recent initiatives by the European Commission, an ‘open source by default’ approach would be suitable, as there can be high value in using and reusing common solutions: not only in terms of interoperability and a reduction of total development costs, but also in terms of enabling the emergence of communities of practice that can contribute to the ongoing improvement of the solutions on the basis of pooled resources, which can in turn mitigate some of the problems arising from limited access to digital skills.

Finally, it should be stressed that most of these technologies are still emergent or immature, which generates additional governance risks. The adoption of such emergent technologies generates technical debt, which is not solely a financial issue but a structural barrier to digitalisation. Technical debt risks stress the importance of the ‘open source by default’ approach mentioned above, as open source can facilitate the progressive collective repayment of technical debt in relation to widely adopted solutions.

(Absolute) technological dependency

As mentioned, a second source of technological dependency concerns the erosion of the skills base of the public buyer as technology replaces the current workforce. This is different from dependence on a given technology (as above), and concerns dependence on any technological solution to carry out functions previously undertaken by human operators. This can generate two specific risks: intellectual debt and operational dependency.

In this context, intellectual debt refers to the loss of institutional knowledge and memory resulting from eg the participation in the development and deployment of the technological solutions of agents no longer involved with the technology (eg external providers). There can be many forms of intellectual debt risk, and some can be mitigated or excluded through eg detailed technical documentation. Other forms of intellectual debt risk, however, are more difficult to mitigate. Consider, for example, situations where reliance on a technological solution (eg robotic process automation, RPA) erases institutional knowledge of why a specific process is carried out, as well as how that process is carried out (eg why a specific source of information is checked for the purposes of integrity screening and how that is done). Mitigating this requires keeping additional capability and institutional knowledge (and memory) to be able to explain in full detail what specific function the technology is carrying out, why, how that is done, and how it would be done in the absence of the technology (if it could be done at all). To put it plainly, it requires keeping the ability to ‘do it by hand’—or at the very least to be able to explain how that would be done.
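As a purely hypothetical illustration of how the ability to ‘do it by hand’ can be preserved in documentation, the following sketch records, alongside an automated integrity screening step, why the check exists and how it would be performed manually. The check, the register and the identifiers are assumptions for the purposes of the example, not a reference to any real system.

# Hypothetical automated integrity screening step; the register and
# identifiers are illustrative assumptions.
def screen_tenderer(company_id: str, debarment_register: set[str]) -> bool:
    """Return True if the tenderer passes the debarment check.

    WHY: exclusion grounds require verifying that tenderers are not debarred.
    HOW (automated): the company identifier is looked up in a register extract.
    HOW (by hand): an officer would search the register for the identifier
    and record the result on the procurement file.
    """
    return company_id not in debarment_register

print(screen_tenderer("EX-123", debarment_register={"EX-999"}))  # True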

Where it would be impossible or unfeasible to carry out the digitised task without the technology, digitalisation creates absolute operational dependency. Mitigating such operational dependency requires an assessment of ‘system critical’ technological deployments without which it is not possible to carry out the relevant procurement function and, most likely, the deployment of measures to ensure system resilience (including redundancy, if appropriate) and system integrity (eg in relation to cybersecurity, as above). It is however important to acknowledge that there will always be limits to ensuring system resilience and integrity, which should raise questions about the desirability of generating situations of absolute operational dependency. While this may be less relevant in the context of procurement governance than in other contexts, it can still be an important consideration to factor into decision-making, as technological practice can fuel a bias towards further technological practice and thereby support unquestioned technological expansion. In other words, it will be important to consider the limits of absolute technological delegation.

The crucial need to boost in-house digital skills in the public sector

The importance of digital capabilities to manage technological governance risks emerges as a running theme. The specific governance risks identified in relation to data and systems integrity, including cybersecurity risks, as well as the need to engage in sophisticated management of data and IPR, show that skills shortages are problematic in the ongoing use and maintenance of digital solutions, as their implementation does not diminish, but rather expands, the scope of technology-related governance challenges.

There is an added difficulty in the fact that the likelihood of materialisation of those data, systems integrity, and cybersecurity risks grows with reduced digital capabilities, as the organisation using digital solutions may be unable to identify and mitigate them. It is not only that the technology carries risks that are either known knowns or known unknowns (as above), but also that the organisation may experience them as unknown unknowns due to its limited digital capability. Limited digital skills compound those governance risks.

There is a further risk that digitalisation and the related increase in digital capability requirements can embed an element of (unacknowledged) organisational exposure that mirrors the potential benefits of the technologies. While technology adoption can augment the organisation’s capability (eg by reducing administrative burdens through automation), it also makes the entire organisation dependent on its (disproportionately small) digital capabilities, and thus particularly vulnerable to the loss of those limited capabilities. From a governance perspective, this makes sustainable access to digital skills a crucial element of the critical vulnerabilities and resilience assessment that should accompany all decisions to deploy a digital technology solution.

A plausible approach would be to seek to mitigate the risk of insufficient access to in-house skills through eg the creation of additional, standby or redundant contracted capability, but this would come with its own costs and governance challenges. Moreover, the added complication is that the digital skills gap that exposes the organisation to these risks in the first place can also fuel a dynamic of further reliance on outside capabilities (eg from consultancy firms) beyond the development and adoption of those digital solutions. This has the potential to exacerbate the long-term erosion of the skills base in the public sector. Digitalisation thus heightens the need for the public sector to build up its expertise and skills, as the only way of slowing down or reducing the widening digital skills gap and ensuring organisational resilience and a sustainable digital transition.

Conclusion

Public buyers already face significant digital governance obligations, and these obligations and the underlying risks can only increase (potentially, very significantly) with further progress along the path of procurement digitalisation. Ultimately, to ensure adequate digital procurement governance, it is not only necessary to take a realistic look at the potential of the technology and the required enabling factors (see here), but also to embed a comprehensive mechanism of risk assessment in the process of technological adoption, which requires enhanced public sector digital capabilities, as stressed here. Such an approach can mitigate the policy irresistibility that surrounds these technologies (see here) and contribute to a gradual and sustainable process of procurement digitalisation. The ways in which such risk assessments should be carried out require further exploration, including consideration of whether to subject the adoption of digital technologies for procurement governance to external checks (see here). This will be the object of forthcoming analysis.

Public procurement and [AI] source code transparency, a (downstream) competition issue (re C-796/18)

Two years ago, in its Judgment of 28 May 2020 in case C-796/18, Informatikgesellschaft für Software-Entwicklung, EU:C:2020:395 (the ‘ISE case’), the Court of Justice of the European Union (CJEU) delivered a preliminary ruling that can have very significant impacts in the artificial intelligence (AI) space, despite being concerned with ‘old school’ software. More generally, the ISE case set the requirements to ensure that a contracting authority does not artificially distort competition for public contracts concerning (downstream) software services generally and, I argue, AI services in particular.

The case risks going unnoticed because it concerned a relatively under-discussed form of self-organisation by the public administration that is exempted from the procurement rules (i.e. public-public cooperation; on that dimension of the case, see W Janssen, ‘Article 12’ in R Caranta and A Sanchez-Graells, European Public Procurement. Commentary on Directive 2014/24/EU (EE 2021) 12.57 and ff). It is thus worth revisiting the case and considering how it squares with regulatory developments concerning the procurement of AI, such as the development of standard clauses under the auspices of the European Commission.

The relevant part of the ISE case

In the ISE case, one of the issues at stake concerned whether a contracting authority would be putting an economic operator (i.e. the software developer) in a position of advantage vis-à-vis its competitors by accepting the transfer of software free of charge from another contracting authority, conditional on undertaking to further develop that software and to share (also free of charge) those developments of the software with the entity from which it had received it.

The argument would be that by simply accepting the software, the receiving contracting authority would be advantaging the software publisher because ‘in practice, the contracts for the adaptation, maintenance and development of the base software are reserved exclusively for the software publisher since its development requires not only the source code for the software but also other knowledge relating to the development of the source code’ (C-796/18, para 73).

This is an important issue because it primarily concerns how to deal with incumbency (and IP) advantages in software-related procurement. The CJEU, in the context of the exemption for public-public cooperation regulated in Article 12 of Directive 2014/24/EU, established that

in order to ensure compliance with the principles of public procurement set out in Article 18 of Directive 2014/24 … first [the collaborating contracting authorities must] have the source code for the … software, second, that, in the event that they organise a public procurement procedure for the maintenance, adaptation or development of that software, those contracting authorities communicate that source code to potential candidates and tenderers and, third, that access to that source code is in itself a sufficient guarantee that economic operators interested in the award of the contract in question are treated in a transparent manner, equally and without discrimination (para 75).

Functionally, there is no reason to limit that three-pronged test to the specific context of public-public cooperation and, in my view, the CJEU’s position is generalisable as the relevant test to ensure that there is no artificial narrowing of competition in the tendering of software contracts due to incumbency advantage.

Implications of the ISE case

What this means is that, functionally, contracting authorities are under an obligation to ensure that they have access and dissemination rights over the source code, at the very least for the purposes of re-tendering the contract, or tendering ancillary contracts. More generally, they also need to have a sufficient understanding of the software — or technical documentation enabling that knowledge — so that they can share it with potential tenderers and in that manner ensure that competition is not artificially distorted.

All of this is of high relevance and importance in the context of emerging practices of AI procurement. The debates around AI transparency are in large part driven by issues of commercial opacity/protection of business secrets, in particular of the source code, which makes it difficult both to justify the deployment of AI in the public sector (for, let’s call them, due process and governance reasons demanding explainability) and to manage its procurement and its propagation within the public sector (e.g. as a result of initiatives such as ‘buy once, use many times’ or collaborative and joint approaches to the procurement of AI, which are seen as strategically significant).

While there is a movement towards requiring source code transparency (e.g., though not necessarily, by using open source solutions), this is not at all mainstreamed in policy-making. For example, the pilot UK algorithmic transparency standard does not mention source code. Short of future rules demanding source code transparency, which seem unlikely (see e.g. the approach in the proposed EU AI Act, Art 70), this issue will remain one for contractual regulation and negotiation. And contracts are likely to follow the approach of the general rules.

For example, in the proposal for standard contractual clauses for the procurement of AI by public organisations being developed under the auspices of the European Commission and on the basis of the experience of the City of Amsterdam, access to source code is presented as an optional contractual requirement on transparency (Art 6):

<optional> Without prejudice to Article 4, the obligations referred to in article 6.2 and article 6.3 [on assistance to explain an AI-generated decision] include the source code of the AI System, the technical specifications used in developing the AI System, the Data Sets, technical information on how the Data Sets used in developing the AI System were obtained and edited, information on the method of development used and the development process undertaken, substantiation of the choice for a particular model and its parameters, and information on the performance of the AI System.

For the reasons above, I would argue that such a clause is not at all optional, but a basic requirement in the procurement of AI if the contracting authority is to be able to legally discharge its obligations under EU public procurement law going forward. And given the uncertainty, at the time of procurement, over the future development, integration or replacement of AI solutions, this seems an unavoidable issue in all cases of AI procurement.

Let’s see whether the CJEU is confronted any time soon with a similar issue, or with the need to ascertain the value of access to data as a ‘pecuniary interest’ (which, I think, on the basis of a different part of the ISE case, is clearly to be answered in the positive).