Some thoughts on the US' Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI

On 30 October 2023, President Biden adopted the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (the ‘AI Executive Order’, see also its Factsheet). The use of AI by the US Federal Government is an important focus of the AI Executive Order. It will be subject to a new governance regime detailed in the Draft Policy on the use of AI in the Federal Government (the ‘Draft AI in Government Policy’, see also its Factsheet), which is open for comment until 5 December 2023. Here, I reflect on these documents from the perspective of AI procurement as a major plank of this governance reform.

Procurement in the AI Executive Order

Section 2 of the AI Executive Order formulates eight guiding principles and priorities in advancing and governing the development and use of AI. Section 2(g) refers to AI risk management, and states that

It is important to manage the risks from the Federal Government’s own use of AI and increase its internal capacity to regulate, govern, and support responsible use of AI to deliver better results for Americans. These efforts start with people, our Nation’s greatest asset. My Administration will take steps to attract, retain, and develop public service-oriented AI professionals, including from underserved communities, across disciplines — including technology, policy, managerial, procurement, regulatory, ethical, governance, and legal fields — and ease AI professionals’ path into the Federal Government to help harness and govern AI. The Federal Government will work to ensure that all members of its workforce receive adequate training to understand the benefits, risks, and limitations of AI for their job functions, and to modernize Federal Government information technology infrastructure, remove bureaucratic obstacles, and ensure that safe and rights-respecting AI is adopted, deployed, and used.

Section 10 then establishes specific measures to advance Federal Government use of AI. Section 10.1(b) details a set of governance reforms to be implemented through guidance to be issued by the Director of the Office of Management and Budget (OMB) to strengthen the effective and appropriate use of AI, advance AI innovation, and manage risks from AI in the Federal Government. Section 10.1(b) includes the following (emphases added):

The Director of OMB’s guidance shall specify, to the extent appropriate and consistent with applicable law:

(i) the requirement to designate at each agency within 60 days of the issuance of the guidance a Chief Artificial Intelligence Officer who shall hold primary responsibility in their agency, in coordination with other responsible officials, for coordinating their agency’s use of AI, promoting AI innovation in their agency, managing risks from their agency’s use of AI …;

(ii) the Chief Artificial Intelligence Officers’ roles, responsibilities, seniority, position, and reporting structures;

(iii) for [covered] agencies […], the creation of internal Artificial Intelligence Governance Boards, or other appropriate mechanisms, at each agency within 60 days of the issuance of the guidance to coordinate and govern AI issues through relevant senior leaders from across the agency;

(iv) required minimum risk-management practices for Government uses of AI that impact people’s rights or safety, including, where appropriate, the following practices derived from OSTP’s Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework: conducting public consultation; assessing data quality; assessing and mitigating disparate impacts and algorithmic discrimination; providing notice of the use of AI; continuously monitoring and evaluating deployed AI; and granting human consideration and remedies for adverse decisions made using AI;

(v) specific Federal Government uses of AI that are presumed by default to impact rights or safety;

(vi) recommendations to agencies to reduce barriers to the responsible use of AI, including barriers related to information technology infrastructure, data, workforce, budgetary restrictions, and cybersecurity processes;

(vii) requirements that [covered] agencies […] develop AI strategies and pursue high-impact AI use cases;

(viii) in consultation with the Secretary of Commerce, the Secretary of Homeland Security, and the heads of other appropriate agencies as determined by the Director of OMB, recommendations to agencies regarding:

(A) external testing for AI, including AI red-teaming for generative AI, to be developed in coordination with the Cybersecurity and Infrastructure Security Agency;

(B) testing and safeguards against discriminatory, misleading, inflammatory, unsafe, or deceptive outputs, as well as against producing child sexual abuse material and against producing non-consensual intimate imagery of real individuals (including intimate digital depictions of the body or body parts of an identifiable individual), for generative AI;

(C) reasonable steps to watermark or otherwise label output from generative AI;

(D) application of the mandatory minimum risk-management practices defined under subsection 10.1(b)(iv) of this section to procured AI;

(E) independent evaluation of vendors’ claims concerning both the effectiveness and risk mitigation of their AI offerings;

(F) documentation and oversight of procured AI;

(G) maximizing the value to agencies when relying on contractors to use and enrich Federal Government data for the purposes of AI development and operation;

(H) provision of incentives for the continuous improvement of procured AI; and

(I) training on AI in accordance with the principles set out in this order and in other references related to AI listed herein; and

(ix) requirements for public reporting on compliance with this guidance.

Section 10.1(b) of the AI Executive Order establishes two sets or types of requirements.

First, there are internal governance requirements, which revolve around the appointment of Chief Artificial Intelligence Officers (CAIOs) and AI Governance Boards, their roles, and support structures. This set of requirements seeks to strengthen the ability of Federal Agencies to understand AI and to provide effective safeguards in its governmental use. The crucial set of substantive protections from this internal perspective derives from the required minimum risk-management practices for Government uses of AI, which are placed directly under the responsibility of the relevant CAIO.

Second, there are external (or relational) governance requirements that revolve around the agency’s ability to control and challenge tech providers. This involves the transfer (back to back) of minimum risk-management practices to AI contractors, but also includes commercial considerations. The tone of the Executive Order indicates that this set of requirements is meant to neutralise risks of commercial capture and commercial determination by imposing oversight and external verification. From an AI procurement governance perspective, the requirements in Section 10.1(b)(viii) are particularly relevant. As some of those requirements will need further development with a view to their operationalisation, Section 10.1(d)(ii) of the AI Executive Order requires the Director of OMB to develop an initial means to ensure that agency contracts for the acquisition of AI systems and services align with its Section 10.1(b) guidance.

Procurement in the Draft AI in Government Policy

The guidance required by Section 10.1(b) of the AI Executive Order has been formulated in the Draft AI in Government Policy, which offers more detail on the relevant governance mechanisms and the requirements for AI procurement. Section 5 on managing risks from the use of AI is particularly relevant from an AI procurement perspective. While Section 5(d) refers explicitly to managing risks in AI procurement, given that the primary substantive obligations will arise from the need to comply with the required minimum risk-management practices for Government uses of AI, this specific guidance needs to be read in the broader context of AI risk-management within Section 5 of the Draft AI in Government Policy.

Scope

The Draft AI in Government Policy relies on a tiered approach to AI risk by imposing specific obligations in relation to safety-impacting and rights-impacting AI only. This is an important element of the policy because these two categories are defined (in Section 6) and in principle will cover pre-established lists of AI use, based on a set of presumptions (Section 5(b)(i) and (ii)). However, CAIOs will be able to waive the application of minimum requirements for specific AI uses where, ‘based upon a system-specific risk assessment, [it is shown] that fulfilling the requirement would increase risks to safety or rights overall or would create an unacceptable impediment to critical agency operations‘ (Section 5(c)(iii)). Therefore, these are not closed lists and the specific scope of coverage of the policy will vary with such determinations. There are also some exclusions from minimum requirements where the AI is used for narrow purposes (Section 5(c)(i))—notably the ‘Evaluation of a potential vendor, commercial capability, or freely available AI capability that is not otherwise used in agency operations, solely for the purpose of making a procurement or acquisition decision’; AI evaluation in the context of regulatory enforcement, law enforcement or national security action; or research and development.

The scope of the policy may be under-inclusive, or generate risks of under-inclusiveness at the boundary, in two respects. First, the definition of AI for the purposes of the Draft AI in Government Policy excludes ‘robotic process automation or other systems whose behavior is defined only by human-defined rules or that learn solely by repeating an observed practice exactly as it was conducted’ (Section 6). This could be under-inclusive to the extent that the minimum risk-management practices for Government uses of AI create requirements that are not otherwise applicable to Government use of (non-AI) algorithms. There is a commonality of risks (eg discrimination, data governance risks) that would be better managed if there were a joined-up approach. Moreover, developing minimum practices in relation to those means of automation would serve to develop institutional capability that could then support the adoption of AI as defined in the policy. Second, the variability in coverage stemming from consideration of ‘unacceptable impediments to critical agency operations’ opens the door to potentially problematic waivers. While these are subject to disclosure and notification to OMB, it is not entirely clear on what grounds OMB could challenge those waivers. This is thus an area where the guidance may require further development.

Extensions and waivers

In relation to covered safety-impacting or rights-impacting AI (as above), Section 5(a)(i) establishes the important principle that US Federal Government agencies have until 1 August 2024 to implement the minimum practices in Section 5(c), ‘or else stop using any AI that is not compliant with the minimum practices’. This type of sunset clause on the currently implicit authorisation for the use of AI is a potentially powerful mechanism. However, the Draft also establishes that such an obligation to discontinue non-compliant AI use must be ‘consistent with the details and caveats in that section [5(c)]’, which includes the possibility, until 1 August 2024, for agencies to

request from OMB an extension of limited and defined duration for a particular use of AI that cannot feasibly meet the minimum requirements in this section by that date. The request must be accompanied by a detailed justification for why the agency cannot achieve compliance for the use case in question and what practices the agency has in place to mitigate the risks from noncompliance, as well as a plan for how the agency will come to implement the full set of required minimum practices from this section.

Again, the guidance does not detail on what grounds OMB would grant those extensions or how long they would be for. There is also a clear interaction between the extension and waiver mechanisms. For example, an agency that saw its request for an extension declined could try to waive that particular AI use—or agencies could simply try to waive AI uses rather than applying for extensions, as the requirements for a waiver seem to be rather different (and potentially less demanding) than those applicable to an extension. In that regard, it seems that waiver determinations are ‘all or nothing’, whereas the system could be more flexible (and protective) if waiver decisions not only needed to explain why meeting the minimum requirements would generate heightened overall risks or pose ‘unacceptable impediments to critical agency operations’, but also had to meet the lower burden of mitigation currently expected in extension applications—that is, a detailed justification of the practices the agency has in place to mitigate the risks from noncompliance where those risks can be partly mitigated. In other words, it would be preferable to have a more continuous spectrum of mitigation measures in the context of waivers as well.

General minimum practices

In relation to both safety-impacting and rights-impacting AI uses, the Draft AI in Government Policy would require agencies to engage in risk management both before and while using AI.

Preventative measures include:

  • completing an AI Impact Assessment documenting the intended purpose of the AI and its expected benefit, the potential risks of using AI, and an analysis of the quality and appropriateness of the relevant data;

  • testing the AI for performance in a real-world context—that is, testing under conditions that ‘mirror as closely as possible the conditions in which the AI will be deployed’; and

  • independently evaluating the AI, with the particularly important requirement that ‘The independent reviewing authority must not have been directly involved in the system’s development.’ In my view, it would also be important for the independent reviewing authority not to be involved in the future use of the AI, as its (future) operational interest could also be a source of bias in the testing process and the analysis of its results.

In-use measures include:

  • conducting ongoing monitoring and establishing thresholds for periodic human review, with a focus on monitoring ‘degradation to the AI’s functionality and to detect changes in the AI’s impact on rights or safety’—‘human review, including renewed testing for performance of the AI in a real-world context, must be conducted at least annually, and after significant modifications to the AI or to the conditions or context in which the AI is used’;

  • mitigating emerging risks to rights and safety—crucially, ‘Where the AI’s risks to rights or safety exceed an acceptable level and where mitigation is not practicable, agencies must stop using the affected AI as soon as is practicable’. In that regard, the draft indicates that ‘Agencies are responsible for determining how to safely decommission AI that was already in use at the time of this memorandum’s release without significant disruptions to essential government functions’. However, this is also a process that would benefit from close oversight by OMB, as it could otherwise jeopardise the effectiveness of the extension and waiver mechanisms discussed above—in which case additional detail in the guidance would be required;

  • ensuring adequate human training and assessment;

  • providing appropriate human consideration as part of decisions that pose a high risk to rights or safety; and

  • providing public notice and plain-language documentation through the AI use case inventory—however, this is subject to a large number of caveats (notice must be ‘consistent with applicable law and governmentwide guidance, including those concerning protection of privacy and of sensitive law enforcement, national security, and other protected information’) and more detailed guidance on how to assess these issues would be welcome (if it exists, a cross-reference in the draft policy would be helpful).

Additional minimum practices for rights-impacting AI

In relation to rights-impacting AI only, the Draft AI in Government Policy would require agencies to take additional measures.

Preventative measures include:

  • taking steps to ensure that the AI will advance equity, dignity, and fairness—including proactively identifying and removing factors contributing to algorithmic discrimination or bias; assessing and mitigating disparate impacts; and using representative data; and

  • consulting and incorporating feedback from affected groups.

In-use measures include:

  • conducting ongoing monitoring and mitigation for AI-enabled discrimination;

  • notifying negatively affected individuals—this is an area where the draft guidance is rather woolly, as it includes a set of complex caveats: individual notice that ‘AI meaningfully influences the outcome of decisions specifically concerning them, such as the denial of benefits’ must only be given ‘[w]here practicable and consistent with applicable law and governmentwide guidance’. Moreover, the draft only indicates that ‘Agencies are also strongly encouraged to provide explanations for such decisions and actions’, but does not require them to do so. In my view, this touches on two of the most important implications for individuals of Government use of AI: the ability to understand why decisions are made (reason-giving duties) and the burden of challenging automated decisions, which is increased where there is a lack of transparency about the automation. Therefore, on this point, the guidance seems too tepid—especially bearing in mind that this requirement only applies to ‘AI whose output serves as a basis for decision or action that has a legal, material, or similarly significant effect on an individual’s’ civil rights, civil liberties, or privacy; equal opportunities; or access to critical resources or services. In these cases, it seems clear that notice and explainability requirements need to go further.

  • maintaining human consideration and remedy processes—including ‘potential remedy to the use of the AI by a fallback and escalation system in the event that an impacted individual would like to appeal or contest the AI’s negative impacts on them. In developing appropriate remedies, agencies should follow OMB guidance on calculating administrative burden and the remedy process should not place unnecessary burden on the impacted individual. When law or governmentwide guidance precludes disclosure of the use of AI or an opportunity for an individual appeal, agencies must create appropriate mechanisms for human oversight of rights-impacting AI’. This is another crucial area, concerning the right not to be subjected to fully automated decision-making where there is no meaningful remedy. This is also an area of the guidance that requires more detail, especially as to the adequate balance of burdens where, eg, the agency can automate the undoing of negative effects on individuals identified as a result of challenges by other individuals, or in the context of the broader monitoring of the functioning and effects of the rights-impacting AI. In my view, this would be an opportunity to mandate automation of remediation in a meaningful way.

  • maintaining options to opt-out where practicable.

Procurement-related practices

In addition to the need for agencies to be able to meet the above requirements in relation to procured AI—which will in itself create the need to cascade some of the requirements down to contractors, and which will be the object of future guidance on how to ensure that AI contracts align with the requirements—the Draft AI in Government Policy also requires that agencies procuring AI manage risks by:

  • aligning to National Values and Law by ensuring ‘that procured AI exhibits due respect for our Nation’s values, is consistent with the Constitution, and complies with all other applicable laws, regulations, and policies, including those addressing privacy, confidentiality, copyright, human and civil rights, and civil liberties’;

  • taking ‘steps to ensure transparency and adequate performance for their procured AI, including by: obtaining adequate documentation of procured AI, such as through the use of model, data, and system cards; regularly evaluating AI-performance claims made by Federal contractors, including in the particular environment where the agency expects to deploy the capability; and considering contracting provisions that incentivize the continuous improvement of procured AI’;

  • taking ‘appropriate steps to ensure that Federal AI procurement practices promote opportunities for competition among contractors and do not improperly entrench incumbents. Such steps may include promoting interoperability and ensuring that vendors do not inappropriately favor their own products at the expense of competitors’ offering’;

  • maximizing the value of data for AI; and

  • responsibly procuring Generative AI.

These high-level requirements are well targeted, and compliance with them would go a long way towards fostering ‘responsible AI procurement’ through adequate risk mitigation in ways that still allow the procurement mechanism to harness market forces to generate value for money.

However, operationalising these requirements will be complex, and the further OMB guidance should be rather detailed and practical.

Final thoughts

In my view, the AI Executive Order and the Draft AI in Government Policy lay the foundations for a significant strengthening of the governance of AI procurement with a view to embedding safeguards in public sector AI use. A crucially important characteristic in the design of these governance mechanisms is that they impose significant duties on the agencies seeking to procure and use AI, and explicitly seek to address risks of commercial capture and commercial determination. Another crucially important characteristic is that, at least in principle, use of AI is made conditional on compliance with a rather comprehensive set of preventative and in-use risk mitigation measures. The general aspects of this governance approach thus offer a very valuable blueprint for other jurisdictions considering how to boost AI procurement governance.

However, as always, the devil is in the details. One of the crucial risks in this approach to AI governance concerns a lack of independence of the entities making the relevant assessments. In the Draft AI in Government Policy, there are some risks of under-inclusion and/or excessive waivers of compliance with the relevant requirements (both explicit and implicit, through protracted processes of decommissioning of non-compliant AI), as well as a risk that ‘practical considerations’ will push compliance with the risk mitigation requirements well past the (ambitious) 1 August 2024 deadline through long or rolling extensions.

To mitigate these risks, the guidance should be much clearer on the role of OMB in extension, waiver and decommissioning decisions, as well as on the specific criteria and limits that should form part of those decisions. Only by ensuring adequate OMB intervention can a system of governance that still does not entirely (organisationally) separate procurement, use and oversight decisions reach the levels of independent verification required to neutralise not only commercial determination, but also operational dependency and the ‘policy irresistibility’ of digital technologies.

UK's 'pro-innovation approach' to AI regulation won't do, particularly for public sector digitalisation

Regulating artificial intelligence (AI) has become the challenge of the time. This is a crucial area of regulatory development and there are increasing calls—including from those driving the development of AI—for robust regulatory and governance systems. In this context, more details have now emerged on the UK’s approach to AI regulation.

Swimming against the tide, and seeking to diverge from the EU’s regulatory agenda and the EU AI Act, the UK announced a light-touch ‘pro-innovation approach’ in its July 2022 AI regulation policy paper. In March 2023, the same approach was supported by a Report of the Government Chief Scientific Adviser (the ‘GCSA Report’), and is now further developed in the White Paper ‘AI regulation: a pro-innovation approach’ (the ‘AI WP’). The UK Government has launched a public consultation that will run until 21 June 2023.

Given the relevance of the issue, it can be expected that the public consultation will attract a large volume of submissions, and that the ‘pro-innovation approach’ will be heavily criticised. Indeed, there is an on-going preparatory Parliamentary Inquiry on the Governance of AI that has already collected a wealth of evidence exploring the pros and cons of the regulatory approach outlined there. Moreover, initial reactions eg by the Public Law Project, the Ada Lovelace Institute, or the Royal Statistical Society have been (to different degrees) critical of the lack of regulatory ambition in the AI WP—while, as could be expected, think tanks closely linked to the development of the policy, such as the Alan Turing Institute, have expressed more positive views.

Whether the regulatory approach will shift as a result of the expected pushback is unclear. However, given that the AI WP follows the same deregulatory approach first suggested in 2018 and is strongly entrenched politically and in policy terms—the UK Government has self-assessed this approach as ‘world leading’ and claims it will ‘turbocharge economic growth’—it is doubtful that much will change as a result of the public consultation.

That does not mean we should not engage with the public consultation—quite the opposite. In the face of the UK Government’s dereliction of duty, or lack of ideas, it is more important than ever that there is robust pushback against the deregulatory approach being pursued. This is especially true in the context of public sector digitalisation and the adoption of AI by the public administration and in the provision of public services, where the Government (unsurprisingly) is unwilling to create regulatory safeguards to protect citizens from its own action.

In this blogpost, I sketch my main areas of concern with the ‘pro-innovation approach’ in the GCSA Report and AI WP, which I will further develop for submission to the public consultation, building on earlier views. Feedback and comments would be gratefully received: a.sanchez-graells@bristol.ac.uk.

The ‘pro-innovation approach’ in the GCSA Report — squaring the circle?

In addition to proposals on the intellectual property (IP) regulation of generative AI, the opening up of public sector data, transport-related, or cyber security interventions, the GCSA Report focuses on ‘core’ regulatory and governance issues. The report stresses that regulatory fragmentation is one of the key challenges, as is the difficulty for the public sector in ‘attracting and retaining individuals with relevant skills and talent in a competitive environment with the private sector, especially those with expertise in AI, data analytics, and responsible data governance‘ (at 5). The report also further hints at the need to boost public sector digital capabilities by stressing that ‘the government and regulators should rapidly build capability and know-how to enable them to positively shape regulatory frameworks at the right time‘ (at 13).

Although the rationale is not very clearly stated, to bridge regulatory fragmentation and facilitate the pooling of digital capabilities from across existing regulators, the report makes a central proposal to create a multi-regulator AI sandbox (at 6-8). The report suggests that it could be convened by the Digital Regulatory Cooperation Forum (DRCF)—which brings together four key regulators (the Information Commissioner’s Office (ICO), Office of Communications (Ofcom), the Competition and Markets Authority (CMA) and the Financial Conduct Authority (FCA))—and that DRCF should look at ways of ‘bringing in other relevant regulators to encourage join up’ (at 7).

The report recommends that the AI sandbox should operate on the basis of a ‘commitment from the participant regulators to make joined-up decisions on regulations or licences at the end of each sandbox process and a clear feedback loop to inform the design or reform of regulatory frameworks based on the insights gathered. Regulators should also collaborate with standards bodies to consider where standards could act as an alternative or underpin outcome-focused regulation’ (at 7).

Therefore, the AI sandbox would not only be multi-regulator, but would also encompass (in some way) standard-setting bodies (presumably UK ones only, though), without any consideration of the issues raised by public-private interaction in decision-making implying the exercise of regulatory public powers, or of regulatory capture and the risks of commercial determination. The report in general is extremely industry-orientated, eg in stressing, in relation to the overarching pacing problem, that ‘for emerging digital technologies, the industry view is clear: there is a greater risk from regulating too early’ (at 5), without this being in any way balanced with clear (non-industry) views that the biggest risk is actually in regulating too late, and that we are collectively frog-boiling into a ‘runaway AI’ fiasco.

Moreover, confusingly, despite the fact that the sandbox would be hosted by DRCF (of which the ICO is a leading member), the GCSA Report indicates that the AI sandbox ‘could link closely with the ICO sandbox on personal data applications’ (at 8). The fact that the report is itself unclear as to whether eg AI applications with data protection implications should be subjected to one or two sandboxes, or the extent to which the general AI sandbox would need to be integrated with sectoral sandboxes for non-AI regulatory experimentation, already indicates the complexity and dubious practical viability of the suggested approach.

It is also unclear why multiple sector regulators should be involved in any given iteration of a single AI sandbox where there may be no projects within their regulatory remit and expertise. The alternative approach of having an open or rolling AI sandbox mechanism led by a single AI authority, which would then draw expertise and work in collaboration with the relevant sector regulator as appropriate on a per-project basis, seems preferable. While some DRCF members could be expected to have to participate in a majority of sandbox projects (eg CMA and ICO), others would probably have a much less constant presence (eg Ofcom, or certainly the FCA).

Remarkably, despite this recognition of the functional need for a centralised regulatory approach and a single point of contact (primarily for industry’s convenience), the GCSA Report implicitly supports the 2022 AI regulation policy paper’s approach to not creating an overarching cross-sectoral AI regulator. The GCSA Report tries to create a ‘non-institutionalised centralised regulatory function’, nested under DRCF. In practice, however, implementing the recommendation for a single AI sandbox would create the need for the further development of the governance structures of the DRCF (especially if it was to grow by including many other sectoral regulators), or whichever institution ‘hosted it’, or else risk creating a non-institutional AI regulator with the related difficulties in ensuring accountability. This would add a layer of deregulation to the deregulatory effect that the sandbox itself creates (see eg Ranchordas (2021)).

The GCSA Report seems to try to square the circle of regulatory fragmentation by relying on cooperation as a centralising regulatory device, but it does this solely for the industry’s benefit and convenience, without paying any consideration to the future effectiveness of the regulatory framework. This is hard to understand, given the report’s identification of conflicting regulatory constraints, or in its terminology ‘incentives’: ‘The rewards for regulators to take risks and authorise new and innovative products and applications are not clear-cut, and regulators report that they can struggle to trade off the different objectives covered by their mandates. This can include delivery against safety, competition objectives, or consumer and environmental protection, and can lead to regulator behaviour and decisions that prioritise further minimising risk over supporting innovation and investment. There needs to be an appropriate balance between the assessment of risk and benefit’ (at 5).

This not only frames risk minimisation as a negative regulatory outcome (and further feeds into the narrative that precautionary regulatory approaches are somehow not legitimate because they run against industry goals—which deserves strong pushback, see eg Kaminski (2022)), but also reveals a major gap in the report’s proposal for the single AI sandbox. If each regulator has conflicting constraints, what evidence (if any) is there that collaborative decision-making will reduce, rather than exacerbate, such regulatory clashes? Are decisions meant to be arrived at by majority voting, or otherwise expected to deactivate (some or most) regulatory requirements in view of (perceived) gains in relation to other regulatory goals? Why has there been no consideration of eg the problems encountered by concurrency mechanisms in the application of sectoral and competition rules (see eg Dunne (2014), (2020) and (2021)), as an obvious and immediate precedent of the same type of regulatory coordination problems?

The GCSA Report also seems to assume that collaboration through the AI sandbox would be resource neutral for participating regulators, whereas it seems reasonable to presume that this additional layer of regulation (even if not institutionalised) would require further resources. And, in any case, there does not seem to be much consideration of the viability of asking resource-strapped regulators to create an AI sandbox where they can (easily) be out-skilled and over-powered by industry participants.

In my view, the GCSA Report already points at a significant weakness that carries through to the AI WP: the resistance to creating any new authorities despite the obvious functional need for centralised regulation—which is one of the main weaknesses, if not the single biggest weakness, of the AI WP—as well as the lack of strategic planning around public sector digital capabilities, despite well-recognised challenges (see eg Committee of Public Accounts (2021)).

The ‘pro-innovation approach’ in the AI WP — a regulatory black hole, privatisation of AI regulation, or both

The AI WP envisages an ‘innovative approach to AI regulation [that] uses a principles-based framework for regulators to interpret and apply to AI within their remits’ (para 36). It expects the framework to be ‘pro-innovation, proportionate, trustworthy, adaptable, clear and collaborative’ (para 37). As will become clear, however, such an ‘innovative approach’ solely amounts to the formulation of high-level, broad, open-textured and incommensurable principles to inform a soft law push towards the development of regulatory practices aligned with such principles in a highly fragmented and incomplete regulatory landscape.

The regulatory framework would be built on four planks (para 38): [i] an AI definition (paras 39-42); [ii] a context-specific approach (ie a ‘use-based’ approach, rather than a ‘technology-led’ approach, see paras 45-47); [iii] a set of cross-sectoral principles to guide regulator responses to AI risks and opportunities (paras 48-54); and [iv] new central functions to support regulators in delivering the AI regulatory framework (paras 70-73). In reality, though, there will be only two ‘pillars’ of the regulatory framework, and they do not involve any new institutions or rules. The AI WP vision thus largely seems to be that AI can be regulated in the UK in a world-leading manner without doing much at all.

AI Definition

The UK’s definition of AI will trigger substantive discussions, especially as it seeks to build it around ‘the two characteristics that generate the need for a bespoke regulatory response’: ‘adaptivity’ and ‘autonomy’ (para 39). Discussing the definitional issue is beyond the scope of this post but, on the specific identification of the ‘autonomy’ of AI, it is worth highlighting that this is an arguably flawed regulatory approach to AI (see Soh (2023)).

No new institutions

The AI WP makes clear that the UK Government has no plans to create any new AI regulator, either with a cross-sectoral (eg general AI authority) or sectoral remit (eg an ‘AI in the public sector authority’, as I advocate for). The Ministerial Foreword to the AI WP already stresses that ‘[t]o ensure our regulatory framework is effective, we will leverage the expertise of our world class regulators. They understand the risks in their sectors and are best placed to take a proportionate approach to regulating AI’ (at p2). The AI WP further stresses that ‘[c]reating a new AI-specific, cross-sector regulator would introduce complexity and confusion, undermining and likely conflicting with the work of our existing expert regulators’ (para 47). This, however, seems to presume that a new cross-sector AI regulator would be unable to coordinate with existing regulators, despite the institutional architecture of the regulatory framework foreseen in the AI WP entirely relying on inter-regulator collaboration (!).

No new rules

There will also not be new legislation underpinning regulatory activity, although the Government claims that the AI WP, ‘alongside empowering regulators to take a lead, [is] also setting expectations’ (at p3). The AI WP claims to develop a regulatory framework underpinned by five principles to guide and inform the responsible development and use of AI in all sectors of the economy: [i] Safety, security and robustness; [ii] Appropriate transparency and explainability; [iii] Fairness; [iv] Accountability and governance; and [v] Contestability and redress (para 10). However, they will not (initially) be put on a statutory footing; ‘the principles will be issued on a non-statutory basis and implemented by existing regulators’ (para 11). While there is some detail on the intended meaning of these principles (see para 52 and Annex A), the principles necessarily lack precision and, worse, there is a conflation of the principles with other (existing) regulatory requirements.

For example, it is surprising that the AI WP describes fairness as implying that ‘AI systems should (sic) not undermine the legal rights of individuals or organisations, discriminate unfairly against individuals or create unfair market outcomes’ (emphasis added), and stresses the expectation ‘that regulators’ interpretations of fairness will include consideration of compliance with relevant law and regulation’ (para 52). This encapsulates the risk that principles-based AI regulation ends up eroding compliance with, and enforcement of, current statutory obligations. A principle of AI fairness cannot modify or exclude existing legal obligations, and it should not risk doing so either.

Moreover, the AI WP suggests that, even if the principles are supported by a statutory duty for regulators to have regard to them, ‘while the duty to have due regard would require regulators to demonstrate that they had taken account of the principles, it may be the case that not every regulator will need to introduce measures to implement every principle’ (para 58). This conflates two issues. On the one hand, the need for activity subjected to regulatory supervision to comply with all principles and, on the other, the need for a regulator to take corrective action in relation to any of the principles. It should be clear that regulators have a duty to ensure that all principles are complied with in their regulatory remit, which does not seem to entirely or clearly follow from the weaker duty to have due regard to the principles.

Perpetuating regulatory gaps, in particular regarding public sector digitalisation

As a consequence of the decision not to create new regulators and the absence of new legislation, it is unclear whether the ‘regulatory strategy’ in the AI WP will have any real-world effects within existing regulatory frameworks, especially as the most ambitious intervention is to create ‘a statutory duty on regulators requiring them to have due regard to the principles’ (para 12)—but the Government may decide not to introduce it if ‘monitoring of the effectiveness of the initial, non-statutory framework suggests that a statutory duty is unnecessary’ (para 59).

However, what is already clear is that there is no new AI regulation on the horizon, despite the fact that the AI WP recognises that ‘some AI risks arise across, or in the gaps between, existing regulatory remits’ (para 27), that ‘there may be AI-related risks that do not clearly fall within the remits of the UK’s existing regulators’ (para 64), and the obvious and worrying existence of high risks to fundamental rights and values (para 4 and paras 22-25). The AI WP is naïve, to say the least, in setting out that ‘[w]here prioritised risks fall within a gap in the legal landscape, regulators will need to collaborate with government to identify potential actions. This may include identifying iterations to the framework such as changes to regulators’ remits, updates to the Regulators’ Code, or additional legislative intervention’ (para 65).

Hoping that such risk identification and gap analysis will take place without assigning specific responsibility for it—and seeking to exempt the Government from such responsibility—seems a bit too much to ask. In fact, this is at odds with the graphic depiction of how the AI WP expects the system to operate. As noted in (1) in the graph below, it is clear that the identification of risks that are cross-cutting or new (unregulated) risks that warrant intervention is assigned to a ‘central risk function’ (more below), not the regulators. Importantly, the AI WP indicates that such central function ‘will be provided from within government’ (para 15 and below). Which then raises two questions: (a) who will have the responsibility to proactively screen for such risks, if anyone, and (b) how has the Government not already taken action to close the gaps it recognises exists in the current legal landscape?

AI WP Figure 2: Central risks function activities.

This perpetuates the current regulatory gaps, in particular in sectors without a regulator or with regulators with very narrow mandates—such as the public sector and, to a large extent, public services. Importantly, this approach does not create any prohibition of impermissible AI uses, nor does it set any (workable) minimum requirements for the deployment of AI in high-risk uses, especially in the public sector. The contrast with the EU AI Act could not be starker and, in this aspect in particular, UK citizens should be very worried that the UK Government is not committing to any safeguards on the way technology can be used in eg determining access to public services, or by the law enforcement and judicial system. More generally, it is very worrying that the AI WP does not foresee any safeguards in relation to the quickly accelerating digitalisation of the public sector.

Loose central coordination leading to AI regulation privatisation

Remarkably, and in a functional disconnect similar to that of the GCSA Report (above), the decision not to create any new regulator/s (para 15) is taken in the same breath as the AI WP recognises that the small coordination layer within the regulatory architecture proposed in the 2022 AI regulation policy paper (ie, largely, the approach underpinning the DRCF) has been heavily criticised (para 13). The AI WP recognises that ‘the DRCF was not created to support the delivery of all the functions we have identified or the implementation of our proposed regulatory framework for AI’ (para 74).

The AI WP also stresses how ‘[w]hile some regulators already work together to ensure regulatory coherence for AI through formal networks like the AI and digital regulations service in the health sector and the Digital Regulation Cooperation Forum (DRCF), other regulators have limited capacity and access to AI expertise. This creates the risk of inconsistent enforcement across regulators. There is also a risk that some regulators could begin to dominate and interpret the scope of their remit or role more broadly than may have been intended in order to fill perceived gaps in a way that increases incoherence and uncertainty’ (para 29), which points at a strong functional need for a centralised approach to AI regulation.

To try to mitigate those regulatory risks and shortcomings, the AI WP proposes the creation of ‘a number of central support functions’, such as [i] central monitoring of the overall regulatory framework’s effectiveness and of the implementation of the principles; [ii] central risk monitoring and assessment; [iii] horizon scanning; [iv] supporting testbeds and sandboxes; [v] advocacy, education and awareness-raising initiatives; or [vi] promoting interoperability with international regulatory frameworks (para 14, see also para 73). Cryptically, the AI WP indicates that ‘central support functions will initially be provided from within government but will leverage existing activities and expertise from across the broader economy’ (para 15). Quite how this can be done effectively outwith a clearly defined, adequately resourced and durable institutional framework is anybody’s guess. In fact, the AI WP recognises that this approach ‘needs to evolve’ and that Government needs to understand how ‘existing regulatory forums could be expanded to include the full range of regulators’, what ‘additional expertise government may need’, and the ‘most effective way to convene input from across industry and consumers to ensure a broad range of opinions’ (para 77).

While the creation of a regulator seems a rather obvious answer to all these questions, the AI WP has rejected it in unequivocal terms. Is the AI WP a U-turn waiting to happen? Is the mention that ‘[a]s we enter a new phase we will review the role of the AI Council and consider how best to engage expertise to support the implementation of the regulatory framework’ (para 78) a placeholder for an imminent project to rejig the AI Council and turn it into an AI regulator? What is the place and role of the Office for AI and the Centre for Data Ethics and Innovation in all this?

Moreover, the AI WP indicates that the ‘proposed framework is aligned with, and supplemented by, a variety of tools for trustworthy AI, such as assurance techniques, voluntary guidance and technical standards. Government will promote the use of such tools’ (para 16). Relatedly, the AI WP relies on those mechanisms to avoid addressing issues of accountability across the AI life cycle, indicating that ‘[t]ools for trustworthy AI like assurance techniques and technical standards can support supply chain risk management. These tools can also drive the uptake and adoption of AI by building justified trust in these systems, giving users confidence that key AI-related risks have been identified, addressed and mitigated across the supply chain’ (para 84). Those tools are discussed in much more detail in part 4 of the AI WP (paras 106 ff). Annex A also creates a backdoor for technical standards to become the direct operationalisation of the general principles on which the regulatory framework is based, by explicitly identifying standards regulators may want to consider ‘to clarify regulatory guidance and support the implementation of risk treatment measures’.

This approach to the offloading of tricky regulatory issues to the emergence of private-sector led standards is simply an exercise in the transfer of regulatory power to those setting such standards, guidance and assurance techniques and, ultimately, a privatisation of AI regulation.

A different approach to sandboxes and testbeds?

The Government will take forward the GCSA recommendation to establish a regulatory sandbox for AI, which ‘will bring together regulators to support innovators directly and help them get their products to market. The sandbox will also enable us to understand how regulation interacts with new technologies and refine this interaction where necessary’ (p2). This is thus bound to hardwire some of the issues mentioned above in relation to the GCSA proposal, and is reflective of the general pro-industry approach of the AI WP, which is obvious in the framing that regulators are expected to ‘support innovators directly and help them get their products to market’. Industrial policy seems to be shoehorned and mainstreamed across all areas of regulatory activity, at least in relation to AI (though it can then easily bleed into non-AI-related regulatory activities).

While the AI WP indicates the commitment to implement the AI sandbox recommended in the GCSA Report, it is by no means clear that the implementation will follow the approach proposed in the report (ie a multi-regulator sandbox nested under the DRCF, with the expectation that it would develop a crucial coordination and regulatory centralisation effect). The AI WP indicates that the Government still has to explore ‘what service focus would be most useful to industry’ in relation to AI sandboxes (para 96), but it sets out the intention to ‘focus an initial pilot on a single sector, multiple regulator sandbox’ (para 97), which diverges from the GCSA Report’s approach of a sandbox for ‘multiple sectors, multiple regulators’. While the public consultation intends to gather feedback on which industry sector is the most appropriate, I would bet that the financial services sector will be chosen and that the ‘regulatory innovation’ will simply result in some closer cooperation between the ICO and the FCA.

Regulator capabilities — AI regulation on a shoestring?

The AI WP turns to the issue of regulator capabilities and stresses that ‘While our approach does not currently involve or anticipate extending any regulator’s remit, regulating AI uses effectively will require many of our regulators to acquire new skills and expertise’ (para 102), and that the Government has ‘identified potential capability gaps among many, but not all, regulators’ (para 103).

To try to (start to) address this fundamental issue in the context of a devolved and decentralised regulatory framework, the AI WP indicates that the Government will explore, for example, whether it is ‘appropriate to establish a common pool of expertise that could establish best practice for supporting innovation through regulatory approaches and make it easier for regulators to work with each other on common issues. An alternative approach would be to explore and facilitate collaborative initiatives between regulators – including, where appropriate, further supporting existing initiatives such as the DRCF – to share skills and expertise’ (para 105).

While the creation of ‘common regulatory capacity’ has been advocated by the Alan Turing Institute, and while this (or inter-regulator secondments, for example) could be a short-term fix, it seems to address the obvious challenge of adequately resourcing regulatory bodies without a medium- and long-term strategy to build up the digital capability of the public sector, and would perpetuate the current approach of AI regulation on a shoestring. The governance and organisational implications arising from the creation of a common pool of expertise need careful consideration, in particular as some of the likely dysfunctionalities are only marginally smaller than the current over-reliance on external consultants, or the ‘salami-slicing’ approach to regulatory and policy interventions that seems to bleed from the ‘agile’ management of technological projects into the realm of regulatory activity—which, however, requires institutional memory and the embedding of knowledge and expertise.

Procurement centralisation, digital technologies and competition (new working paper)


I have just uploaded to SSRN the new working paper ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’, which I will present at the conference on ‘Centralization and new trends’ to be held at the University of Copenhagen on 25-26 April 2023 (there is still time to register!).

The paper builds on my ongoing research on digital technologies and procurement governance, and focuses on the interaction between the strategic goals of procurement centralisation and digitalisation set by the European Commission in its 2017 public procurement strategy.

The paper identifies different ways in which current trends of procurement digitalisation and the challenges in procuring digital technologies push for further procurement centralisation. This is in particular to facilitate the extraction of insights from big data held by central purchasing bodies (CPBs); build public sector digital capabilities; and boost procurement’s regulatory gatekeeping potential. The paper then explores the competition implications of this technology-driven push for further procurement centralisation, in both ‘standard’ and digital markets.

The paper concludes by stressing the need to bring CPBs within the remit of competition law (which I had already advocated eg here), the opportunity to consider allocating CPB data management to a separate competent body under the Data Governance Act, and the related need to develop an effective system of mandatory requirements and external oversight of public sector digitalisation processes, especially to constrain CPBs’ (unbridled) digital regulatory power.

The full working paper reference is: Sanchez-Graells, Albert, ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’ (March 2, 2023), available at SSRN: https://ssrn.com/abstract=4376037. As always, any feedback is most welcome: a.sanchez-graells@bristol.ac.uk.

AI regulation by contract: submission to UK Parliament

In October 2022, the Science and Technology Committee of the House of Commons of the UK Parliament (STC Committee) launched an inquiry on the ‘Governance of Artificial Intelligence’. This inquiry follows the publication in July 2022 of the policy paper ‘Establishing a pro-innovation approach to regulating AI’, which outlined the UK Government’s plans for light-touch AI regulation. The inquiry seeks to examine the effectiveness of current AI governance in the UK, and the Government’s proposals that are expected to follow the policy paper and provide more detail. The STC Committee has published 98 pieces of written evidence, including submissions from UK regulators and academics that will make for interesting reading. Below is my submission, focusing on the UK’s approach to ‘AI regulation by contract’.

A. Introduction

01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’. In particular:

  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    • Is more legislation or better guidance required?

02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.

B. ‘AI Regulation by Contract’ through Procurement

03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus erected as a public interest gatekeeper in the process of adoption of AI by the public sector.

04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.

05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a close look at these instruments will show their inadequacy to provide clarity on the content of procedural and contractual obligations aimed at ensuring the goals stated above (para 03), as well as their potential to widen the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.

C. Guidelines and Framework for AI procurement

06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines provide high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights on data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.

07. Relatedly, some of the recommendations are very open ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.

08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, as a reflection of broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles across the civil service alone at close to 4,000.[14] This not only reduces the practical value of the Guidelines to facilitate responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework for AI adoption in the public sector.

09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustainable throughout the lifecycle of AI use in the public sector—and, crucially, would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.

D. Governance Shortcomings

10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to responsibly procure AI. The Guidelines cannot be operationalised by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels of transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to address the first requirement for effective regulation by contract in relation to clarifying the relevant obligations (para 04).

11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can further disempower public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] as well as creating risks of technical and intellectual debt in the deployment of AI solutions as consultants come and go and there is no capture of institutional knowledge and memory.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires a prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a value of £90 mn,[24] later revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus facilitate the funnelling of significant amounts of public funds to the private sector, rather than using those funds to build in-house capabilities. It can also result in multiple public buyers entering contracts for the same expertise, thus duplicating costs, as well as in a cumulative lack of institutional learning by the public sector because of atomised and uncoordinated contractual relationships.

12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers that can then introduce their own biases on the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would not only result from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks, due to market concentration and structural conflicts of interest affecting providers that sometimes act as consultants and at other times are involved in the development and deployment of AI solutions,[26] as well as due to insufficiently effective safeguards against conflicts of interest resulting from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.

13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess to what extent public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In that, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be done by entities with high capabilities, or it will be outsourced to consultants (thus reducing the scope for the revelation of governance-relevant information).

14. Second, compliance with the standard is not mandatory—at least while the pilot is developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard was made mandatory, it would be necessary to implement an external quality control mechanism to mitigate problems with the quality of self-reported disclosures that are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have capacity (and powers) to do so remains unclear, and it would in any case lack independence.

15. Finally, it should be stressed that the current approach to transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high stakes or impossible to reverse. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could potentially be very harmful for fundamental and individual socio-economic rights—as evidenced by the inclusion of some fields of application of AI in the public sector as ‘high risk’ in the EU’s proposed AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.

E. Conclusion: An Alternative Approach

16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector. Such legislation would then need to be developed in statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements can then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.

17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.

18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.

20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays limited to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the foreseen Procurement Review Unit (to also sit within Cabinet Office).

_______________________________________
Note: all websites last accessed on 25 October 2022.

[1] Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).

[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.

[3] Committee on Standards in Public Life, Artificial Intelligence and Public Standards (2020) 51.

[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.

[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.

[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.

[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.

[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.

[9] See eg < https://datahazards.com/index.html >.

[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.

[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.

[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.

[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).

[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.

[15] It is a dynamic purchasing system, or a list of pre-screened potential vendors public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.

[16] Above (n 5).

[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.

[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.

[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.

[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.

[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.

[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Centre for the Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.

[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.

[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.

[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.

[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.

[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.

[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.

[29] < https://artificialintelligenceact.eu/ >.

[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.

New paper on procurement corruption and AI

I have just uploaded a new paper on SSRN: ‘Procurement corruption and artificial intelligence: between the potential of enabling data architectures and the constraints of due process requirements’, to be published in S. Williams-Elegbe & J. Tillipman (eds), Routledge Handbook of Public Procurement Corruption (forthcoming). In this paper, I reflect on the practical improvements that using AI for anti-corruption purposes can deliver in the current (and foreseeable) context of AI development, the (lack of) availability of procurement and other data, and existing due process constraints on the automation or AI support of corruption-related procurement decision-making (such as eg debarment/exclusion or the imposition of fines). The abstract is as follows:

This contribution argues that the expectations around the deployment of AI as an anti-corruption tool in procurement need to be tamed. It explores how the potential applications of AI replicate anti-corruption interventions by human officials and, as such, can only provide incremental improvements but not a significant transformation of anti-corruption oversight and enforcement architectures. It also stresses the constraints resulting from existing procurement data and the difficulties in creating better, unbiased datasets and algorithms in the future, which would also generate their own corruption risks. The contribution complements this technology-centred analysis with a critical assessment of the legal constraints based on due process rights applicable even when AI supports continued human intervention. This in turn requires a close consideration of the AI-human interaction, as well as a continuation of the controls applicable to human decision-making in corruption-prone activities. The contribution concludes, first, that prioritising improvements in procurement data capture, curation and interconnection is a necessary but insufficient step; and second, that investments in AI-based anti-corruption tools cannot substitute, but only complement, current anti-corruption approaches to procurement.

As always, feedback more than welcome. Not least, because I somehow managed to write this ahead of the submission deadline, so I would have time to adjust things ahead of publication. Thanks in advance: a.sanchez-graells@bristol.ac.uk.

The institutional framework of the UK/EU Trade and Cooperation agreement — Public Procurement

The UK Parliament’s European Scrutiny Committee is conducting an inquiry into the ‘The institutional framework of the UK/EU Trade and Cooperation agreement’, which will remain open until 24 September 2021. In July, I submitted the written evidence below, which has now been published by the Committee. As always, comments or feedback most welcome (a.sanchez-graells@bristol.ac.uk).

I look forward to further outputs of this inquiry, as it will take some time to fully understand the functioning and effectiveness of the governance mechanisms (hastily) created in the EU-UK TCA.

Written Evidence to the House of Commons European Scrutiny Committee on “The institutional framework of the UK/EU Trade and Cooperation agreement”

Submitted 20 July 2021
By Professor Albert Sanchez-Graells
Professor of Economic Law
Co-Director, Centre for Global Law and Innovation
University of Bristol Law School
a.sanchez-graells@bristol.ac.uk

Submission

This document addresses some of the questions formulated by the House of Commons European Scrutiny Committee in its inquiry on “The institutional framework of the UK/EU Trade and Cooperation agreement” and, in particular:

  • What are the most important powers of the Trade and Cooperation Agreement (TCA) Partnership Council and the different Specialised Committees and what could the practical impact of the exercise of these powers be?

  • What are the key features of the dispute resolution procedures provided for in the TCA and what are the likely legal and policy implications of these for the UK? How closely do they follow precedent in other trade agreements and do they raise any concerns with respect to the UK’s regulatory autonomy?

  • How could the UK/EU TCA institutions be utilised by the UK and EU to raise and, where possible, address, concerns about legal and policy developments on the other side which are of importance to them respectively (e.g. for the UK, changes in EU regulation in key areas like financial services, pharmaceuticals and energy)?

  • What should the Government’s approach to representing the UK in meetings of the TCA’s joint bodies be? Should the Devolved Administrations be involved in discussions that relate to devolved competences?

1. Background

01. This submission focuses on the field of public procurement, which is of primary economic interest to both the UK and the EU. According to a recent report for the European Commission,[1] cross-border procurement from the EU27 represented on average 20% by value of the UK’s total procurement expenditure for the period 2016-2019.[2] In turn, cross-border procurement from the UK represented on average 15% by value of EU27 procurement expenditure for the same period.[3] Most of this cross-border procurement was indirect (17.6% for EU27 in UK, and 9% for UK in EU27), meaning that tenders were won by companies located in the same country as the contracting authority but controlled by companies in a foreign country[4]—in the most common cases, this meant that public contracts were awarded to subsidiaries of large foreign corporate groups, or to SMEs controlled by those groups. Direct cross-border procurement—where contracts are awarded to companies located in a foreign country, which are either independent or controlled by companies in the same or a third foreign country—had a smaller but still relevant economic scale (2.3% for EU27 in UK, and 6% for UK in EU27).

02. The economic relevance of both types of cross-border procurement is reflected in the bilateral market access commitments resulting from the UK’s accession to (and the EU’s continued membership of) the World Trade Organisation Government Procurement Agreement (WTO GPA),[5] and the additional bilateral market access commitments in the UK-EU Trade and Cooperation Agreement (TCA)[6]—whose Annex 25 largely replicates the pre-Brexit reciprocal market access commitments between the UK and EU27,[7] with the only exception of the explicit exclusion of healthcare services. However, given that the pre-Brexit procurement-related import penetration for human health services had an average value close to zero per cent of public expenditure in both the UK and most EU27 countries,[8] this exclusion is unlikely to have significant practical effects.

03. The TCA contains several relevant provisions to facilitate direct and indirect cross-border trade through the award of public contracts in Title VI of Heading One of Part Two (Arts 276 and ff). Of those provisions, and particularly in view of the UK’s intended reform of domestic procurement rules,[9] the rules more likely to trigger practical implementation issues seem to be: Article 280 on supporting evidence; Article 281 on conditions for participation relating to prior experience; Article 282 on registration systems and qualification procedures; Article 284 on abnormally low prices, in particular as it relates to subsidy control issues;[10] Article 285 on environmental, social and labour considerations;[11] Article 286 on review procedures; and Article 288 on the national treatment of locally established suppliers, which is applicable beyond ‘covered procurement’ (Art 277) and of particular importance to indirect cross-border procurement. The TCA also includes specific rules for the modification and rectification of market access commitments (Arts 289 to 293), which can become highly relevant if new trading patterns emerge during the implementation of the TCA that show a rebalancing of previous trends (see above para 01).

04. Institutionally, in addition to being under the general powers of the Partnership Council (Art 7(3)), public procurement regulation falls within the remit of the Trade Partnership Committee (Art 8(1)(a)), and even more specifically within the remit of the Trade Specialised Committee on Public Procurement (Art 8(1)(h), the ‘TSC on Procurement’), which is specifically tasked with addressing matters covered by Title VI of Heading One of Part Two, under the supervision of the Trade Partnership Committee (Art 8(2)(d)).[12] The TSC on Procurement is meant as the primary forum for the Parties to exchange information, discuss best practices and share implementation experience (Art 8(3)(f)), and has the tasks of monitoring the implementation of the procurement title of the TCA (Art 8(3)(a)) and discussing technical issues arising from TCA implementation (Art 8(3)(e)).

05. It can be expected that any future disputes over the regulation of public procurement will first emerge in the context of the activities of the TSC on Procurement, with potential escalation to the Trade Partnership Committee so that it can exercise its function of exploring the most appropriate way to prevent or solve any difficulty that may arise in relation to the interpretation and application of the TCA (Art 8(2)(e)); further escalation to the Partnership Council in relation to its power to make recommendations to the Parties regarding the implementation and application of the TCA (Art 7(4)(b)); and, ultimately, the possible launch of a formal dispute under Title I of Part Six of the TCA. Therefore, this submission will be primarily concerned with the configuration and likely operation of the TSC on Procurement and will only touch briefly on the more general powers of the Trade Partnership Committee and the Partnership Council. Dispute resolution mechanisms are not considered, except in relation to the potential overlap with those of the WTO Government Procurement Agreement.

2. Powers of the TSC on Procurement and of the Trade Partnership Committee, and practical impact of their exercise

06. The powers of the TSC on Procurement, like those of all other Trade Specialised Committees, are detailed in Article 8(3) TCA. Other than the general powers to monitor the implementation of the TCA, discuss technical issues and provide an information exchange forum mentioned above (para 04), the most important practical power would seem to be that of adopting decisions where the TCA (or a supplementing agreement) so provides (Art 8(3)(d)). However, it should be noted that the TCA does not foresee this possibility and that the TSC on Procurement is only mentioned in the provision that envisages its creation (Art 8(1)(h)). Therefore, the TSC on Procurement is currently devoid of decision-making powers and it can only be seen as a consultative technical forum primarily geared towards information exchange and technical dialogue. This is reflected in eg the way the European Commission presents the role of the TSC on Procurement, which is only envisaged as a feeder mechanism towards discussions at the Trade Partnership Committee, seen as the ‘principal formation for trade matters’.[13] This is also reflected in the current UK Government’s view of the TSC on Procurement.[14] Logically, it should also be the forum for the setting of common approaches to the UK and EU’s cooperation in the international promotion of the mutual liberalisation of public procurement markets (Art 294(1)), and the most suitable forum for the mutual provision of annual statistics on covered procurement (Art 294(2)).

07. It should also be stressed that the TSC on Procurement and the Trade Partnership Committee are not involved in the procedures leading to the modification or rectification of the market access commitments of the UK and the EU under the TCA (Arts 289 to 293). Indeed, these procedures are foreseen as strictly bilateral. While it is possible (and likely) that any discussions and possible consultations launched by one of the Parties in relation to market access commitments are initially hosted in the TSC on Procurement, it is clear that the latter has no decision-making powers. It is also clear that the only power of the Trade Partnership Committee in relation to market access commitments is to formally amend the relevant Sub-section under Section B of Annex 25 once these have been mutually agreed, or as a result of a final decision ending a dispute (Art 293).

08. On the whole, the TCA does not grant any of its bodies decision-making powers regarding the regulation of public procurement or the Parties’ mutual market access commitments and, as a consequence, any future changes and any related disputes will remain strictly inter-governmental, with the TSC on Procurement and the Trade Partnership Committee simply serving as fora for the discussion of the relevant issues and for the exploration of amicable solutions that could prevent the launch of a formal dispute under Title I of Part Six of the TCA.

3. Dual dispute resolution regime

09. Should disputes not be resolved amicably, it should be noted that a dual regime applies to those TCA procurement obligations that are ‘substantially equivalent’ to obligations resulting from the WTO GPA. Given that the TCA procurement rules are clearly based on the GPA (a GPA+ approach), and that a significant part of the market access commitments directly derive from the UK’s and the EU’s GPA coverage schedules, this is likely to be the case for the majority of potential disputes arising from the implementation of the TCA.

10. In connection to the dual dispute resolution regime, it should be noted that Article XX of the GPA provides that the WTO's Understanding on Rules and Procedures Governing the Settlement of Disputes also applies to disputes under the GPA. Therefore, as foreseen in Article 737 TCA, the party seeking redress would be able to select the forum in which to settle the dispute and, once chosen, it would be barred from initiating procedures under the other international agreement, unless the forum selected first failed to make findings for procedural or jurisdictional reasons. It is difficult to establish which of the two available routes is more likely to be used in case of a dispute under the TCA procurement rules, but it would seem that the TCA-specific dispute resolution mechanism would allow the UK and the EU to have their interests taken into account within the specific context of their bilateral relationship, rather than in the broader context of the multilateral relationships emerging from the WTO GPA. In that regard, this could be the preferable route.

4. How to best utilise these fora to address legal and policy developments

11. As in most other trade areas, one of the challenges in keeping open trade in procurement markets across the UK and the EU concerns non-tariff barriers. This is clearly recognised in the TCA, for example in relation to documentary requirements applicable to the participation in tenders for public contracts (Art 280),[15] or concerning conditions for participation such as prior experience (Art 281).[16] One of the main risks going forward is that, in seeking to leverage public expenditure to achieve environmental and social goals (but also economic recovery goals, post-pandemic), both the UK and the EU are likely to create both mandatory and discretionary requirements that will increase compliance costs for economic operators seeking to tender for public contracts both in the EU and in the UK, as well as potential (implicit) preferential treatment for domestic suppliers. A clear recent example can be found in the UK’s policy on ‘net zero’ for major government contracts, which seeks to impose ‘as a selection criterion, a requirement for bidding suppliers to provide a Carbon Reduction Plan (using the template at Annex A) confirming the supplier’s commitment to achieving Net Zero by 2050 in the UK, and setting out the environmental management measures that they have in place and which will be in effect and utilised during the performance of the contract’.[17] This could disadvantage tenderers from jurisdictions without such a requirement that have no specific plans, as well as those with net zero plans with a different time horizon, or with a different geographical concentration, which could nonetheless be in compliance with the requirements applicable in the EU. It is easy to imagine alternative scenarios where the disadvantage could be against UK-based tenderers, or their EU subsidiaries. Therefore, one of the main roles of the TCA fora, and in particular the TSC on Procurement, should be to minimise trade friction resulting from this type of initiative, ideally through the discussion of options and the co-creation of acceptable common solutions ahead of their adoption in law or policy. There is a potential overlap between the work on general standardisation issues, covered by other parts of the TCA, and procurement-specific standardisation. However, given the current trend of leveraging procurement to achieve environmental, social and economic/industrial goals, it is likely that a large number of non-tariff barriers will be procurement-specific.

12. Conversely, another challenge in procurement regulation going forward will be tackling issues that exceed the regulatory capacity and purchasing power of a single State, or that are much more likely to be addressed successfully as part of an international collaboration. The development of adequate frameworks for the procurement of Artificial Intelligence (AI), and for the deployment of AI in the management of procurement, are clear examples, and areas where the UK has positioned itself as a frontrunner.[18] In these fields, seeking regulatory collaboration would be to both the UK’s and the EU’s advantage, as a joint approach to procurement regulation should encompass not only market liberalisation (Art 294), but also these broader issues.

13. As emerges from the previous two paragraphs, it seems that the best use of the institutional mechanisms created by the TCA is one premised on a proactive approach to maintaining and developing regulatory convergence. This could work well, given the starting point of almost complete alignment of UK and EU procurement regulation and policy,[19] and pre-empt the emergence of disputes resulting from uncoordinated legislative and policy reforms.

14. By contrast, one of the worst possible uses of the TCA institutional framework would be to channel through it disputes concerning single tender procedures, which would almost unavoidably lead to a quick escalation of highly politicised disputes. Both parties should be able to resist political pressures to bring to these fora issues that must be adjudicated through the domestic review procedures implementing the obligations resulting from Article 286 TCA (and equivalent WTO GPA obligations).

5. UK position and participation of the Devolved Administrations

15. There is a Provisional Public Procurement Common Framework of March 2021 that sets out proposed four-nation ways of working for domestic and international public procurement policy and legislation. It is intended to guide the actions of policy officials of all four nations as they develop policies on public procurement.[20] Notably, there were two sections of the Common Framework that were still under discussion at the time of its publication: one on UK Government engagement with the Devolved Administrations on WTO GPA business; and another one to reflect International Agreements.

16. It seems impractical to have different arrangements for the participation of the Devolved Administrations on WTO GPA and on UK-EU TCA business, in particular given the significant overlap between both sets of regulatory instruments. A common approach should be developed for both situations and included in the final version of the Public Procurement Common Framework. It would seem advisable to have a flexible system whereby the standard procedure is for a single four-nations position to be agreed ahead of the UK’s engagement in discussions with the EU in the context of the TCA institutions, but where it should also be possible for a representative of a Devolved Administration to directly participate in discussions concerning nation-specific matters. This could be the case, for example, where one of the four nations took a different approach to a specific issue and that was queried by the EU.

___________________

Biographical information

Professor Albert Sanchez-Graells is a Professor of Economic Law at the University of Bristol Law School and Co-Director of its Centre for Global Law and Innovation. He is also a former Member of the European Commission Stakeholder Expert Group on Public Procurement (2015-18) and of the Procurement Lawyers’ Association Brexit Working Group (2017), as well as a current Member of the European Procurement Law Group.

Albert is a specialist in European economic law, with a focus on competition law and procurement. His research concentrates on the way the public sector interacts with the market and how it organises the delivery of public services, especially healthcare. He is also interested in general issues of sectorial regulation and, more broadly, in the rules supporting the development and expansion of the European Union's internal market, as well as the EU’s trade relationships with third countries, including the UK.

His influential publications include the leading monograph Public Procurement and the EU Competition Rules, 2nd edn (Bloomsbury-Hart, 2015). He has also co-authored Shaping EU Public Procurement Law: A Critical Analysis of the CJEU Case Law 2015–2017 (Wolters-Kluwer, 2018), edited Smart Public Procurement and Labour Standards. Pushing the Discussion after RegioPost (Hart, 2018), and coedited Reformation or Deformation of the Public Procurement Rules (Edward Elgar, 2016), Transparency in EU Procurements. Disclosure Within Public Procurement and During Contract Execution (Edward Elgar, 2019) and European Public Procurement. Commentary on Directive 2014/24/EU (Edward Elgar, 2021). Most of his working papers are available at http://ssrn.com/author=542893 and his analysis of current legal developments is published in his blog http://www.howtocrackanut.com.

___________________

Notes

[All websites last visited on 20 July 2021.]

[1] Prometeia SpA, BIP Business Integration Partners – Spa, Economics for Policy a knowledge Center of Nova School of Business and Economics Lisboa, Study on the measurement of cross-border penetration in the EU public procurement market. Final report (Mar 2021), available at https://op.europa.eu/s/pmUR.

[2] These figures aggregate direct and indirect procurement as reported in Table 2-5 of the Report (n 1).

[3] These figures aggregate direct and indirect procurement as reported in Tables 2-6 and 2-8 of the Report (n 1).

[4] Report (n 1) 18.

[5] WTO, Revised Agreement on Government Procurement and WTO related legal instruments (2012) available at https://www.wto.org/english/docs_e/legal_e/rev-gpr-94_01_e.pdf.

[6] Trade and Cooperation Agreement between the United Kingdom of Great Britain and Northern Ireland, of the one part, and the European Union and the European Atomic Energy Community, of the other part, made in Brussels and London, 30 December 2020. Treaty Series No.8 (2021), available at https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/982648/TS_8.2021_UK_EU_EAEC_Trade_and_Cooperation_Agreement.pdf.

[7] In part, this is a result of incorporating the UK’s and EU27’s market access commitments under the WTO GPA; Article 277(1) UK-EU TCA. See A Sanchez-Graells, ‘Public procurement regulation’, in H Kassim, S Ennis and A Jordan (eds), UK Regulation after Brexit (Feb 2021) 23-24, available at https://ukandeu.ac.uk/wp-content/uploads/2021/02/UK-regulation-after-Brexit.pdf.

[8] See Table 1-6 of the Report (n 1).

[9] Cabinet Office, Green Paper Transforming Public Procurement (15 Dec 2020), available at https://www.gov.uk/government/consultations/green-paper-transforming-public-procurement. For analysis, see A Sanchez-Graells, “The UK’s Green Paper on Post-Brexit Public Procurement Reform: Transformation or Overcomplication?” (2021) 16(1) European Procurement & Public Private Partnership Law Review 4-18, pre-print version available at https://ssrn.com/abstract=3787380.

[10] Subsidy control issues are not covered in detail in this written submission, as they are the object of parallel regulation in the UK-EU TCA.

[11] Environmental, social and labour considerations are not covered in detail in this written submission, as they are the object of parallel regulation in the UK-EU TCA.

[12] For a general description of the governance and dispute resolution mechanisms in the TCA, see House of Commons Library (S Fella), The UK-EU Trade and Cooperation Agreement: governance and dispute settlement (19 February 2021) Briefing Paper Num. 9139, available at https://researchbriefings.files.parliament.uk/documents/CBP-9139/CBP-9139.pdf, and idem, ‘Governing the new UK-EU relationship and resolving disputes’ (24 Feb 2021), available at https://commonslibrary.parliament.uk/governing-the-new-uk-eu-relationship-and-resolving-disputes/.

[13] European Commission, Trade Policy, UK fact sheet (undated), available at https://ec.europa.eu/trade/policy/countries-and-regions/countries/united-kingdom/.

[14] See eg answer to written question UIN 25876 of 1 July 2021, available at https://questions-statements.parliament.uk/written-questions/detail/2021-07-01/25876.

[15] ‘Each Party shall ensure that at the time of submission of requests to participate or at the time of submission of tenders, procuring entities do not require suppliers to submit all or part of the supporting evidence that they are not in one of the situations in which a supplier may be excluded and that they fulfil the conditions for participation unless this is necessary to ensure the proper conduct of the procurement.’

[16] ‘Each Party shall ensure that where its procuring entities require a supplier, as a condition for participation in a covered procurement, to demonstrate prior experience they do not require that the supplier has such experience in the territory of that Party.’

[17] Procurement Policy Note 06/21: Taking account of Carbon Reduction Plans in the procurement of major government contracts (15 Jun 2021), available at https://www.gov.uk/government/publications/procurement-policy-note-0621-taking-account-of-carbon-reduction-plans-in-the-procurement-of-major-government-contracts.

[18] Office for Artificial Intelligence, Guidelines for AI procurement (8 Jun 2020), available at https://www.gov.uk/government/publications/guidelines-for-ai-procurement.

[19] Subject to changes derived from the Government’s response to the green paper consultation, above (n 9).

[20] Available at https://www.gov.uk/government/publications/public-procurement-provisional-common-framework.

3 priorities for policy-makers thinking of AI and machine learning for procurement governance

I find that carrying out research in the digital technologies and governance field can be overwhelming. And that is for an academic currently having the luxury of full-time research leave… so I can only imagine how much more overwhelming it must be for policy-makers thinking about the adoption of artificial intelligence (AI) and machine learning for procurement governance, to identify potential use cases and to establish viable deployment strategies.

Prioritisation seems particularly complicated, as managing such a significant change requires careful planning and paying attention to a wide variety of potential issues. However, getting prioritisation right is probably the best way of increasing the chances of success for the deployment of digital technologies for procurement governance — as well as in other areas of Regtech, such as financial supervision.

This interesting speech by James Proudman (Executive Director of UK Deposit Takers Supervision, Bank of England) on 'Managing Machines: the governance of artificial intelligence' focuses precisely on such issues. And I find the conclusions particularly enlightening:

First, the observation that the introduction of AI/ML poses significant challenges around the proper use of data, suggests that boards should attach priority to the governance of data – what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.

Second, the observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.

And third, the acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition that need to be overseen. Boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.

These seem to me directly transferable to the context of procurement governance and the design of strategies for the deployment of AI and machine learning, as well as other digital technologies.

First, it is necessary to create an enabling data architecture and to put significant thought into how to extract value from the increasingly available data. In that regard, there are two opportunities that should not be missed. One concerns the treatment of procurement datasets as high-value datasets for the purposes of the special regime of the Open Data Directive (for more details, see section 6 here), which will require careful consideration of the content and level of openness of procurement data in the context of the domestic transpositions that need to be in place by 17 July 2021. The other, related opportunity concerns the implementation of the new rules on eForms for procurement data publications, which Member States need to adopt by 14 November 2022. Building on the data architecture that will result from both sets of changes—which should be coordinated—will allow for the deployment of data analytics and machine learning techniques. The purposes and goals of such deployments also need to be considered carefully, as well as their potential implications.
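
Purely by way of illustration of the kind of analytics such a data architecture could support — and assuming a hypothetical CSV extract of award notices with columns named buyer_id, bidder_count and award_value, which is my own simplification rather than any prescribed eForms schema — a minimal sketch of a basic competition indicator (the share of single-bid awards per contracting authority) could look as follows:

```python
# Minimal illustrative sketch (hypothetical data layout, not a prescribed
# eForms schema): computes the share of single-bid awards per buyer from a
# CSV extract of award notices with columns buyer_id, bidder_count and
# award_value.
import pandas as pd

def single_bid_share(notices_csv: str) -> pd.DataFrame:
    notices = pd.read_csv(notices_csv)
    notices["single_bid"] = notices["bidder_count"] == 1
    summary = notices.groupby("buyer_id").agg(
        awards=("single_bid", "size"),
        single_bid_awards=("single_bid", "sum"),
        total_value=("award_value", "sum"),
    )
    summary["single_bid_share"] = summary["single_bid_awards"] / summary["awards"]
    return summary.sort_values("single_bid_share", ascending=False)

if __name__ == "__main__":
    # eg flag the ten buyers with the highest share of single-bid awards
    print(single_bid_share("award_notices.csv").head(10))
```

Even a rudimentary indicator of this kind is only feasible where notices are captured as structured, machine-readable data, which is precisely why the design choices around the eForms implementation and the openness of procurement datasets deserve careful attention.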

Second, it seems clear that the changes in the management of procurement data and the quick development of analytics that can support procurement decision-making pile additional training and upskilling needs on the already existing (and partially unaddressed?) challenges of fully consolidating eProcurement across the EU. Moreover, it should be clear that there is no such thing as an objective and value-neutral implementation of technological governance solutions, and that all levels of accountability need to be provided with adequate data skills and digital literacy upgrades in order to check what is being done at the technical level (for crystal-clear discussion, see van der Voort et al, 'Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?' (2019) 36(1) Government Information Quarterly 27-38). Otherwise, governance mechanisms would be at risk of failure due to techno-capture and/or techno-blindness, whether intended or accidental.

Third, there is an increasing need to manage change and the risks that come with it. In a notoriously risk averse policy field such as procurement, this is no minor challenge. This should also prompt some rethinking of the way the procurement function is organised and its risk-management mechanisms.

Addressing these priorities will not be easy or cheap, but these are the fundamental building blocks required to enable the public procurement sector to reap the benefits of digital technologies as they mature. In consultancy jargon, these are the priorities to ‘future-proof’ procurement strategies. Will they be adopted?

Postscript

It is worth adding that the first and second issues, in particular, lend themselves to strong collaborations between policy-makers and academics. As rightly pointed out by Pencheva et al, 'Big Data and AI – A transformational shift for government: So, what next for research?' (2018) Public Policy and Administration, advance access at 16:

... governments should also support the efforts for knowledge creation and analysis by opening up their data further, collaborating with – and actively seeking inputs from – researchers to understand how Big Data can be utilised in the public sector. Ultimately, the supporting field of academic thought will only be as strong as the public administration practice allows it to be.

Public procurement digitalisation: A step forward or two steps back? [guest post by Dr Kirsi-Maria Halonen]

In this guest post, Dr Kirsi-Maria Halonen offers some exploratory thoughts on the digitalisation of public procurement, its difficulties and some governance and competition implications. This post is based on the presentation she gave at a Finnish legal research seminar “Oikeustieteen päivät”, Aalto University, on 28-29 September 2019.

Digitalisation of procurement - background and goals

Digitalisation and e-procurement are considered to enhance the efficiency of the procurement process in the EU’s internal market. In line with the European Commission’s 2017 Procurement Strategy, procurement digitalisation can unlock better and faster transparency across the internal market, thus ensuring that economic operators can become aware of business opportunities, facilitating access to public tenders and disseminating information on the conditions of the award of public contracts.

Beyond mere transparency gains, procurement digitalisation is also expected to increase the integrity of the awarding process and of the public officials involved, thus fostering corruption prevention and good administrative practices. Finally, digitalisation is also expected to open new, more efficient monitoring possibilities both before and after contract execution, as well as to enable the deployment of advanced big data analytics.

Directive 2014/24/EU and procurement digitalisation

Digitalisation and e-procurement are some of the main goals of Directive 2014/24/EU. Since October 2018, these rules have imposed the mandatory use of electronic communications throughout the whole public contract award procedure (eCommunication) and the submission of tenders in electronic form (eSubmission), and they have created detailed rules for procedures meant solely for eProcurement, as well as simplified information exchange mechanisms (such as the ESPD), to facilitate the electronic processing of procurement information.

Although the digital requirements in the Directive do not yet cover pre-award market consultations or post-award contracts and contract amendments, there are some trends indicating that these may be the next areas of procurement digitalisation.

State of the art at Member State level

Many Member States have taken digitalisation and transparency in public procurement even further than the requirements of Directive 2014/24/EU. Many contracting authorities use eProcurement systems for the management of the entire life-cycle of the tendering process. In Finland, there is now consolidated experience with not only an eProcurement system, but also with an open access Government spend database. Similarly, Portugal, Spain, Italy, Slovakia and Poland have also created open access contract registers for all public contracts and contract amendments.

Additionally, many Member States are committed to wider transparency outside the procurement procedures. For example, there is an emerging practice of publication of pre-tendering market consultation documents or audio/video meeting records. It is also increasingly common to provide open access to contract performance documents, such as bills, payments and performance acceptance (eg the UK national action plan on open contracting).

Concerns and opportunities in the digitalisation of procurement

Given the current trends of development of digital procurement, it is necessary to reflect not only on the opportunities that the roll-out of these technologies creates, but also on some concerns that arise from increased transparency and the implications of this different mode of procurement governance. Below are some thoughts on four interrelated dimensions: corruption, SME participation, the adoption of blockchain-based and algorithmic tools, and competition for public contracts.

Corruption

Public procurement and other commercial relationships (eg real estate development) between the public and private sectors are among the most vulnerable to corruption (as repeatedly stressed by the OECD, Transparency International, the Finnish National Bureau of Investigation, etc). In that regard, it seems clear that the digitalisation of procurement and the increased transparency it brings with it can prevent corruption and boost integrity. Companies across the EU become aware of contract awards, so there is less room for national arrangements and protectionism. Digitalisation can make tendering less bureaucratic, thus lessening the need and room for bribes. eProcurement can also prevent (improper) direct communication between the contracting authority and potential tenderers. Finally, the mere existence of electronic documentation makes it easier to track and request documents at a later stage: illegal purchases are not that easy to “hide”.

Yet, even after the roll-out of electronic documentation and contract registers, there will remain issues such as dealing with receipts or fabricating needs for additional purchases, which are recurring problems in many countries. Therefore, while digitalisation can reduce the scope and risk of corruption, it is no substitute for other checks and balances on the proper operation of the procurement function and the underlying expenditure of public funds.

SME participation

One of the goals of Directive 2014/24/EU was to foster procurement digitalisation so as to facilitate SME participation by making tendering less bureaucratic. However, tendering is still very bureaucratic. Sometimes it is difficult for economic operators to find the “right” contracts, as it requires experience not only in identifying, but also in interpreting contract notices. Moreover, the effects of digitalisation are still local due to language barriers – eg in Finland, tendering documents are mostly in Finnish.

Moreover, the uncertainty of winning and the need to put resources into tendering are the main reasons why SMEs do not bid (Jääskeläinen & Tukiainen, 2018); and this is not resolved by digital tools. On the contrary, and in a compounding manner, SMEs can be disadvantaged in eProcurement settings. SMEs can rarely compete on price, yet the use of e-procurement systems “favours” the use of a price-only criterion (as compared to a price-quality ratio), since quality assessment requires manual assessment of tenders. The net effect of digitalisation on SME participation is thus less than clear cut.

Blockchain-based and algorithmic tools

The digitalisation of procurement creates new possibilities for the use of algorithms: it opens endless possibilities to implement algorithmic tests for choosing “the best tender” and to automate the procurement of basic products and services; it allows for enhanced control of price adjustments in e-catalogues (which currently requires manual labour); and it can facilitate monitoring, eg finding signs of bid rigging, cartels or corruption. In the future, transparent algorithms could also attack corruption by minimising or removing human participation from the course of the procurement procedure.
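By way of illustration of the monitoring point, a minimal sketch of a simple screening routine over the bid prices submitted in a single tender follows. The thresholds and the data structure are assumptions made for the sake of the example, not a validated cartel-detection method.

    # Illustrative screening sketch: thresholds and data structure are assumptions,
    # not a validated cartel-detection method.
    from statistics import mean, pstdev

    def bid_rigging_flags(bids, cv_threshold=0.03, gap_threshold=0.15):
        """Return simple red flags for the list of bid prices submitted in one tender."""
        flags = []
        if len(bids) < 2:
            return ["single bid"]
        if pstdev(bids) / mean(bids) < cv_threshold:
            flags.append("bids unusually close together")
        ordered = sorted(bids)
        if (ordered[1] - ordered[0]) / ordered[0] > gap_threshold:
            flags.append("large gap between winning bid and runner-up")
        return flags

    print(bid_rigging_flags([100.0, 101.0, 100.5]))   # clustered bids
    print(bid_rigging_flags([80.0, 100.0, 101.0]))    # suspiciously isolated low bid

Such screens only become meaningful at scale, which again presupposes structured, machine-readable tender data.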

Digitalisation also creates possibilities for using blockchain: for example, to manage company records, official statements and documents, which can be made available to all contracting authorities across the EU. However, this also creates risks linked to eg EU-wide blacklists: a minor infringement in one Member State could leave an economic operator unable to participate in public tenders throughout the EU.

The implications of the adoption of both algorithmic and blockchain-based tools still require further thought and analysis, and this is likely to remain a fertile area for practical experimentation and academic debate in the years to come.

Competition

Open public contract registers have become part of the public procurement regime in EU Member States where corruption is high or where there is a tradition of high levels of public sector transparency. The European Commission is pushing for their creation in all EU jurisdictions as part of its 2017 Procurement Strategy. These contract registers aim to enhance the integrity of the procurement system and of public officials, and to allow public scrutiny of public spending by citizens and the media.

However, these registers can facilitate collusive agreements. Indeed, easier access to detailed tendering information facilitates the monitoring of existing cartels by their members: it provides the means to make sure that “cartel discipline” is being followed. Moreover, it may facilitate the establishment of new cartels or lead to higher, non-market-based pricing even without specific collusive agreements.

Instead of creating large PDF-format databases of scanned public contracts, the European Commission indeed encourages Member States to create contract registers with workable datasets (user-friendly, open, downloadable and machine-readable information on contracts, and especially on the prices and parties to the contract). This creates huge risks of market failure and of tendering at prices that are detached from market prices. It thus requires further thought.

Conclusions

Digitalisation has transformed, and keeps transforming, the public procurement regime and its procedures. It is usually considered a positive change: less bureaucracy, enhanced efficiency, better and faster communication and a strengthened integrity of the public sector. However, digitalisation keeps challenging the public procurement regime through eg automated processes and the production of detailed data, leaving less room for qualitative assessments. One can wonder whether this contributes to the higher-level objectives of increasing SME participation and generating better value for money.

Digitalisation brings new tools for monitoring contracting authorities and for detecting competition distortions and integrity failures. However, there is a clear risk in providing “too much” and “too detailed” pricing and contract information to market operators – hence lowering the threshold for different collusive practices. It is thus necessary to reconsider current regulatory trends and, perhaps, to develop a more nuanced regulatory framework for the transparency of procurement information in a framework of digitalised governance.

Guest blogger

Dr Kirsi-Maria Halonen is a Doctor of Laws and Adjunct Professor, Senior Lecturer in Commercial Law at the University of Lapland. She is also a current Member of the European Commission’s Stakeholders Expert Group on Public Procurement (SEGPP, E02807), the Research Council at the Swedish Competition Authority, the Finnish Ministry of Finance national PP strategy working group (previously also the national general contract terms for PP (JYSE) working group), the Finnish Public Procurement Association, of which she is a board member and previous chair, and the European Procurement Law Group (EPLG).

In addition to public procurement law, Kirsi-Maria is interested in contract law, tort law, corruption and transparency matters as well as state aid rules. She is the author of several articles (both in English and in Finnish) and a few books (in Finnish). Most recently, she has co-edited Transparency in EU Procurements. Disclosure within Public Procurement and during Contract Execution, vol 9 European Procurement Law Series (Edward Elgar, 2019), together with Prof R Caranta and Prof A Sanchez-Graells.

Reflecting on data-driven and digital procurement governance through two elephant tales

Elephants in a 13th century manuscript. THE BRITISH LIBRARY/ROYAL 12 F XIII

I have uploaded to SSRN the new paper ‘Data-driven and digital procurement governance: Revisiting two well-known elephant tales’ (21 Aug 2019), which I will present at the Annual Conference of the IALS Information Law & Policy Centre on 22 November 2019.

The paper condenses my current thoughts about the obstacles to the deployment of data-driven digital procurement governance due to a lack of reliable, quality procurement data sources, as well as my skepticism about the potential for blockchain-based solutions, including smart contracts, to have a significant impact in a public procurement setting where the public buyer is extremely unlikely to give up centralised control of the procurement function. The abstract of the paper is as follows:

This paper takes the dearth of quality procurement data as an empirical point of departure to assess emerging regulatory trends in data-driven and digital public procurement governance and, in particular, the European Commission’s ambition for the single digital procurement market. It resorts to two well-known elephant tales to send a message of caution. It first appeals to the image of medieval bestiary elephants to stress the need to develop a better data architecture that reveals the real state of the procurement landscape, and for the European Commission to stop relying on bad data in the Single Market Scoreboard. The paper then assesses the promises of blockchain and smart contracts for procurement governance and raises the prospect that these may be new white elephants that do not offer significant advantages over existing sophisticated databases, or beyond narrow back-office applications—which leaves a number of unanswered questions regarding the desirability of their implementation. The paper concludes by advocating for EU policymakers to concentrate on developing an adequate data architecture to enable digital procurement governance.

If nothing else, I hope the two elephant tales are convincing.

Oracles as a sub-hype in blockchain discussions, or how my puppy helps me get to 10,000 steps a day

Photo: Rob Alcaraz/The Wall Street Journal.

The more I think about the use of blockchain solutions in the context of public procurement governance—and, more generally, of public services delivery—the more I find that the inability of blockchain technology to reliably connect to the ‘real world’ is bound to restrict any potentially useful applications to back-office functions and the procurement of strictly digital assets.

This is simply because blockchain can only deliver its desirable effects (tamper-evident record-keeping and the automated execution of smart contracts built on top of it) to the extent that it does not require off-chain inputs. Blockchain is also structurally incapable of generating off-chain outputs by itself.

This is increasingly widely known and is generating a sub-hype around oracles—which are devices aimed at plugging blockchains into the ‘real world’, either by feeding the blockchain with data, or by outputting data from the blockchain (as discussed eg here). In this blog post, I reflect on the minimal impact that I think the development of oracles is likely to have in the context of public procurement governance.

Why would blockchain be interesting in this context?

Generally, the potential for the use of blockchain and blockchain-enabled smart contracts to improve procurement governance is linked to the promise that they can help prevent corruption and mistakes through the automation of decision-making along the procurement process and the execution of public contracts, and through the immutability (rectius, tamper-evidence) of procurement records. There are two main barriers to the achievement of such improvements over current processes and governance mechanisms. One concerns transaction costs and information asymmetries (as briefly discussed here). The other concerns the massive gap between the virtual on-chain reality and the off-chain real world—which oracles are trying to bridge.

The separation between on-chain and off-chain reality is paramount to the analysis of governance issues and of the impact blockchain can have. If blockchain can only displace the focus of potential corrupt or mistaken intervention—by the public buyer, or by public contractors—but not eliminate such risks, its potential contribution to a revolution of procurement governance shrinks by several orders of magnitude. So it is important to assess the extent to which blockchain can be complemented with other solutions (oracles) to achieve the elimination of points of entry for corrupt or mistaken activity, rather than their mere displacement or substitution.

Oracles’ vulnerabilities: my puppy wears my fitbit

In simple terms, oracles are data interfaces that connect a blockchain to a database or a source of data (for a taxonomy and some discussion, see here). This makes them potentially unreliable as (i) the oracle can only be as good as the data it relies on and (ii) the oracle can itself be manipulated. There are thus two main sources of oracle vulnerability, which automatically translate into blockchain vulnerability.

First, the data can be manipulated—like when I prefer to sit and watch some TV rather than go for a run, and tie my fitbit to my puppy’s collar so that, by midnight, I have still achieved my 10,000 daily steps.* Second, the oracle itself can be manipulated because it is a piece of software or hardware that can be tampered with, perhaps in a way that is not readily evident and whose uncovering requires some serious IT forensics—like getting a friend to crack fitbit’s code and add 10,000 daily steps to my database without me even needing to charge my watch.**
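A toy sketch may help fix ideas about where these two vulnerabilities sit; all names and values are invented for illustration and stand in for whatever device and software would play these roles in practice.

    # Toy illustration of the two vulnerability points; all names and values are invented.

    def fitness_tracker_reading() -> int:
        # (1) data manipulation: the device faithfully reports whatever it measures,
        #     even if the watch is strapped to the puppy rather than to me.
        return 10_250

    def steps_oracle() -> int:
        # (2) oracle manipulation: this is ordinary software and could be tampered with
        #     to report any value, regardless of what the tracker actually measured.
        return fitness_tracker_reading()

    def smart_contract_reward(daily_steps: int) -> bool:
        # The on-chain logic is only as trustworthy as the off-chain value it receives.
        return daily_steps >= 10_000

    print(smart_contract_reward(steps_oracle()))  # True, whether or not I actually ran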

Unlike when these issues merely concern the extent to which I lie to myself about my healthy lifestyle, these two vulnerabilities are highly problematic from a public governance perspective. Unless the data used in the interaction with the blockchain is itself automatically generated in a way that cannot be manipulated (and this starts to point at a mirror-within-a-mirror situation, see below), the effect of implementing a blockchain plus an oracle simply seems to be to displace the governance focus, and the controls that need to come with it, towards the source of the data and the devices used to collect it.

But oracles can get better! — sure, but only to deal with data

The sub-hype around oracles in blockchain discussions basically follows the same trend as the main hype around blockchain. In the same way that blockchain is assumed to be bound to revolutionise everything because it will get so much better than it currently is, there are emerging arguments about the almost boundless potential for oracles to connect the real world to the blockchain in much better ways. I do not have the engineering or futurology credentials necessary to pass judgement on this, but it seems plain to me that—unless we want to add an additional layer of robotics (and pretty evolved robotics at that), so that we consider blockchain+oracle+robot solutions—any and all advances will remain limited to improving the way data is generated/captured and exploited within and outside the blockchain.

So, for everything that is not data-based or data-transformable (such as the often-used example of event tickets, which in the end get plugged back into a database that determines their effects in the real world)—or, in other words, where moving digital tokens around does not generate the necessary effects in the real world—even much more advanced blockchain+oracle solutions are likely to remain of limited use in the context of procurement and the delivery of public services. Not because the applications are not (technically) possible, but because they generate governance problems that merely replace the current ones. And the advantage is not necessarily obvious.

How far can we displace governance problems and still reap some advantages?

What do I mean when I say that the advantage is not necessarily obvious? Well, imagine the possibility of having a blockchain+oracle combination control the inventory of a given consumable, so that the oracle feeds information into the blockchain about the existing level of stock and about new deliveries made by the supplier, and automated payments are made eg on a per-available-unit basis. This could be seen as a possible application that avoids the need for other ways of controlling the execution of the contract—or even the need to procure the consumable in the first place, if a smart contract in the blockchain (the same, or a separate one) is automatically buying it on the basis of a closed system (eg a framework agreement or dynamic purchasing system based on electronic catalogues) or even in the ‘open market’ of the internet. Would this not be advantageous from a governance perspective?
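To picture the scenario before answering, here is a rough sketch of that automated per-unit payment logic, with a hypothetical warehouse sensor standing in for the oracle; all names, prices and volumes are invented.

    # Rough sketch of oracle-driven per-unit payment; the sensor, prices and volumes are invented.

    UNIT_PRICE = 2.50        # per-unit price fixed in the (smart) contract
    REORDER_LEVEL = 100      # stock level that triggers a new order

    def stock_sensor_units() -> int:
        # Oracle input: whatever the warehouse sensor reports, accurate or not.
        return 80

    def settle_delivery(units_delivered: int) -> float:
        # Automated payment on a per-available-unit basis.
        return units_delivered * UNIT_PRICE

    if stock_sensor_units() < REORDER_LEVEL:
        delivered = 200                       # also reported by the (trusted?) oracle
        print(f"Order triggered; payment released: {settle_delivery(delivered):.2f}")

Every control that today applies to invoices and goods-received notes simply re-emerges as a control over the sensor and the reported delivery figures.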

Well, I think it would be a matter of degree because there would still need to be a way of ensuring that the oracle is not tampered with and that what the oracle is capturing reflects reality. There are myriad ways in which you could manipulate most systems—and, given the right economic incentives, there will always be attempts to manipulate even the most sophisticated systems we may want to put in place—so checks will always be needed. At this stage, the issue becomes one of comparing the running costs of the system. Unless the cost of the blockchain+oracle+new checks (plus the cybersecurity needed to keep them up and properly running) is lower than the cost of existing systems (including inefficiencies derived from corruption and mistakes), there is no obvious advantage and likely no public interest in the implementation of solutions based on these disruptive technologies.

Which leads me to the new governance issue that has started to worry me: the control of ‘business cases’ for the implementation of blockchain-based solutions in the context of public procurement (and public governance more generally). Given the lack of data and the difficulty in estimating some of the risks and costs of both the existing systems and any proposed new blockchain solutions, who is doing the math and on the basis of what? I guess convincingly answering this will require some more research, but I certainly have a hunch that not much robust analysis is going on…

_____

* I do not have a puppy, though, so I really end up doing my own running…

** I am not sure this is technically doable, but hopefully it works for the sake of the example…

An incomplete overview of (the promises of) GovTech: some thoughts on Engin & Treleaven (2019)

I have just read the interesting paper by Z Engin & P Treleaven, 'Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies' (2019) 62(3) The Computer Journal 448–460, https://doi.org/10.1093/comjnl/bxy082 (available on open access). The paper offers a very useful, but somewhat inaccurate and slightly incomplete, overview of the data science automation being deployed by governments world-wide (ie GovTech), including the technologies of artificial intelligence (AI), the Internet of Things (IoT), big data, behavioral/predictive analytics, and blockchain. I found their taxonomy of GovTech services particularly thought-provoking.

Source: Engin & Treleaven (2019: 449).

In the eyes of a lawyer, the use of the word ‘Government’ to describe all these activities is odd, in particular concerning the category ‘Statutes and Compliance’ (at least on the Statutes part). Moving past that conceptual issue—which reminds us once more of the need for more collaboration between computer scientists and social scientists, including lawyers—the taxonomy still seems difficult to square with an analysis of the use of GovTech for public procurement governance and practice. While some of its aspects could be subsumed as tools to ‘Support Civil Servants’ or under ‘National Public Records’, the transactional aspects of public procurement and the interaction with public contractors seem more difficult to place in this taxonomy (even if the category of ‘National Physical Infrastructure’ is considered). Therefore, either additional categories or more granularity is needed in order to have a more complete view of the types of interaction between technology and public sector activity (broadly defined).

The paper is also very limited regarding LawTech, as it primarily concentrates on online dispute resolution (ODR) mechanisms, which are only a relatively small aspect of the potential impact of data science automation on the practice of law. In that regard, I would recommend reading the (more complex, but very useful) book by K D Ashley, Artificial Intelligence and Legal Analytics. New Tools for Law Practice in the Digital Age (Cambridge, CUP, 2017).

I would thus recommend reading Engin & Treleaven (2019) with an open mind, and using it more as a collection of examples than a closed taxonomy.

Governance, blockchain and transaction costs

Bitcoin Traces / Martin Nadal (ES)

Blockchain is attracting increasing attention as a new technology capable of ‘revolutionising’ governance, both in the private and the public sector. In simple terms, blockchain is seen as an alternative to the way information is (securely) stored and rules are enforced, regardless of whether those rules are agreed in a contract, or result from legislation or administrative decision-making.

Some examples include the governance of illegal agreements to distort competition (cartels) (see eg this paper by Thibault Schrepel), or the management of public procurement (eg in this paper by Hardwick, Akram and Markantonakis, or these thoughts by Bertrand Maltaverne). These examples explore how the technology allows for the creation of ‘self-executing’ sets of rules that would be capable of overcoming so far intractable governance problems (mostly about trust: eg among the cartelists, or in public officials).

This could create opposite effects in the governance of public procurement. For instance, it could make the detection and correction of bid rigging very difficult (if not impossible) or, conversely, allow for a corruption-free procurement architecture. Therefore, the impact of the (in principle neutral) technology on existing governance systems can ultimately be seen as an ‘arms race’ between the private and the public sector: ultimately, whoever gets ahead will be able to exploit the technology to its advantage.

This justifies some calls for both investment in new technologies by the public sector (as the private sector has its own incentives for investment), and regulation of private (and public) use of the technology. I have no objection to either of these recommendations. However, I think there is an important piece of the puzzle that tends to go missing in this type of analysis.

Indeed, most of this discussion brushes over the important limitations of smart contracts. These limitations are linked both to the fact that the computational logic underpinning smart contracts can only operate on the basis of complete information/rules, and to the fact that the computing power necessary to implement smart contracts can currently only process extremely simple contracts.

The latter issue may be dismissed as a mere ‘matter of time’, but given that it has been estimated that it is currently only possible to create a blockchain-based procurement process capable of holding 700-word-long tender documentation (Hardwick, Akram and Markantonakis, 2018: 6), there seems to be a very long road ahead, even accepting Moore’s Law on the growth of computational power.

The first issue, though, is more difficult to set aside. As rightly stressed by Davidson, De Filippi and Potts in their ‘must read’ paper, ‘the obvious problem is that blockchains only work on complete contracts, whereas most in-the-world firms ... are largely (entirely?) made of incomplete contracts’; ‘a blockchain is an economic world of complete contracts’ (2016: 9).
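A toy illustration may help convey what ‘complete contracts only’ means in practice; the clauses, events and figures below are invented. A smart contract must enumerate in advance every contingency it will ever react to, and anything unforeseen simply has no branch.

    # Toy illustration: a 'complete contract' must enumerate every contingency up front.
    PENALTY_PER_DAY = 500  # invented figure

    def settle(event: str, days_late: int = 0) -> str:
        if event == "delivered_on_time":
            return "pay full price"
        if event == "delivered_late":
            return f"pay price minus {days_late * PENALTY_PER_DAY} penalty"
        # Any contingency not written into the code (force majeure, specification disputes,
        # partial performance...) cannot be handled: the 'contract' is simply silent.
        raise ValueError(f"unforeseen contingency: {event}")

    print(settle("delivered_late", days_late=3))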

In my view, this should raise significant doubts as to the likely extent of the ‘revolution’ that blockchain can create in complex settings where the parties structurally face incomplete information. Procurement is clearly one such setting. There are a few reasons for this, my top three being that:

  • First, the structural incompleteness of information in a setting where the public buyer seeks to use the public tender as a mechanism of information revelation cannot be overstated. If it is difficult for contracting authorities to design ‘sufficiently objective’ technical specifications and award criteria/evaluation methods, the difficulties of having to do so under the strictures of computational logic are difficult to imagine.

  • Second, the volume of entirely digital procurement (that is, the procurement of entirely digital or virtual goods and services) is bound to remain marginal, which creates the additional problem of connecting the blockchain to the real world, with all the fallibility and vulnerability that so-called oracles bring with them.

  • Third, blockchain technology in itself creates an additional layer of transaction costs—at least at the stage of setting up the system and ‘migrating’ to a blockchain-based procurement mechanism. Bearing in mind the noticeable and pervasive difficulties in the much simpler transition to e-procurement, this also seems difficult to overstate.

Therefore, while there will clearly be improvements in specific (sub)processes that can be underpinned by blockchain instead of other cryptographic/cybersecurity solutions, I remain quite skeptical of a blockchain-based revolution of procurement governance. It may be that I still have not advanced enough in my research to identify the 'magic technological solution’ that can do away with transaction costs, so any pointers would be most appreciated.

Procurement governance and complex technologies: a promising future?

Thanks to the UK’s Procurement Lawyers’ Association (PLA) and in particular Totis Kotsonis, on Wednesday 6 March 2019, I will have the opportunity to present some of my initial thoughts on the potential impact of complex technologies on procurement governance.

In the presentation, I will aim to critically assess the impacts that complex technologies such as blockchain (or smart contracts), artificial intelligence (including big data) and the internet of things could have for public procurement governance and oversight. Taking the main risks of maladministration of the procurement function (corruption, discrimination and inefficiency) on which procurement law is based as the analytical point of departure, the talk will explore the potential improvements of governance that different complex technologies could bring, as well as any new governance risks that they could also generate.

The slides I will use are at the end of this post. Unfortunately, the hyperlinks do not work, so please email me if you are interested in a fully-accessible presentation format (a.sanchez-graells@bristol.ac.uk).

The event is open to non-PLA members. So if you are in London and fancy joining the conversation, please register following the instructions in the PLA’s event page.

Are English Universities likely to stop having to comply with EU public procurement law?

One of the elements implicit in the on-going discussion about higher education reform in England concerns the extent to which changes in the funding and governance structure of HEFCE (to be transformed into the Office for Students, or any other format that results from the consultation run by BIS) can free English universities from their duty to comply with EU public procurement law. 

The issue is recurring in the subsequent waves of higher education reform in England, and the same debate arose last summer following BIS statements that the most recent reform (lifting the cap on student numbers) would relieve English universities of their duty to comply with EU public procurement law (see discussion here).

Overall, then, there is a clear need to clarify to what extent English universities are actually and currently obliged to comply with EU public procurement rules, both as buyers and as providers of services. That analysis can then inform the extent to which in the future English universities are likely to remain under a duty to comply with EU public procurement rules.

In this study we provide an up-to-date assessment of the situations in which universities are bound by public procurement rules, as well as of the combined changes that market-based university financing mechanisms can bring about in relation to the regulation of university procurement and to the treatment of the financial support they receive under the EU State aid rules. National differences in funding schemes are likely to trigger different answers in different EU jurisdictions. This study uses the situation of English universities as a case study.

The first part focuses on the role of universities as buyers. The traditional position has been to consider universities bound by EU public procurement rules either as state authorities, or because they receive more than 50% public funding. In the latter case, recent changes in the funding structure can create opportunities for universities to free themselves from compliance with EU public procurement rules.

In the second part, we assess the position of universities as providers. Here the traditional position has been that the State can directly mandate universities to conduct teaching and research activities. However, new EU legislation contains specific provisions about how and when teaching and research need to be procured if they are of an economic nature. Thus, accepting the exclusion of university services from procurement requirements as a rule of thumb is increasingly open to legal challenge.

Finally, the study assesses if and to what extent universities can benefit from exemptions for public-public cooperation or in-house arrangements, either as sellers or buyers.

The full paper is available on SSRN: http://ssrn.com/abstract=2692966.

We have submitted our research to BIS as part of the consultation on the green paper. We hope that it and the insights it offers can inform the discussion on the new mechanisms for the allocation of the teaching grant to English universities (and particularly the discussion around Q18 of the consultation).

Further thoughts on the competition implications of public contract registries: rebuttal to Telles

Some 10 days ago, Dr Pedro Telles and I engaged in another of our procurement tennis games. This time, the topic of contention is the impact of public contract registers on competition. I published a first set of arguments (here) and Pedro replied (here) mainly stressing that I had not paid enough attention to the potential upsides of such registers. 

Pedro advocated some potential sources of economic benefits derived from the use of public contract registers aimed at full transparency of tender and post-award procurement documentation, of which I would pick two: 1) reduced opportunities for price arbitrage, and 2) more scope for antitrust intervention by competition authorities possessing better data on what is going on in procurement markets. His arguments are well developed and can be seen as attractive. However, on reflection, there are still reasons why they do not necessarily work. In this post, I address these two issues and explain why I am still sceptical that they can result in any actual economic upsides. I expect Pedro to follow up with more arguments, which will certainly be welcome.

1) What about the 'single market theory = law of one price' approach?
The discussion on price arbitrage implicitly rests on the economic 'law of one price' whereby, in simple terms, a specific good should be traded at a single price in all locations. However, that 'economic law' rests on a large number of assumptions, which are particularly fit for commodity markets and ill-suited to complex contracts for goods and, most definitely, for services.

In fact, even in highly competitive markets for commoditised products, the law of one price does not hold, at least if conceived in strong terms (ie strictly one price for a given good) instead of relaxing it to require a convergence or clustering of prices [for an interesting empirical paper stressing these insights, see K Graddy, 'Testing for Imperfect Competition at the Fulton Fish Market' (1995) 26(1) The RAND Journal of Economics 75-92]. 

Thus, focussing on arbitrage issues for anything other than very homogeneous commodities traded under standard contract clauses fails to give due recognition to the assumptions underlying the 'law of one price'. Pedro acknowledges this: "yes, I am talking about a commodity, but then a lot of public procurement is made around commodities, including oil". On this point, however, I think the data does not support his views.

According to the 2011 PwC-London Economics-Ecorys study for the European Commission 'Public procurement in Europe-Cost and effectiveness', commodities and manufactured goods only account for about 10% in value and 14% in number of procurement procedures subjected to the EU rules (see here page 45). Thus, the issue of price arbitrage is certainly not of first magnitude when the effects of public contract registers are assessed from an economic perspective.

2) What about more intervention by competition authorities based on better (big) data?
On this point, Pedro and I agree partially. It is beyond doubt that, as he puts it, there are "potential upsides of having more data available in terms of cartel fighting. What can be done when reams and reams of contract data are available? You can spot odd behaviours. For example, you can corroborate a whistleblower account and you can then check if certain collusive practice/tactic is happening in other sectors as well." That is why, in my original post, I advocated for "[o]versight entities, such as the audit court or the competition authority, [to] have full access" to public contract registers.

However, as I also suggested (probably not in the clearest terms), in order to enable competition law enforcement on the basis of better data, there is no need for everyone to have (unlimited) access to that data. The only agent that needs access is the competition authority. More importantly, indiscriminate disclosure is not technically necessary, particularly when public contract registries are electronic and can be designed around technical devices giving differentiated access to information to different stakeholders.

This is an important issue. In a different but comparable context, disclosure obligations in the field of securities and financial regulation have been criticised for their excessive rigidity in certain multi-audience scenarios, where investors and competitors can access the same information and, consequently, firms have conflicting incentives to disclose and not to disclose specific bits of commercially sensitive information [for a very interesting discussion, see S Gilotta, 'Disclosure in Securities Markets and the Firm's Need for Confidentiality: Theoretical Framework and Regulatory Analysis' (2012) 13(1) European Business Organization Law Review 45-88].

In that setting, selective disclosure of sensitive information has been considered the adequate tool to strike a balance of interests between the different stakeholders wanting access to the information, and this is becoming a worldwide standard with a significant volume of emerging best practices [eg Brynn Gilbertson and Daniel Wong, 'Selective disclosure by listed issuers: recent “best practice” developments', Lexology, 9 Sept 2014].

Therefore, by analogy (if nothing else), I still think that 
Generally, what is needed is more granularity in the levels of information that are made accessible to different stakeholders. The current full transparency approach, whereby all information is made available to everyone, falls very short of the desired balance between the transparency and competition goals of public procurement. A system based on enabling or targeted transparency, whereby each stakeholder gets access to the information it needs for a specific purpose, is clearly preferable.

Why are public contracts registers problematic?

This past week, I had the pleasure and honour of starting my participation in the European Commission Stakeholder Expert Group on Public Procurement (PPEP). The first batch of discussions revolved, firstly, around the use of the best price-quality ratio (BPQR) award criterion and, secondly, around the use of transparency tools such as public contract registers.

This second topic is of my particular interest, so I have tried to push the discussion a step forward in a document circulated to the PPEP Members. Given the general nature of the discussion document, I thought it could be interesting to post it here. Any comments will be most welcome and will help enrich the views presented to the European Commission in the next meeting. Thank you for reading and commenting.

Centralised Procurement Registers and their Transparency Implications—Discussion Non-Paper for the European Commission Stakeholder Expert Group on Public Procurement ~ Dr Albert Sanchez-Graells[1]

Background
In its efforts to increase the effectiveness of EU public procurement law in practice and to steer Member States towards the mutual exchange and eventual adoption of best practices,[2] the European Commission has identified the emerging trend of creating public contracts registers as an area of increasing interest.[3] Such registers go beyond the well-known electronic portals of information on public contract opportunities, such as TED[4] at EU level or Contracts Finder in the UK,[5] and aim to publish very detailed tender and contractual information, which in some cases include aspects of the competition generated prior to the award of the contract (such as names of the undertakings that submitted tenders) and the actual contractual documents signed by the parties. Such registers exist at least in Portugal,[6] Italy[7] and Slovakia.[8] The European Commission is interested in assessing the benefits and risks that such public contracts registers generate, particularly in terms of transparency of public tendering and the subsequent management of public contracts. This discussion non-paper aims to assess such benefits and risks and to sketch some proposals for risk mitigation measures.

Why are public contract registries created?
Traditional registers of contract opportunities are fundamentally based on transaction cost theory insights and aim to reduce the search costs that undertakings face in trying to identify opportunities to supply the public sector. By making the information readily available, contracting authorities expect to receive expressions of interest and/or offers from a larger number of undertakings, thus increasing competition for public contracts and reducing the information asymmetries that affect contracting authorities themselves. In the end, that sort of pre-award transparency mechanism aims at enabling the contracting authority to benefit from competition. It also creates the additional benefit of avoiding favouritism and corrupt practices in the selection of public suppliers and, in the context of the EU’s internal market, supports the anti-discrimination agenda embedded in the basic fundamental freedoms of movement of goods, services and capital through pan-European advertisement.

The justification for ‘advanced’ public contracts registers that include post-award transparency mechanisms is more complex and, in short, this type of register is created for a number of reasons that mainly include objectives at two different levels:

1. At a general level, these registers aim at
  • Reacting to perceived shortcomings in public governance, particularly in the aftermath of corruption scandals, or as part of efforts to strengthen public administration processes
  • Complementing ‘traditional’ public audit and oversight mechanisms through enhanced access to information by stakeholders and civil society organisations, as well as enabling more intense scrutiny by the press, in the hope of ‘private-led’ oversight and audit. The possibilities that digitisation and big data create in this area of public governance are a significant driver or steer to the development of these registers.[9]
2. At a specific level, these registers aim at
  • Facilitating contract management oversight and creating an additional layer of public exposure of contract-related decision-making, thus expanding the scope of procurement transparency beyond the award phase
  • Facilitating private enforcement of public procurement rules by allowing interested parties to prompt administrative and/or judicial review of specific procurement decisions,[10] both pre-award and during the execution phase
Generally, then, these additional transparency mechanisms are not intended to foster competition. Their main goal and justification is to preserve the integrity of public contract administration and to increase the robustness of anticorruption tools by facilitating social or private oversight. They significantly increase the levels of transparency already achieved through pre-award disclosure mechanisms and, in simple terms, they aim at creating full transparency of public procurement and public contract management, basically for the purposes of legitimising public expenditure by means of increased (expected) accountability as a result of such full transparency and tougher oversight.

Why are public contract registries problematic from a competition perspective?
Public contract registries are problematic precisely due to the levels of transparency they create. Economic theory has conclusively demonstrated that the levels of transparency created by public procurement rules and practices (such as these registers) facilitate collusion and anticompetitive behaviour between undertakings, thus eroding (and potentially negating) the benefits contracting authorities can obtain from organising tenders for public contracts.[11] This is an uncontroversial finding that led the OECD to stress that “[t]he formal rules governing public procurement can make communication among rivals easier, promoting collusion among bidders … procurement regulations may facilitate collusive arrangements”.[12]

The specific reasons why and conditions under which increased transparency facilitates collusion are beyond the scope of this discussion non-paper, but suffice it to stress here that transparency will be particularly pernicious when it allows undertakings that are already colluding to identify the detailed conditions under which they participated in a particular tender, or refrained from participating (by, for instance, disclosing the names of participating tenderers and the specific conditions of the winning tender).[13] Moreover, conditions of full transparency are not only problematic in relation to already existing cartels, but they are also troublesome regarding the creation of new cartels because increased transparency alters the incentives to participate in bid rigging arrangements.[14]

Furthermore, full transparency can also damage competition in industries with strong dominant undertakings. In those settings, transparency may not lead to cartelisation, but it can facilitate exclusionary strategies by the dominant undertaking, allowing it to focus exclusionary practices (such as predatory pricing) on markets or segments of the market where it detects entry by new rivals or innovative tenderers. Even in cases where collusion or price competition may not be a prime issue, full transparency can create qualitative distortions of competition, such as technical levelling[15] or reduced participation due to undertakings’ interest in protecting business secrets (as discussed below). Overall, it is beyond doubt that excessive transparency in public procurement is self-defeating because it erodes or nullifies any benefits derived from the organisation of public tenders.

All these economic insights led the OECD to adopt a formal Recommendation to prompt its members to “assess the various features of their public procurement laws and practices and their impact on the likelihood of collusion between bidders. Members should strive for public procurement tenders at all levels of government that are designed to promote more effective competition and to reduce the risk of bid rigging while ensuring overall value for money”.[16] Thus, the impact of increased procurement transparency on the likelihood of collusion and cartelisation in procurement markets, as well as its other potential negative impacts on the intensity or quality of competition, requires closer scrutiny, and the competition implications of excessive transparency cannot simply be overlooked in the name of anti-corruption goals.[17] Not least because a large number of the cartels discovered and prosecuted by competition authorities involve public procurement markets[18]—which demonstrates that the economic impact of such collusion-facilitative implications of full transparency is not trivial.

Estimating the economic impact of cartels in public procurement is a difficult task.[19] However, generally accepted estimates always show that the negative economic effect is by no means negligible and that anticompetitive overcharges can easily reach 20% of contract value.[20] Thus, particularly in view of the Europe 2020 goal to ensure ‘the most efficient use of public funds’,[21] issues of excessive transparency in public procurement markets need to be addressed so as to avoid losses of efficiency derived from the abnormal operation of market forces due to procurement rules and practices.

This does not mean that transparency needs to be completely abandoned in the public procurement setting, but a more nuanced approach that accommodates competition concerns is necessary. As has been rightly stressed, “transparency measures should at least be limited to those needed in order to enhance competition and ensure integrity, rather than being promoted as a matter of principle. Transparency should be perceived as a means to an end, rather than a goal in itself”.[22] This is in line with the OECD’s specific recommendation that “[w]hen publishing the results of a tender, [contracting authorities] carefully consider which information is published and avoid disclosing competitively sensitive information as this can facilitate the formation of bid-rigging schemes, going forward”.[23] The final section of this non-paper presents some normative recommendations to that purpose, which highlight much needed restrictions to the promotion of full transparency as a matter of principle.

Are there other reasons why procurement registries can be problematic?
As briefly mentioned above, another source of possible negative impacts derived from public contract registries is their potential chilling effect on undertakings keen to protect their business secrets. It is often stressed that tenders contain sensitive information and that its disclosure can damage the commercial interests of bidders if those secrets are at risk of being revealed through the public contracts registries or otherwise.[24] Thus, undertakings can either decide not to participate in particularly sensitive tenders, or submit offers and documentation in such a way as to keep their secrets concealed, hence diminishing their quality or increasing the information cost/asymmetry that the contracting authority needs to overcome in its assessment. Either way, these business secret-protective strategies reduce the intensity and quality of competition. Moreover, the transparency of certain elements of human resources-related information (particularly in view of the increasing importance of work teams in the area of services procurement) can not only trigger data protection concerns,[25] but also facilitate unfair business practices such as the poaching of key employees.

However, despite the clear existence of business secret and commercial interest justifications for the preservation of certain levels of secrecy, there is a tendency to minimise the relevance of these issues by creating a private interest-public interest dichotomy and stressing the relevance of public (anti-corruption) goals. This is problematic. What is often overlooked is that contracting authorities have themselves a commercial interest in keeping business secrets protected. That interest derives immediately from their need to minimise the abovementioned chilling effect (ie not crowding out or scaring away undertakings wary of excessive disclosure), so that competition remains as strong as possible. And such interest in avoiding excessive disclosure also derives, in the mid to long-term, from the need not to thwart innovation by means of technical levelling or de facto standard setting.

These issues were recently well put in the context of UK litigation concerning a freedom of information request that the contracting authority rejected on the basis of relevant business secret and commercial interest protection. As clarified by the First Tier Tribunal,
There is a public interest in maintaining an efficient competitive market for leisure management systems. If the commercial secrets of one market entity were revealed, its competitive position would be eroded and the whole market would be less competitive. As the Court of Appeal put it in Veolia ES Nottinghamshire Ltd v Nottinghamshire County Council and others [2012] P.T.S.R. 185 at [111], a company’s confidential information is often “the life blood of an enterprise”. The [Information Commissioner’s Office] argued that this is particularly so in an industry such as the provision of leisure management systems because such systems are a complex amalgam of technologies, customer support networks, and user interfaces, which involve elements individual to particular companies. Those individual elements drive competition to the benefit of public authorities and consumers.[26]
Thus, the protection of business secrets and commercial interests should not be seen as a limitation of the public (anti-corruption) interest in the benefit of private interests, but as a balancing exercise between two competing public interest goals: efficiency and integrity of procurement. Once this realignment of goals is understood, restrictions of public procurement transparency based on competition considerations should receive support also from a public governance perspective.

A final consideration in terms of potential negative impacts of public contract registries derives from the way they are financed. At least in the case of Italy, economic operators are required to pay fees towards the funding of the relevant public contract registry when they first participate in any given tender. This becomes a financial burden linked to procurement participation that can have clear chilling effects, particularly for SMEs with limited financial resources. It is widely accepted that financial barriers to participation should be suppressed as a matter of best practice[27]—and, on certain occasions, as a matter of compliance with internal market regulation as well. Thus, the creation of any sort of public contract registry whose funding requires upfront payments from interested undertakings should not be favoured.

How could competition and confidentiality concerns be embedded in the design of public contract registries, so that their risks are minimised?

The discussion above supports a nuanced approach to the level of transparency actually created by public contract registries, which needs to fall short of the full transparency paradigm under which they have been conceived and have started to be implemented. As a functional criterion, only the information that is necessary to ensure proper oversight and the effectiveness of anti-corruption measures should be disclosed, whereas the information that can be most damaging to competition should be withheld.

Generally, what is needed is more granularity in the levels of information that are made accessible to different stakeholders. The current full transparency approach, whereby all information is made available to everyone, falls very short of the desired balance between the transparency and competition goals of public procurement. A system based on enabling or targeted transparency, whereby each stakeholder gets access to the information it needs for a specific purpose, is clearly preferable.

In more specific terms, the following normative recommendations are subjected to further discussion. They are by no means exhaustive and simply aim to specify the sort of nuanced approach to disclosure of public procurement information that is hereby advocated.

  • Public contract registers should not be fully available to the public. Access to the full registry should be restricted to public sector officials under a strong duty of confidentiality protected by appropriate sanctions in cases of illegitimate disclosure.
  • Even within the public sector, access to the full register should be made available on a need to know basis. Oversight entities, such as the audit court or the competition authority, should have full access. However, other entities or specific civil servants should only access the information they require to carry out their functions.
  • Limited versions of the public contract registry that are made accessible to the public should aggregate information by contracting authority and avoid disclosing any particulars that could be traced back to specific tenders or specific undertakings.
  • Representative institutions, such as third sector organisations, or academics should have the opportunity of seeking access to the full registry on a case by case basis where they can justify a legitimate or research-related interest. In case of access, ethical approval shall be obtained, anonymisation of data attempted, and specific confidentiality requirements duly imposed.
  • Delayed access to the full public registry could also be allowed for, provided there are sufficient safeguards to ensure that historic information does not remain relevant for the purposes of protecting market competition, business secrets and commercial interests.
  • Tenderers should have access to their own records, even if they are not publicly-available, so as to enable them to check their accuracy. This is particularly relevant if public contract registries are used for the purposes of assessing past performance under the new rules.
  • Big data should be published on an anonymised basis, so that general trends can be analysed without enabling ‘reverse engineering’ of information that can be traced to specific bidders.
  • The entity in charge of the public contracts registry should regularly publish aggregated statistics by type of procurement procedure, object of contract, or any other categories deemed relevant for the public accountability of public buyers (such as the percentage of expenditure on green procurement).
  • The entity in charge of the public contracts registry should develop a system of red flag indicators and monitor them with a view to reporting instances of potential collusion to the relevant competition authority (a minimal sketch of one such indicator follows this list).
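
Purely as an illustration of the last recommendation, the sketch below computes two screening measures commonly discussed in the bid rigging literature: the coefficient of variation of bids within a tender, and the relative distance between the lowest and second-lowest bid. The thresholds and data layout are hypothetical and would need to be calibrated with the relevant competition authority; a real system would combine many more indicators, across tenders and over time.

```python
# Illustrative collusion screening sketch; thresholds and data layout are hypothetical.
from statistics import mean, pstdev

def red_flags(bids, cv_threshold=0.05, rd_threshold=2.0):
    """Return a list of red flags for a single tender, given a mapping bidder -> bid price."""
    prices = sorted(bids.values())
    flags = []
    if len(prices) < 3:
        # Thin participation is itself worth noting, but the screens below need more bids.
        flags.append("fewer than three bids received")
        return flags
    cv = pstdev(prices) / mean(prices)  # low dispersion across bids may indicate cover bidding
    if cv < cv_threshold:
        flags.append(f"unusually low bid dispersion (CV={cv:.3f})")
    lowest, second = prices[0], prices[1]
    relative_distance = (second - lowest) / lowest
    if cv > 0 and relative_distance > rd_threshold * cv:
        # A winning bid that sits far below an otherwise tight cluster of losing bids.
        flags.append(f"isolated winning bid (relative distance={relative_distance:.3f})")
    return flags

# Example with hypothetical bid data for one tender.
print(red_flags({"Firm A": 950_000, "Firm B": 990_000, "Firm C": 995_000, "Firm D": 1_000_000}))
```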


[1] Senior Lecturer in Law, University of Bristol Law School and Member of the European Commission Stakeholder Expert Group on Public Procurement (E02807) (2015-2018). This paper has been prepared for discussion within the Expert Group, following an initial exchange of ideas in the meeting held in Brussels on 14 September 2015. The views presented in this paper are my own and in no way bind any of the abovementioned institutions. Comments and suggestions welcome: a.sanchez-graells@bristol.ac.uk.
[2] For discussion of this regulatory and governance approach in the area of public procurement, see C Harlow and R Rawlings, Process and Procedure in EU Administration (Oxford, Hart, 2014) 142-169.
[3] Point 2 "contract registers to enhance full transparency of data related to public procurement", included in the agenda for the Stakeholder Expert Group on Public Procurement of 14 September 2015, available at http://ec.europa.eu/internal_market/publicprocurement/docs/expert-group/150914-agenda_en.pdf.
[4] Tenders Electronic Daily (TED) http://ted.europa.eu/TED/main/HomePage.do.
[6] Base: Contratos Publicos Online, http://www.base.gov.pt/Base/pt/Homepage.
[7] Banca Dati Nazionale dei Contratti pubblici, http://portaletrasparenza.avcp.it/microstrategy/html/index.htm.
[8] A case study based on the Slovakian Online Central Register of Contracts is available at https://joinup.ec.europa.eu/community/epractice/case/slovakian-online-central-register-contracts.
[9] See eg the efforts of the Sunlight Foundation by means of its Procurement Open Data Guidelines http://sunlightfoundation.com/procurement/opendataguidelines. See also the Open Contracting Data Standard project http://standard.open-contracting.org/.
[10] For discussion, see A Sanchez-Graells, “The Difficult Balance between Transparency and Competition in Public Procurement: Some Recent Trends in the Case Law of the European Courts and a Look at the New Directives” (November 2013), http://ssrn.com/abstract=2353005.
[11] A Sanchez-Graells, Public Procurement and the EU Competition Rules, 2nd edn (Oxford, Hart, 2015) 73-75.
[12] OECD, Public Procurement: Role of Competition Authorities (2007) 7, available at http://www.oecd.org/competition/cartels/39891049.pdf. For discussion, see A Sanchez-Graells, “Prevention and Deterrence of Bid Rigging: A Look from the New EU Directive on Public Procurement”, in G Racca & C Yukins (eds), Integrity and Efficiency in Sustainable Public Contracts (Brussels, Bruylant, 2014) 171-198, available at http://ssrn.com/abstract=2053414.
[13] For discussion, see A Heimler, “Cartels in Public Procurement” (2012) 8(4) Journal of Competition Law & Economics 849-862 and SE Weishaar, Cartels, Competition and Public Procurement. Law and Economics Approaches to Bid Rigging (Cheltenham, Edward Elgar, 2013) 28-36.
[14] P Gugler, “Transparency and Competition Policy in an Imperfectly Competitive World”, in J Forssbaeck & L Oxelheim (eds), Oxford Handbook of Economic and Institutional Transparency (Oxford, OUP, 2014) 144, 150.
[15] Sanchez-Graells, Public Procurement and the EU Competition Rules (n 11) 76.
[16] OECD, Recommendation on Fighting Bid Rigging in Public Procurement (2012), available at http://www.oecd.org/daf/competition/RecommendationOnFightingBidRigging2012.pdf. For discussion, see A Sanchez-Graells, “Public Procurement and Competition: Some Challenges Arising from Recent Developments in EU Public Procurement Law”, in C Bovis (ed), Research Handbook on European Public Procurement (Cheltenham, Elgar, 2016). Available at http://ssrn.com/abstract=2206502.
[17] For discussion, see RD Anderson, WE Kovacic and AC Muller, ‘Ensuring integrity and competition in public procurement markets: a dual challenge for good governance’ in S Arrowsmith & RD Anderson (eds), The WTO Regime on Government Procurement: Challenge and Reform (CUP, 2011) 681-718.
[18] This is true in all jurisdictions. See KL Haberbush, “Limiting the Government’s Exposure to Bid Rigging Schemes: A Critical Look at the Sealed Bidding Regime” (2000–2001) 30 Public Contract Law Journal 97, 98; and RD Anderson & WE Kovacic, ‘Competition Policy and International Trade Liberalisation: Essential Complements to Ensure Good Performance in Public Procurement Markets’ (2009) 18 Public Procurement Law Review 67. See also A Sanchez-Graells, “Public Procurement: A 2014 Updated Overview of EU and National Case Law” (2014). e-Competitions: National Competition Laws Bulletin, No. 40647. Available at http://ssrn.com/abstract=1968371.
[19] See the debate around the proposal to create a rebuttable presumption of overcharge at 20% in the Directive on actions for breach of the EU antitrust rules; Commission Staff Working Document SWD(2013) 203 final para 88, http://ec.europa.eu/competition/antitrust/actionsdamages/impact_assessment_en.pdf. However, given the controversy on specific figures, the final version of Art 17 of Directive 2014/104 includes an unquantified presumption. Directive 2014/104/EU of the European Parliament and of the Council of 26 November 2014 on certain rules governing actions for damages under national law for infringements of the competition law provisions of the Member States and of the European Union [2014] OJ L 349/1.
[20] For a very modest estimation of cartel overcharges in the environment of 17%, see M Boyer & R Kotchoni, “How Much Do Cartels Overcharge?” (2014) Toulouse School of Economics Working Paper TSE‐462, available at http://www.tse-fr.eu/sites/default/files/medias/doc/wp/etrie/wp_tse_462_v2.pdf.
[21] Communication from the Commission of 3 March 2010, Europe 2020 A strategy for smart, sustainable and inclusive growth, COM (2010) 2020 final para 4.3, p. 24, available at http://ec.europa.eu/eu2020/pdf/COMPLET%20EN%20BARROSO%20%20%20007%20-%20Europe%202020%20-%20EN%20version.pdf. For discussion, see A Sanchez-Graells, “Truly competitive public procurement as a Europe 2020 lever: what role for the principle of competition in moderating horizontal policies?” (2016) 22(2) European Public Law Journal, available at http://ssrn.com/abstract=2638466.
[22] RD Anderson and AC Muller, “Promoting Competition and Deterring Corruption in Public Procurement markets: Synergies with Trade Liberalization”, draft paper to be published in the "E15 Expert Group on Competition Policy" (a joint initiative/facility of the World Economic Forum and the International Centre for Trade and Sustainable Development) 13 (on file with author).
[23] OECD, Guidelines for Fighting Bid Rigging in Public Procurement (2009) 7, available at http://www.oecd.org/competition/cartels/42851044.pdf.
[24] For discussion, see C Ginter, N Parrest & M-A Simovart, “Requirement to Protect Business Secrets and Disclose Procurement Contracts under Procurement Law” (2013) IX Juridica 658-665.
[25] These are beyond the scope of this discussion non-paper.
[26] Sally Ballan v Information Commissioner EA/2015/0021 (28 July 2015) para [25(c)], available at http://www.informationtribunal.gov.uk/DBFiles/Decision/i1609/Ballan,%20Sally%20EA.2015.0021%20%2828.07.15%29.pdf.
[27] Sanchez-Graells, Public Procurement and the EU Competition Rules (n 11) 280-281.

More sticking plasters from the CGPJ: when will we get a serious judicial system in which incompatibility rules are actually enforced?

In recent days, yet another of the by now countless scandals concerning the malfunctioning of our judicial system has come to light, revealing the (apparently widespread or, at the very least, not exceptional) breach of the ethical (and criminal) rules governing the conduct of judges and magistrates. I am not referring to the scandal over the expenses of the President of the Supreme Court--which is in itself particularly worrying, above all because he has not yet resigned despite implying that he did indeed pay for excessive and personal expenses with public funds (even if, in his view, the amount is a "pittance": http://tinyurl.com/miseriadedivar).

Rather, I want to focus on a scandal that seems to be growing like a cancer (although it is still at an early stage and can therefore be tackled): the organisation of, or participation in, training courses for lawyers by judges--some of whom even allow themselves the luxury of later rewarding those lawyers, for instance by appointing them as insolvency administrators (see a perhaps somewhat extreme case at http://tinyurl.com/magistradoincompatible, in which "this judge organised courses on insolvency law without seeking permission to do so and went as far as appointing some of his students as insolvency administrators in cases pending before his court").

This is a relatively widespread practice--judges taking part in courses for which they receive substantial remuneration (above that received by academics and other professionals involved in postgraduate legal training) and in which they interact with lawyers with whom they may have cases in common at the time or shortly afterwards--which, unfortunately, is not receiving an effective response from the Consejo General del Poder Judicial, not even in cases of clear abuse.

In the specific case of the judge who appointed students of the course he had organised as insolvency administrators, the CGPJ has merely transferred him to a new post more than 100 kilometres from his current one, "assign[ing] this judge the existing vacancy at the Juzgado de Primera Instancia e Instrucción número 2 de Talavera de la Reina, with no possibility of obtaining another posting through open competition within one year from the date on which he takes up this position".

Honestly, this strikes me as an utterly derisory sanction given the seriousness of the conflict of interest in which this judge engaged, having "attended several sessions of a [training] event during his required hours of presence at the court, without requesting or obtaining leave to do so". As for the additional remuneration, and the possible implications (potentially amounting to prevaricación) for the appointment of insolvency administrators or for the cases the judge was hearing, not a word from the CGPJ.

Once again, either we push for a genuinely clean judicial system (excising whatever is necessary so that the cancer does not eat it alive, or eat what is left of it--in order to save the majority of the apples, which are not rotten but are surrounded by rot), or we will continue to be the tinpot country we deserve to be taken for.