Some further thoughts on setting procurement up to fail in 'AI regulation by contract'

The next bit of my research project concerns the leveraging of procurement to achieve ‘AI regulation by contract’ (ie to ensure trustworthiness, safety, explainability, human rights compliance, legality (especially in data protection terms), ethical use, etc, in the public sector’s use of AI), so I have been thinking about it for the last few weeks to build on my previous views (see here).

In this post, I summarise my further thoughts — which have been prompted by the rich submissions to the House of Commons Science and Technology Committee’s [ongoing] inquiry on the ‘Governance of Artificial Intelligence’.

Let’s do it via procurement

As a starting point, it is worth stressing that the (perhaps unsurprising) increasingly generalised position is that procurement has a key role to play in regulating the adoption of digital technologies (and AI in particular) by the public sector—which consolidates procurement’s gatekeeping role in this regulatory space (see here).

More precisely, the generalised view is not that procurement ought to play such a role, but that it can do so (effectively and meaningfully). ‘AI regulation by contract’ via procurement is seen as an (easily?) actionable policy and governance mechanism despite the more generalised reluctance and difficulties in regulating AI through general legislative and policy measures, and in creating adequate governance architectures (more below).

This is very clear in several submissions to the ongoing Parliamentary inquiry (above). Without seeking to be exhaustive (I have read most, but not all submissions yet), the following points have been made in written submissions (liberally grouped by topics):

Procurement as (soft) AI regulation by contract & ‘Market leadership’

  • ‘Procurement processes can act as a form of soft regulation … Government should use its purchasing power in the market to set procurement requirements that ensure private companies developing AI for the public sector address public standards’ (Committee on Standards in Public Life, at [25]-[26], emphasis added).

  • ‘For public sector AI projects, two specific strategies could be adopted [to regulate AI use]. The first … is the use of strategic procurement. This approach utilises government funding to drive change in how AI is built and implemented, which can lead to positive spill-over effects in the industry’ (Oxford Internet Institute, at 5, emphasis added).

  • ‘Responsible AI Licences (“RAILs”) utilise the well-established mechanisms of software and technology licensing to promote self-governance within the AI sector. RAILs allow developers, researchers, and companies to publish AI innovations while specifying restrictions on the use of source code, data, and models. These restrictions can refer to high-level restrictions (e.g., prohibiting uses that would discriminate against any individual) as well as application-specific restrictions (e.g., prohibiting the use of a facial recognition system without consent) … The adoption of such licenses for AI systems funded by public procurement and publicly-funded AI research will help support a pro-innovation culture that acknowledges the unique governance challenges posed by emerging AI technologies’ (Trustworthy Autonomous Systems Hub, at 4, emphasis added).

Procurement and AI explainability

  • ‘public bodies will need to consider explainability in the early stages of AI design and development, and during the procurement process, where requirements for transparency could be stipulated in tenders and contracts’ (Committee on Standards in Public Life, at [17], emphasis added).

  • ‘In the absence of strong regulations, the public sector may use strategic procurement to promote equitable and transparent AI … mandating various criteria in procurement announcements and specifying design criteria, including explainability and interpretability requirements. In addition, clear documentation on the function of a proposed AI system, the data used and an explanation of how it works can help. Beyond this, an approved vendor list for AI procurement in the public sector is useful, to which vendors that agree to meet the defined transparency and explainability requirements may be added’ (Oxford Internet Institute, at 2, referring to K McBride et al (2021) ‘Towards a Systematic Understanding on the Challenges of Procuring Artificial Intelligence in the Public Sector’, emphasis added).

Procurement and AI ethics

  • ‘For example, procurement processes should be designed so products and services that facilitate high standards are preferred and companies that prioritise ethical practices are rewarded. As part of the commissioning process, the government should set out the ethical principles expected of companies providing AI services to the public sector. Adherence to ethical standards should be given an appropriate weighting as part of the evaluation process, and companies that show a commitment to them should be scored more highly than those that do not’ (Committee on Standards in Public Life, at [26], emphasis added).

Procurement and algorithmic transparency

  • ‘… unlike public bodies, the private sector is not bound by the same safeguards – such as the Public Sector Equality Duty within the Equality Act 2010 (EA) – and is able to shield itself from criticisms regarding transparency behind the veil of ‘commercial sensitivity’. In addition to considering the private company’s purpose, AI governance itself must cover the private as well as public sphere, and be regulated to the same, if not a higher standard. This could include strict procurement rules – for example that private companies need to release certain information to the end user/public, and independent auditing of AI systems’ (Liberty, at [20]).

  • ‘… it is important that public sector agencies are duly empowered to inspect the technologies they’re procuring and are not prevented from doing so by the intellectual property rights. Public sector buyers should use their purchasing power to demand access to suppliers’ systems to test and prove their claims about, for example, accuracy and bias’ (BILETA, at 6).

Procurement and technical standards

  • ‘Standards hold an important role in any potential regulatory regime for AI. Standards have the potential to improve transparency and explainability of AI systems to detail data provenance and improve procurement requirements’ (Ada Lovelace Institute, at 10).

  • ‘The speed at which the technology can develop poses a challenge as it is often faster than the development of both regulation and standards. Few mature standards for autonomous systems exist and adoption of emerging standards need to be encouraged through mechanisms such as regulation and procurement, for example by including the requirement to meet certain standards in procurement specification’ (Royal Academy of Engineering, at 8).

Can procurement do it, though?

Implicit in most views about the possibility of using procurement to regulate public sector AI adoption (and to generate broader spillover effects through market-based propagation mechanisms) is an assumption that the public buyer does (or can get to) know and can (fully, or sufficiently) specify the required standards of explainability, transparency, ethical governance, and a myriad other technical requirements (on auditability, documentation, etc) for the use of AI to be in the public interest and fully legally compliant. Or, relatedly, that such standards can (and will) be developed and made readily available for the public buyer to refer to and incorporate into its public contracts.

This is a BIG implicit assumption, at least in relation to non-trivial/open-ended proceduralised requirements and to most of the complex issues raised by (advanced) forms of AI deployment. A sobering and persuasive analysis has shown that, at least for some forms of AI (based on neural networks), ‘it appears unlikely that anyone will be able to develop standards to guide development and testing that give us sufficient confidence in the applications’ respect for health and fundamental rights. We can throw risk management systems, monitoring guidelines, and documentation requirements around all we like, but it will not change that simple fact. It may even risk giving us a false sense of confidence’ [H Pouget, ‘The EU’s AI Act Is Barreling Toward AI Standards That Do Not Exist’ (Lawfare.com, 12 Jan 2023)].

Even for less complex AI deployments, the development of standards will be contested and protracted. This not only creates a transient regulatory gap that forces public buyers to ‘figure it out’ by themselves in the meantime, but can well result in a permanent regulatory gap that leaves procurement as the only safeguard (on paper) in the process of AI adoption in the public sector. If more general and specialised processes of standard setting are unlikely to plug that gap quickly, or ever, how can public buyers be expected to fare any better?

Seriously, can procurement do it?

Further, as I wrote in my own submission to the Parliamentary inquiry, ‘to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards’ (at [4]).

Even if we optimistically ignore the issues above and presume that standards will emerge or that the public buyer will (eventually) be able to figure it out (so we park requirement (i) for now), and even if we also assume that the public sector will be able to develop the required level of eg digital capability (so we also park requirement (iii), but see here), this does not overcome other obstacles to leveraging procurement for ‘AI regulation by contract’. In particular, it does not address the issue of whether there can be effective enforcement mechanisms within the contractual relationship resulting from a procurement process to impose compliance with the required standards (of explainability, transparency, ethical use, non-discrimination, etc).

I approach this issue as the challenge of enforcing not entirely measurable contractual obligations (ie obligations to comply with a contractual standard rather than a contractual rule), and the closest parallel that comes to my mind is the issue of enforcing quality requirements in public contracts, especially in the provision of outsourced or contracted-out public services. This is an issue on which there is a rich literature (on ‘regulation by contract’ or ‘government by contract’).

Quality-related enforcement problems stem from the difficulty of using contract law remedies to address quality shortcomings: other than perhaps price reductions or contractual penalties (where permissible), such remedies can do little to address the quality issues themselves. Major quality shortcomings could lead to eg contractual termination, but replacing contractors can be costly and difficult (especially in a technological setting affected by several sources of potential vendor and technology lock-in). Other mechanisms, such as leveraging past performance evaluations to eg bar access to future procurements, can also do too little too late to control quality within a specific contract.

An illuminating analysis of the ‘problem of quality’ concluded that the ‘structural problem here is that reliable assurance of quality in performance depends ultimately not on contract terms but on trust and non-legal relations. Relations of trust and powerful non-legal sanctions depend upon the establishment of long-term … relations … The need for a governance structure and detailed monitoring in order to achieve co-operation and quality seems to lead towards the creation of conflictual relations between government and external contractors’ [see H Collins, Regulating Contracts (OUP 1999) 314-15].

To me, this raises important questions about the extent to which procurement, and public contracts more generally, can effectively deliver the expected safeguards and operate as an adequate system of ‘AI regulation by contract’. It seems to me that price clawbacks or financial penalties, even debarment decisions, are unlikely to provide an acceptable safety net in some (or most) cases — eg high-risk uses of complex AI. Not least because procurement disputes can take a long time to settle, and because the incentives will not always be there to ensure strict enforcement anyway.

More thoughts to come

It seems increasingly clear to me that the expectations around leveraging procurement to ‘regulate AI by contract’ need reassessing in view of its likely effectiveness. Such effectiveness is constrained by the rules on the design of tenders for the award of public contracts, by the content of those contracts, and by the mechanisms to resolve disputes arising from either tenders or contracts. It is, of course, also constrained by public sector (digital) capability and by the broader difficulties in ascertaining the appropriate approach to (standards-based) AI regulation, which cannot so easily be set aside. I will keep thinking about all this in the process of writing my monograph. If this is of interest, keep an eye on this blog for further thoughts and analysis.

AI regulation by contract: submission to UK Parliament

In October 2022, the Science and Technology Committee of the House of Commons of the UK Parliament (STC Committee) launched an inquiry on the ‘Governance of Artificial Intelligence’. This inquiry follows the publication in July 2022 of the policy paper ‘Establishing a pro-innovation approach to regulating AI’, which outlined the UK Government’s plans for light-touch AI regulation. The inquiry seeks to examine the effectiveness of current AI governance in the UK, and the Government’s proposals that are expected to follow the policy paper and provide more detail. The STC Committee has published 98 pieces of written evidence, including submissions from UK regulators and academics that will make for interesting reading. Below is my submission, focusing on the UK’s approach to ‘AI regulation by contract’.

A. Introduction

01. This submission addresses two of the questions formulated by the House of Commons Science and Technology Committee in its inquiry on the ‘Governance of artificial intelligence (AI)’. In particular:

  • How should the use of AI be regulated, and which body or bodies should provide regulatory oversight?

  • To what extent is the legal framework for the use of AI, especially in making decisions, fit for purpose?

    • Is more legislation or better guidance required?

02. This submission focuses on the process of AI adoption in the public sector and, particularly, on the acquisition of AI solutions. It evidences how the UK is consolidating an inadequate approach to ‘AI regulation by contract’ through public procurement. Given the level of abstraction and generality of the current guidelines for AI procurement, major gaps in public sector digital capabilities, and potential structural conflicts of interest, procurement is currently an inadequate tool to govern the process of AI adoption in the public sector. Flanking initiatives, such as the pilot algorithmic transparency standard, are unable to address and mitigate governance risks. Contrary to the approach in the AI Regulation Policy Paper,[1] plugging the regulatory gap will require (i) new legislation supported by a new mechanism of external oversight and enforcement (an ‘AI in the Public Sector Authority’ (AIPSA)); (ii) a well-funded strategy to boost in-house public sector digital capabilities; and (iii) the introduction of a (temporary) mechanism of authorisation of AI deployment in the public sector. The Procurement Bill would not suffice to address the governance shortcomings identified in this submission.

B. ‘AI Regulation by Contract’ through Procurement

03. Unless the public sector develops AI solutions in-house, which is extremely rare, the adoption of AI technologies in the public sector requires a procurement procedure leading to their acquisition. This places procurement at the frontline of AI governance because the ‘rules governing the acquisition of algorithmic systems by governments and public agencies are an important point of intervention in ensuring their accountable use’.[2] In that vein, the Committee on Standards in Public Life stressed that the ‘Government should use its purchasing power in the market to set procurement requirements that ensure that private companies developing AI solutions for the public sector appropriately address public standards. This should be achieved by ensuring provisions for ethical standards are considered early in the procurement process and explicitly written into tenders and contractual arrangements’.[3] Procurement is thus erected as a public interest gatekeeper in the process of adoption of AI by the public sector.

04. However, to effectively regulate by contract, it is at least necessary to have (i) clarity on the content of the obligations to be imposed, (ii) effective enforcement mechanisms, and (iii) public sector capacity to establish, monitor, and enforce those obligations. Given that the aim of regulation by contract would be to ensure that the public sector only adopts trustworthy AI solutions and deploys them in a way that promotes the public interest in compliance with existing standards of protection of fundamental and individual rights, exercising the expected gatekeeping role in this context requires a level of legal, ethical, and digital capability well beyond the requirements of earlier instances of regulation by contract to eg enforce labour standards.

05. On a superficial reading, it could seem that the National AI Strategy tackled this by highlighting the importance of the public sector’s role as a buyer and stressing that the Government had already taken steps ‘to inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens’.[4] The National AI Strategy referred, in particular, to the setting up of the Crown Commercial Service’s AI procurement framework (the ‘CCS AI Framework’),[5] and the adoption of the Guidelines for AI procurement (the ‘Guidelines’)[6] as enabling tools. However, a close look at these instruments will show their inadequacy to provide clarity on the content of procedural and contractual obligations aimed at ensuring the goals stated above (para 03), as well as their potential to widen the existing public sector digital capability gap. Ultimately, they do not enable procurement to carry out the expected gatekeeping role.

C. Guidelines and Framework for AI procurement

06. Despite setting out to ‘provide a set of guiding principles on how to buy AI technology, as well as insights on tackling challenges that may arise during procurement’, the Guidelines provide high-level recommendations that cannot be directly operationalised by inexperienced public buyers and/or those with limited digital capabilities. For example, the recommendation to ‘Try to address flaws and potential bias within your data before you go to market and/or have a plan for dealing with data issues if you cannot rectify them yourself’ (guideline 3) not only requires a thorough understanding of eg the Data Ethics Framework[7] and the Guide to using Artificial Intelligence in the public sector,[8] but also detailed insights on data hazards.[9] This leads the Guidelines to stress that it may be necessary ‘to seek out specific expertise to support this; data architects and data scientists should lead this process … to understand the complexities, completeness and limitations of the data … available’.

07. Relatedly, some of the recommendations are very open ended in areas without clear standards. For example, the effectiveness of the recommendation to ‘Conduct initial AI impact assessments at the start of the procurement process, and ensure that your interim findings inform the procurement. Be sure to revisit the assessments at key decision points’ (guideline 4) is dependent on the robustness of such impact assessments. However, the Guidelines provide no further detail on how to carry out such assessments, other than a list of some generic areas for consideration (eg ‘potential unintended consequences’) and a passing reference to emerging guidelines in other jurisdictions. This is problematic, as the development of algorithmic impact assessments is still at an experimental stage,[10] and emerging evidence shows vastly diverging approaches, eg to risk identification.[11] In the absence of clear standards, algorithmic impact assessments will lead to inconsistent approaches and varying levels of robustness. The absence of standards will also require access to specialist expertise to design and carry out the assessments.

08. Ultimately, understanding and operationalising the Guidelines requires advanced digital competency, including in areas where best practices and industry standards are still developing.[12] However, most procurement organisations lack such expertise, as a reflection of broader digital skills shortages across the public sector,[13] with recent reports placing vacancies for data and tech roles in the civil service alone close to 4,000.[14] This not only reduces the practical value of the Guidelines in facilitating responsible AI procurement by inexperienced buyers with limited capabilities, but also highlights the role of the CCS AI Framework for AI adoption in the public sector.

09. The CCS AI Framework creates a procurement vehicle[15] to facilitate public buyers’ access to digital capabilities. CCS’ description for public buyers stresses that ‘If you are new to AI you will be able to procure services through a discovery phase, to get an understanding of AI and how it can benefit your organisation.’[16] The Framework thus seeks to enable contracting authorities, especially those lacking in-house expertise, to carry out AI procurement with the support of external providers. While this can foster the uptake of AI in the public sector in the short term, it is highly unlikely to result in adequate governance of AI procurement, as this approach focuses at most on the initial stages of AI adoption but can hardly be sustainable throughout the lifecycle of AI use in the public sector—and, crucially, it would leave the enforcement of contractualised AI governance obligations in a particularly weak position (thus failing to meet the enforcement requirement at para 04). Moreover, it would generate a series of governance shortcomings, the avoidance of which requires an alternative approach.

D. Governance Shortcomings

10. Despite claims to the contrary in the National AI Strategy (above para 05), the approach currently followed by the Government does not empower public buyers to procure AI responsibly. The Guidelines cannot be operationalised by inexperienced public buyers with limited digital capabilities (above paras 06-08). At the same time, the Guidelines are too generic to support sophisticated approaches by more advanced digital buyers. The Guidelines do not reduce the uncertainty and complexity of procuring AI and do not include any guidance on eg how to design public contracts to perform the regulatory functions expected under the ‘AI regulation by contract’ approach.[17] This is despite existing recommendations on eg the development of ‘model contracts and framework agreements for public sector procurement to incorporate a set of minimum standards around ethical use of AI, with particular focus on expected levels [of] transparency and explainability, and ongoing testing for fairness’.[18] The Guidelines thus fail to address the first requirement for effective regulation by contract, concerning the clarification of the relevant obligations (para 04).

11. The CCS Framework would also fail to ensure the development of public sector capacity to establish, monitor, and enforce AI governance obligations (para 04). Perhaps counterintuitively, the CCS AI Framework can generate a further disempowerment of public buyers seeking to rely on external capabilities to support AI adoption. There is evidence that reliance on outside providers and consultants to cover immediate needs further erodes public sector capability in the long term,[19] as well as creating risks of technical and intellectual debt in the deployment of AI solutions as consultants come and go and there is no capture of institutional knowledge and memory.[20] This can also exacerbate current trends of pilot AI graveyard spirals, where most projects do not reach full deployment, at least in part due to insufficient digital capabilities beyond the (outsourced) pilot phase. This tends to result in self-reinforcing institutional weaknesses that can limit the public sector’s ability to drive digitalisation, not least because technical debt quickly becomes a significant barrier.[21] It also runs counter to best practices towards building public sector digital maturity,[22] and to the growing consensus that public sector digitalisation first and foremost requires a prioritised investment in building up in-house capabilities.[23] On this point, it is important to note the large size of the CCS AI Framework, which was initially pre-advertised with a £90 mn value,[24] but this was then revised to £200 mn over 42 months.[25] Procuring AI consultancy services under the Framework can thus facilitate the funnelling of significant amounts of public funds to the private sector, rather than using those funds to build in-house capabilities. It can result in multiple public buyers entering contracts for the same expertise, which thus duplicates costs, as well as in a cumulative lack of institutional learning by the public sector because of atomised and uncoordinated contractual relationships.

12. Beyond the issue of institutional dependency on external capabilities, the cumulative effect of the Guidelines and the Framework would be to outsource the role of ‘AI regulation by contract’ to unaccountable private providers, which can then introduce their own biases into the substantive and procedural obligations to be embedded in the relevant contracts—which would ultimately negate the effectiveness of the regulatory approach as a public interest safeguard. The lack of accountability of external providers would not only result from the weakness (or absolute inability) of the public buyer to control their activities and challenge important decisions—eg on data governance, or algorithmic impact assessments, as above (paras 06-07)—but also from the potential absence of effective and timely external checks. Market mechanisms are unlikely to deliver adequate checks due to market concentration and structural conflicts of interest affecting providers that sometimes provide consultancy services and at other times are involved in the development and deployment of AI solutions,[26] as well as from insufficiently effective safeguards against conflicts of interest resulting from quickly revolving doors. Equally, broader governance controls are unlikely to be facilitated by flanking initiatives, such as the pilot algorithmic transparency standard.

13. To try to foster accountability in the adoption of AI by the public sector, the UK is currently piloting an algorithmic transparency standard.[27] While the initial six examples of algorithmic disclosures published by the Government provide some details on emerging AI use cases and the data and types of algorithms used by publishing organisations, and while this information could in principle foster accountability, there are two primary shortcomings. First, completing the documentation requires resources and, in some respects, advanced digital capabilities. Organisations participating in the pilot are being supported by the Government, which makes it difficult to assess to what extent public buyers would generally be able to adequately prepare the documentation on their own. Moreover, the documentation also refers to some underlying requirements, such as algorithmic impact assessments, that are not yet standardised (para 07). In that, the pilot standard replicates the same shortcomings discussed above in relation to the Guidelines. Algorithmic disclosure will thus only be done by entities with high capabilities, or it will be outsourced to consultants (thus reducing the scope for the revelation of governance-relevant information).

14. Second, compliance with the standard is not mandatory—at least while the pilot is being developed. If compliance with the algorithmic transparency standard remains voluntary, there are clear governance risks. It is easy to see how precisely the most problematic uses may not be the object of adequate disclosures under a voluntary self-reporting mechanism. More generally, even if the standard were made mandatory, it would be necessary to implement an external quality control mechanism to mitigate problems with the quality of self-reported disclosures, which are pervasive in other areas of information-based governance.[28] Whether the Central Digital and Data Office (currently in charge of the pilot) would have the capacity (and powers) to do so remains unclear, and it would in any case lack independence.

15. Finally, it should be stressed that the current approach to transparency disclosure following the adoption of AI (ex post) can be problematic where the implementation of the AI is difficult to undo and/or the effects of malicious or risky AI are high stakes or impossible to revert. It is also problematic in that the current approach places the burden of scrutiny and accountability outside the public sector, rather than establishing internal, preventative (ex ante) controls on the deployment of AI technologies that could potentially be very harmful for fundamental and individual socio-economic rights—as evidenced by the inclusion of some fields of application of AI in the public sector as ‘high risk’ in the EU’s proposed EU AI Act.[29] Given the particular risks that AI deployment in the public sector poses to fundamental and individual rights, the minimalistic and reactive approach outlined in the AI Regulation Policy Paper is inadequate.

E. Conclusion: An Alternative Approach

16. Ensuring that the adoption of AI in the public sector operates in the public interest and for the benefit of all citizens will require new legislation supported by a new mechanism of external oversight and enforcement. New legislation is required to impose specific minimum requirements of eg data governance and algorithmic impact assessment and related transparency across the public sector. Such legislation would then need to be developed in statutory guidance of a much more detailed and actionable nature than the current Guidelines. These developed requirements can then be embedded into public contracts by reference. Without such clarification of the relevant substantive obligations, the approach to ‘AI regulation by contract’ can hardly be effective other than in exceptional cases.

17. Legislation would also be necessary to create an independent authority—eg an ‘AI in the Public Sector Authority’ (AIPSA)—with powers to enforce those minimum requirements across the public sector. AIPSA is necessary, as oversight of the use of AI in the public sector does not currently fall within the scope of any specific sectoral regulator and the general regulators (such as the Information Commissioner’s Office) lack procurement-specific knowledge. Moreover, units within Cabinet Office (such as the Office for AI or the Central Digital and Data Office) lack the required independence.

18. It would also be necessary to develop a clear and sustainably funded strategy to build in-house capability in the public sector, including clear policies on the minimisation of expenditure directed at the engagement of external consultants and the development of guidance on how to ensure the capture and retention of the knowledge developed within outsourced projects (including, but not only, through detailed technical documentation).

19. Until sufficient in-house capability is built to ensure adequate understanding and ability to manage digital procurement governance requirements independently, the current reactive approach should be abandoned, and AIPSA should have to approve all projects to develop, procure and deploy AI in the public sector to ensure that they meet the required legislative safeguards in terms of data governance, impact assessment, etc. This approach could progressively be relaxed through eg block exemption mechanisms, once there is sufficiently detailed understanding and guidance on specific AI use cases and/or in relation to public sector entities that could demonstrate sufficient in-house capability, eg through a mechanism of independent certification.

20. The new legislation and statutory guidance would need to be self-standing, as the Procurement Bill would not provide the required governance improvements. First, the Procurement Bill pays limited to no attention to artificial intelligence and the digitalisation of procurement.[30] An amendment (46) that would have created minimum requirements on automated decision-making and data ethics was not moved at the Lords Committee stage, and it seems unlikely to be taken up again at later stages of the legislative process. Second, even if the Procurement Bill created minimum substantive requirements, it would lack adequate enforcement mechanisms, not least due to the limited powers and lack of independence of the foreseen Procurement Review Unit (to also sit within Cabinet Office).

_______________________________________
Note: all websites last accessed on 25 October 2022.

[1] Department for Digital, Culture, Media and Sport, Establishing a pro-innovation approach to regulating AI. An overview of the UK’s emerging approach (CP 728, 2022).

[2] Ada Lovelace Institute, AI Now Institute and Open Government Partnership, Algorithmic Accountability for the Public Sector (August 2021) 33.

[3] Committee on Standards in Public Life, Intelligence and Public Standards (2020) 51.

[4] Department for Digital, Culture, Media and Sport, National AI Strategy (CP 525, 2021) 47.

[5] AI Dynamic Purchasing System < https://www.crowncommercial.gov.uk/agreements/RM6200 >.

[6] Office for Artificial Intelligence, Guidelines for AI Procurement (2020) < https://www.gov.uk/government/publications/guidelines-for-ai-procurement/guidelines-for-ai-procurement >.

[7] Central Digital and Data Office, Data Ethics Framework (Guidance) (2020) < https://www.gov.uk/government/publications/data-ethics-framework >.

[8] Central Digital and Data Office, A guide to using artificial intelligence in the public sector (2019) < https://www.gov.uk/government/collections/a-guide-to-using-artificial-intelligence-in-the-public-sector >.

[9] See eg < https://datahazards.com/index.html >.

[10] Ada Lovelace Institute, Algorithmic impact assessment: a case study in healthcare (2022) < https://www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/ >.

[11] A Sanchez-Graells, ‘Algorithmic Transparency: Some Thoughts On UK's First Four Published Disclosures and the Standards’ Usability’ (2022) < https://www.howtocrackanut.com/blog/2022/7/11/algorithmic-transparency-some-thoughts-on-uk-first-disclosures-and-usability >.

[12] A Sanchez-Graells, ‘“Experimental” WEF/UK Guidelines for AI Procurement: Some Comments’ (2019) < https://www.howtocrackanut.com/blog/2019/9/25/wef-guidelines-for-ai-procurement-and-uk-pilot-some-comments >.

[13] See eg Public Accounts Committee, Challenges in implementing digital change (HC 2021-22, 637).

[14] S Klovig Skelton, ‘Public sector aims to close digital skills gap with private sector’ (Computer Weekly, 4 Oct 2022) < https://www.computerweekly.com/news/252525692/Public-sector-aims-to-close-digital-skills-gap-with-private-sector >.

[15] It is a dynamic purchasing system, or a list of pre-screened potential vendors public buyers can use to carry out their own simplified mini-competitions for the award of AI-related contracts.

[16] Above (n 5).

[17] This contrasts with eg the EU project to develop standard contractual clauses for the procurement of AI by public organisations. See < https://living-in.eu/groups/solutions/ai-procurement >.

[18] Centre for Data Ethics and Innovation, Review into bias in algorithmic decision-making (2020) < https://www.gov.uk/government/publications/cdei-publishes-review-into-bias-in-algorithmic-decision-making/main-report-cdei-review-into-bias-in-algorithmic-decision-making >.

[19] V Weghmann and K Sankey, Hollowed out: The growing impact of consultancies in public administrations (2022) < https://www.epsu.org/sites/default/files/article/files/EPSU%20Report%20Outsourcing%20state_EN.pdf >.

[20] A Sanchez-Graells, ‘Identifying Emerging Risks in Digital Procurement Governance’ in idem, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming) < https://ssrn.com/abstract=4254931 >.

[21] M E Nielsen and C Østergaard Madsen, ‘Stakeholder influence on technical debt management in the public sector: An embedded case study’ (2022) 39 Government Information Quarterly 101706.

[22] See eg Kevin C Desouza, ‘Artificial Intelligence in the Public Sector: A Maturity Model’ (2021) IBM Centre for the Business of Government < https://www.businessofgovernment.org/report/artificial-intelligence-public-sector-maturity-model >.

[23] A Clarke and S Boots, A Guide to Reforming Information Technology Procurement in the Government of Canada (2022) < https://govcanadacontracts.ca/it-procurement-guide/ >.

[24] < https://ted.europa.eu/udl?uri=TED:NOTICE:600328-2019:HTML:EN:HTML&tabId=1&tabLang=en >.

[25] < https://ted.europa.eu/udl?uri=TED:NOTICE:373610-2020:HTML:EN:HTML&tabId=1&tabLang=en >.

[26] See S Boots, ‘“Charbonneau Loops” and government IT contracting’ (2022) < https://sboots.ca/2022/10/12/charbonneau-loops-and-government-it-contracting/ >.

[27] Central Digital and Data Office, Algorithmic Transparency Standard (2022) < https://www.gov.uk/government/collections/algorithmic-transparency-standard >.

[28] Eg in the context of financial markets, there have been notorious ongoing problems with ensuring adequate quality in corporate and investor disclosures.

[29] < https://artificialintelligenceact.eu/ >.

[30] P Telles, ‘The lack of automation ideas in the UK Gov Green Paper on procurement reform’ (2021) < http://www.telles.eu/blog/2021/1/13/the-lack-of-automation-ideas-in-the-uk-gov-green-paper-on-procurement-reform >.

Some quick thoughts on NHS’s recommendations to Government and Parliament for an NHS Bill


On 26 September 2019, NHS England and NHS Improvement Strategy and Innovation Directorate published the "NHS’s recommendations to Government and Parliament for an NHS Bill" supporting the NHS Long-term Plan. This is a document that provides additional details on the initial proposals of 28 February 2019, after the results of a public consultation have been taken into account.

Having read and mulled it over, I think a specific passage of para 96 evidences two major misunderstandings underpinning the approach adopted by NHS England and NHS Improvement.


First, there is an improper characterisation of the rules in the Public Contracts Regulations 2015 as exceedingly rigid and as preventing the procurement of NHS services on the basis of quality and patient experience considerations over price or cost. This flies in the face of reg 67 PCR2015, which explicitly allows for trade-offs between price/cost and quality considerations in the award of *any type* of public contract, as the contracting authority is free to determine what is best value / most economically advantageous. It also ignores, i.a., the special award criteria for healthcare and other social services in reg 76 PCR2015 and the extra flexibility these create, as per the Crown Commercial Service’s guidance and academic commentary, such as Pedro Telles’ and mine.

Second, the subjection of NHS services procurement to the PCR2015 rules is attributed to EU law. However, this ignores the UK’s very significant unilateral discretion to structure NHS governance in a manner that would not trigger those rules. This includes the space for in-house and public-public cooperation under Directive 2014/24/EU, as well as the possibility of creating voucher systems underpinning patient choice in a manner that would exclude the procurement rules (under Falk Pharma/Tirkkonen, see here).

Ultimately, the Sept 2019 proposals as a whole continue to ignore the origin and implications of the UK’s domestic choice of structuring NHS governance around an ‘NHS internal market’, and solely seek to de-regulate rather than de-marketise the NHS. The same issues I raised in written evidence to the House of Commons Health and Social Care Committee regarding the previous iteration of proposals by NHS England and NHS Improvement remain relevant.

In my opinion, they should be taken into due consideration in the context of scrutinising any future NHS Bill. After all, the new proposals have cherry-picked from the Health and Social Care Committee's report and ignored crucial parts of its recommendations [2] and [7] (see here for more details).


Failing to explore all possibilities under current rules (including under EU law) and pushing for the mere de-regulation of the NHS could have severe negative impacts on efficiency and oversight of NHS expenditure. I submit that it would not be in the public interest.

The UK Parliament must force the UK Government to understand the Brexit game before it keeps playing

In terms of Brexit, the week ahead promises to bring new meaning to the ides of March. As clearly explained in last Friday's Commons Library Brexit Briefing, the UK Parliament, and in particular the House of Commons, is faced with a complex set of votes. They have to decide whether to uphold any of the amendments to the European Union (Notification of Withdrawal) Bill (ie the "Brexit Bill") introduced by the House of Lords, which concern (a) the status of EU/EEA citizens in the UK, and (b) the legal enshrinement of a ‘meaningful’ parliamentary vote at the end of the negotiation period. The House of Commons can decide to accept either of these amendments, or reject them and put pressure on the House of Lords to backtrack and provide the Government with the "no strings attached" authorisation to keep playing Brexit that David Davis MP has so vocally demanded this weekend.

These are two highly politically charged (and poisonous) issues. They are also highly complex from a legal perspective. More importantly, it must be stressed that they are also very different in nature. The issue of the status of EU/EEA nationals in the UK and of UK nationals in the EU/EEA constitutes a known unknown whose content is undiscoverable -- because it ultimately depends on future negotiations and, in the absence of explicit political compromises, its legal resolution will depend to a large extent on the ECJ's use of the principle of legitimate expectations in what promises to be protracted and difficult litigation down the line. By contrast, the discussion on the possibility of creating a mechanism for 'meaningful' parliamentary decisions after Article 50 TEU has been triggered and, more generally, on whether Parliament can at any later point in time stop or defer the Brexit decision is a known unknown that is, however, discoverable.

The right time and occasion for such discovery was the Miller litigation before the Supreme Court. However, due to the UK Supreme Court's illegal failure to seek clarification on the implications and (ir)revocability of a notice under Article 50 TEU, this known unknown remains undiscovered. Given this avoidable uncertainty, it is painfully obvious that the debate being had at the UK Parliament is built on no legal foundation whatsoever. Indeed, as the Commons Library put it,

Underlying the whole debate is the unanswered question of whether a withdrawal notification can be suspended or revoked. Although there is a widespread assumption that it cannot, no court has ruled on this and there is considerable opinion that notification could in fact be revoked. The effects of a [parliamentary] vote against a withdrawal agreement (or against leaving without an agreement) would be completely different depending on the answer.

In simple terms, the UK Parliament is now faced with a skewed and asymmetric choice between two options of different legal weight and plausibility which, more importantly, carry very different risks for the long-term interests of the UK and its citizens. On the one hand, assuming irrevocability of an Art 50 TEU notification is a conservative approach to this protracted issue and works as the worst-case scenario: it requires Parliament to be ready to approve the Brexit Bill on the basis that the Government's notification to the EU Council carries the (accepted) risk of the UK leaving the EU in two years' time without a deal. This is indeed a realistic scenario, as stressed today in the Commons Select Committee on Foreign Affairs' report "Article 50 negotiations: Implications of 'No Deal'". A vote to pass the Brexit Bill explicitly on these terms seems unlikely, because MPs can hardly be expected to tell UK citizens that they support Brexit at any cost. However, this is what they would likely be doing, in particular if they passed the Brexit Bill without the House of Lords amendment (b above).

On the other hand, assuming revocability of an Art 50 TEU notification is a legally very risky strategy that works as a best-case scenario, which would allow Parliament to approve the Brexit Bill (with or without the House of Lords amendment) in the hope that it can prevent a calamitous hard Brexit (ie Brexit with no deal) or even a deleterious soft Brexit (ie Brexit with a bad deal) in the future. The problem with this scenario is that it is exceedingly risky and would create a smoke screen covering the implications of giving an irrevocable notification at this point in time. Moreover, it relies on a moving legal construction that rests either on the Art 50(2) TEU notification being strictly revocable, or on a dynamic understanding of what 'own constitutional requirements' means in Art 50(1) TEU -- to the effect that, as suggested by the now famous "Three Knights Opinion", a conditional notification requiring a further vote in the UK Parliament can be given, even if the condition is not explicitly stated in the notification.

In my view, there are now two options for the UK Parliament to seek to pursue this best-case scenario. The first option encompasses a strategy aimed at making it impossible for the UK Government to continue playing Brexit without clarifying whether a scenario where the UK Parliament can have a 'meaningful' vote down the line actually exists, or whether it is just normatively-biased wishful legal thinking. In short, to this effect, the UK Parliament needs to approve the Brexit Bill in a way that imposes an obligation on Theresa May PM's Government to notify the EU Council that a decision to withdraw from the EU has been adopted in principle, but that such decision remains conditional on the UK Parliament's confirmation once the terms of the deal reached at the end of the two-year period (or earlier) are settled.

This would, under the duty of sincere cooperation, not only make it possible but, in my view, require the EU Council to ask the ECJ whether such a notification, seemingly in compliance with the UK's own constitutional requirements, is a valid notification for the purposes of Art 50 TEU, and whether its conditionality binds the EU Institutions and Member States. Rather than hoping for the best in the Irish litigation where Jolyon Maugham QC is trying to achieve this certainty, the route I have just sketched would be the quickest and surest avenue to (finally) obtain a decision from the ECJ settling the issue once and for all.

The second option is for the UK Parliament to cave in to the existing pressure and authorise the UK Government to give notice unconditionally -- that is, notably, without keeping the amendment introduced by the House of Lords -- and then hope that it got it right in assuming that the best-case scenario was actually in the cards. In my opinion, no responsible member of the UK Parliament (and in particular of the House of Commons) should gamble the long-term interests of the UK and its citizens on such optimistic hopes, particularly when there is a way to clear up this uncertainty before it is too late and the process set in motion by an Art 50 TEU notification can no longer be legally stopped (under EU law, which is a major risk currently very difficult to assess).

Of course, there would be some short-term political cost if the UK Parliament decided to try out the strategy I am proposing. It could be seen as a waste of time if the ECJ's decision on the EU Council's request were to determine that an Art 50 notification can be conditional or revocable. It could also be seen as highly problematic if the ECJ decided the opposite and, after all, the UK Parliament was faced later with the same odious decision that the worst-case scenario implies. However, unless the UK Parliament is willing to crash and burn in the worst-case scenario, there is value in making the consequences of an irrevocable notification as clear as possible to UK politicians and UK citizens alike. Currently, democratic processes are skewed and distorted by an avoidable legal uncertainty. In my view, it is neither wise nor legitimate to put pressure on the House of Commons (or, later, on the House of Lords) to ignore this very significant risk solely in the pursuit of preserving short-term political capital that Theresa May PM and her Government seem too willing to keep for themselves.