‘Experimental’ WEF/UK Guidelines for AI Procurement: some comments

ⓒ Scott Richard, Liquid painting (2015).


On 20 September 2019, and as part of its ‘Unlocking Public Sector Artificial Intelligence’ project, the World Economic Forum (WEF) published the White Paper Guidelines for AI Procurement (see also press release), with which it seeks to help governments accelerate efficiencies through responsible use of artificial intelligence and prepare for future risks. WEF indicated that over the next six months, governments around the world will test and pilot these guidelines (for now, there are indications of adoption in the UK, the United Arab Emirates and Colombia), and that further iterations will be published based on feedback learned on the ground.

Building on previous work on the Data Ethics Framework and the Guide to using AI in the Public Sector, the UK’s Office for Artificial Intelligence has decided to adopt its own draft version of the Guidelines for AI Procurement with substantially the same content, but with modified language and a narrower scope of some principles, in order to link them to the UK’s legislative and regulatory framework (and, in particular, the Data Ethics Framework). The UK will be the first country to trial the guidelines in pilot projects across several departments. The UK Government hopes that the new Guidelines for AI Procurement will help inform and empower buyers in the public sector, helping them to evaluate suppliers, then confidently and responsibly procure AI technologies for the benefit of citizens.

In this post, I offer some first thoughts about the Guidelines for AI Procurement, based on the WEF’s version, which is helpfully summarised in the table below.

Source: WEF, White Paper: ‘Guidelines for AI Procurement’ at 6.


Some Comments

Generally, it is worth being mindful that the ‘guidelines provide fundamental considerations that a government should address before acquiring and deploying AI solutions and services. They apply once it has been determined that the solution needed for a problem could be AI-driven’ (emphasis in original). As the UK’s version usefully stresses, many of the important decisions take place at the preparation and planning stages, before publishing a contract notice. Therefore, more than guidance for AI procurement, this is guidance on the design of a framework for the governance of innovative digital technologies procurement, including AI (but easily extendable to eg blockchain-based solutions), which will still require a second tier of (future/additional) guidance on the implementation of procurement procedures for the acquisition of AI-based solutions.

It is also worth stressing from the outset that the guidelines assume both the availability and a deep understanding by the contracting authority of the data that can be used to train and deploy the AI solutions, which is perhaps not fully reflective of the existing difficulties concerning the availability and quality of procurement data, and public sector data more generally [for discussion, see A Sanchez-Graells, 'Data-Driven and Digital Procurement Governance: Revisiting Two Well-Known Elephant Tales' (2019) Communications Law, forthcoming]. Where such knowledge is not readily available, it seems likely that the contracting authority may require the prior engagement of data consultants that could carry out an assessment of the data that is or could be available and its potential uses. This creates the need to roll back some of the considerations included in the guidelines to that earlier stage, much along the lines of the issues concerning preliminary market consultations and the neutralisation of any advantages or conflicts of interest of undertakings involved in pre-tender discussions, which are also common issues with non-AI procurement of innovation. This can be rather tricky, in particular if there is a significant imbalance in expertise around data science and/or a shortfall in those skills in the contracting authority. Therefore, perhaps as a prior recommendation (or an expansion of guideline 7), it may be worth bearing in mind that the public sector needs to invest significant resources in hiring and retaining the necessary in-house capacities before engaging in the acquisition of complex (digital) technologies.

1. Use procurement processes that focus not on prescribing a specific solution, but rather on outlining problems and opportunities and allow room for iteration.

The fit of this recommendation with the existing regulation of procurement procedures seems to point towards either innovation partnerships (for new solutions) or dynamic purchasing systems (for existing or relatively off-the-shelf solutions). The reference to dynamic purchasing systems is slightly odd here, as solutions are unlikely to be susceptible of automatic deployment in any given context.

Moreover, this may not necessarily be the only possible approach under EU law and there seems to be significant scope to channel technology contests under the rules for design contests (Arts 78 and ff of Directive 2014/24/EU). The limited appetite of innovative start-ups for procurement systems that do not provide them with ‘market exposure’ (such as large framework agreements, but likely also dynamic purchasing systems) may be relevant, depending on market conditions (see eg PUBLIC, Buying into the Future. How to Deliver Innovation through Public Procurement (2019) 23). This could create opportunities for broader calls for technological innovation, perhaps as a phase prior to conducting a more structured (and expensive) procurement procedure for an innovation partnership.

All in all, it would seem like—at least at UK level, or in any other jurisdictions seeking to pilot the guidance—it could be advisable to design a standard procurement procedure for AI-related market engagement, in order to avoid each willing contracting authority having to reinvent the wheel.

2. Define the public benefit of using AI while assessing risks.

Like with many other aspects of the guidelines, one of the difficulties here is to try to establish actionable measures to deal with ‘unknown unknowns’ that may emerge only in the implementation phase, or well into the deployment of the solution. It would be naive to assume that the contracting authority—or the potential tenderers—can anticipate all possible risks and design adequate mitigating strategies. It would thus perhaps be wise to recommend the use of AI solutions for public sector / public service use cases that have a limited impact on individual rights, as a way to gain much necessary expertise and know-how before proceeding to deployment in more sensitive areas.

Moreover, this is perhaps the recommendation that is most difficult to operationalise in procurement terms (under the EU rules), as the consideration of ‘public benefit’ seems to be a matter for the contracting authority’s sole assessment, which could eventually lead to a cancellation—with or without retendering—of the procurement. It is difficult to see how to design evaluation tools (in terms of both technical specifications and award criteria) capable of capturing the insight that ‘public benefit extends beyond value for money and also includes considerations about transparency of the decision-making process and other factors that are included in these guidelines’. This should thus likely be built into the procurement process through opportunities for the contracting authority to discontinue the project (with no or limited compensation), which also points towards the structure of the innovation partnership as the regulated procedure most likely to fit.

3. Aim to include your procurement within a strategy for AI adoption across government and learn from others.

This is mainly aimed at ensuring cross-sharing of experiences and at concentrating the need for specific AI-based solutions, which makes sense. The difficulty will be in the practical implementation of this in a quickly-changing setting, which could be facilitated by the creation of a mandatory (not necessarily public) centralised register of AI-based projects, as well as the consideration of the creation and mandatory involvement of a specialised administrative unit. This would be linked to the general comment on the need to invest in skills, but could alleviate the financial impact by making the resources available across Government rather than having each contracting authority create its own expert team.

4. Ensure that legislation and codes of practice are incorporated in the RFP.

Both aspects of this guideline are problematic to a lawyer’s eyes. It is not legal imperialism to insist that there must be more general mechanisms to ensure that procurement procedures (not only those for digital technologies) are fully legally compliant.

The recommendation to carry out a comprehensive review of the legal system to identify all applicable rules and then ‘Incorporate those rules and norms into the RFP by referring to the originating laws and regulations’ does not make a lot of sense, since the inclusion or not in the RFP does not affect the enforceability of those rules, and given the practical impossibility for a contracting authority to assess the entirety of rules applicable to different tenderers, in particular if they are based in other jurisdictions. It would also create all sorts of problems in terms of potential claims of legitimate expectations by tenderers. Moreover, under EU law, there is case law (such as Pizzo and Connexxion Taxi Services) that creates conflicting incentives for the inclusion of specific references to rules and their interpretation in tender documents.

The recommendation on balancing trade secret protection and public interest, including data privacy compliance, is just insufficient and falls well short of the challenge of addressing these complex issues. The tension between general duties of administrative law and the opacity of algorithms (in particular where they are protected by IP or trade secrets protections) is one of the most heated ongoing debates in legal and governance scholarship. It also overlooks the need to distinguish between the different rules applicable to the data and to the algorithms, as well as the paramount relevance of the General Data Protection Regulation in this context (at least where EU data is concerned).

5. Articulate the technical feasibility and governance considerations of obtaining relevant data.

This is, in my view, the strongest part of the guidelines. The stress on the need to ensure access to data as a pre-requisite for any AI project and the emphasis and detail put in the design of the relevant data governance structure ahead of the procurement could not be clearer. The difficulty, however, will be in getting most contracting authorities to this level of data-readiness. As mentioned above, the guidelines assume a level of competence that seems too advanced for most contracting authorities potentially interested in carrying out AI-based projects, or that could benefit from them.

6. Highlight the technical and ethical limitations of using the data to avoid issues such as bias.

This guideline is also premised on advanced knowledge and understanding of the data by the contracting authority, and thus creates the same challenges (as further discussed below).

7. Work with a diverse, multidisciplinary team.

Once again, this will be expensive and create some organisational challenges (as also discussed below).

8. Focus throughout the procurement process on mechanisms of accountability and transparency norms.

This is another rather naive and limited aspect of the guidelines, in particular the final point that ‘If an algorithm will be making decisions that affect people’s rights and public benefits, describe how the administrative process would preserve due process by enabling the contestability of automated decision-making in those circumstances.’ This is another of the hotly-debated issues surrounding the deployment of AI in the public sector and it seems unlikely that a contracting authority will be able to provide the necessary answers to issues that are yet to be determined—eg the difficult interpretive issues surrounding solely automated processing of personal data under the General Data Protection Regulation, as discussed in eg M Finck, ‘Automated Decision-Making and Administrative Law’ (2019) Max Planck Institute for Innovation and Competition Research Paper No. 19-10.

9. Implement a process for the continued engagement of the AI provider with the acquiring entity for knowledge transfer and long-term risk assessment.

This is another area of general strength in the guidelines, which under EU procurement law should be channelled through stringent contract performance conditions (Art 70 Directive 2014/24/EU) or, perhaps even better, by creating secondary regulation on mandatory on-going support and knowledge transfer for all AI-based implementations in the public sector.

The only aspect of this guideline that is problematic concerns the mention that, in relation to ethical considerations, ‘Bidders should be able not only to describe their approach to the above, but also to provide examples of projects, complete with client references, where these considerations have been followed.’ This would clearly be a problem for new entrants, as well as generate rather significant first-mover advantages for undertakings with prior experience (likely in the private sector). In my view, this should be removed from the guidelines.

10. Create the conditions for a level and fair playing field among AI solution providers.

This section includes significant challenges concerning issues related to the ownership of IP on AI-based solutions. Most of the recommendations seem rather complicated to implement in practice, such as the reference to the need to ‘Consider strategies to avoid vendor lock-in, particularly in relation to black-box algorithms. These practices could involve the use of open standards, royalty-free licensing and public domain publication terms’, or to ‘consider whether [the] department should own that IP and how it would control it [in particular in the context of evolution or new design of the algorithms]. The arrangements should be mutually beneficial and fair, and require royalty-free licensing when adopting a system that includes IP controlled by a vendor’. These are also extremely complex and debated issues and, once again, it seems unlikely that a contracting authority will be able to provide all relevant answers.

Overall assessment

The main strength of the guidelines lies in its recommendations concerning the evaluation of data availability and quality, as well as the need to create robust data governance frameworks and the need to have a deep insight into data limitations and biases (guidelines 5 and 6). There are also some useful, although rather self-explanatory reminders of basic planning issues concerning the need to ensure the relevant skillset and the unavoidable multidisciplinarity of teams working in AI (guidelines 3 and 7). Similarly, the guidelines provide some very high-level indications on how to structure the procurement process (guidelines 1, 2 and 9), which will however require much more detailed (future/additional) guidance before they can be implemented by a contracting authority.

However, in all other aspects, the guidelines work as an issue-spotting instrument rather than as a guidance tool. This is clearly the case concerning the tensions between data privacy, good administration and proprietary protection of the IP and trade secrets underlying AI-based solutions (guidelines 4, 8 and 10). In my view, rather than taking the naive—and potentially misleading—approach of indicating the issues that contracting authorities need to address (in the RFP, or elsewhere) as if they were currently (easily, or at all) addressable at that level of administrative practice, the guidelines should provide sufficiently precise and goal-oriented recommendations on how to do so if they are to be useful. This is not an easy task and much more work seems necessary before the document can provide useful support to contracting authorities seeking to implement procedures for the procurement of AI-based solutions. I thus wonder how much learning the guidelines can generate in the pilots to be conducted in the UK and elsewhere. For now, I would recommend that other governments wait and see before ‘adopting’ the guidelines or treating them as a useful policy tool, in particular if doing so discouraged them from carrying out their own efforts to develop actionable guidance on how to procure AI-based solutions.

Finally, it does not take much reading between the lines to realise that the challenges of developing an enabling data architecture and upskilling the public sector (not solely the procurement workforce, and perhaps through specialised units, as a first step) so that it is able to identify the potential for AI-based solutions and to adequately govern their design and implementation remain as very likely stumbling blocks in the road towards deployment of public sector AI. In that regard, general initiatives concerning the availability of quality procurement data and the necessary reform of public procurement teams to fill the data science and programming gaps that currently exist should remain the priority—at least in the EU, as discussed in A Sanchez-Graells, EU Public Procurement Policy and the Fourth Industrial Revolution: Pushing and Pulling as One? (2019) SSRN working paper, and in idem, 'Some public procurement challenges in supporting and delivering smart urban mobility: procurement data, discretion and expertise', in M Finck, M Lamping, V Moscon & H Richter (eds), Smart Urban Mobility – Law, Regulation, and Policy, MPI Studies on Intellectual Property and Competition Law (Berlin, Springer, 2020) forthcoming.

Legal text analytics: some thoughts on where (I think) things stand

Researching the area of artificial intelligence and the law (AI & Law) has currently taken me to the complexities of natural language processing (NLP) applied to legal texts (aka legal text analytics). Trying to understand the extent to which AI can be used to perform automated legal analysis—or, more modestly, to support humans in performing legal analysis—requires (at least) a view of the current possibilities for AI tools to (i) extract information from legal sources (or ‘understand’ them and their relationships), (ii) assess their relevance to a given legal problem and (iii) apply the legal source to provide a legal solution to the problem (or to suggest one for human validation).

Of course, this leaves aside other issues such as the need for AI to be able to understand the factual situation to formulate the relevant legal problem, to assess or rank different legal solutions where available, or take into account additional aspects such as the likelihood of obtaining a remedy, etc—all of which could be tackled by fields of AI & Law different from legal text analytics. The above also ignores other aspects of ‘understanding’ documents, such as the ability for an algorithm to distinguish factual and legal issues within a legal document (ie a judgment) or to extract basic descriptive information (eg being able to create a citation based on the information in the judgment, or to cluster different types of provisions within a contract or across contracts)—some of which seems to be at hand or soon to be developed on the basis of the recently released Google ‘Document Understanding AI’ tool.

The latest issue of Artificial Intelligence and the Law luckily concentrates on ‘Natural Language Processing for Legal Texts’ and offers some help in trying to understand where things currently stand regarding issues (i) and (ii) above. In this post, I offer some reflections based on my understanding of two of the papers included in the special issue: Nanda et al (2019) and Chalkidis & Kampas (2019). I may have gotten the specific technical details wrong (although I hope not), but I think I got the functional insights.

Establishing relationships between legal sources

One of the problems that legal text analytics is trying to solve concerns establishing relationships between different legal sources—which can be a partial aspect of the need to ‘understand’ them (issue (i) above). This is the main problem discussed in Nanda et al, 'Unsupervised and supervised text similarity systems for automated identification of national implementing measures of European directives' (2019) 27(2) Artificial Intelligence and Law 199-225. In this piece of research, AI is used to establish whether a provision of a national implementing measure (NIM) transposes a specific article of an EU Directive or not. In extremely simplified terms, the researchers train different algorithms to perform text comparison. The researchers work on a closed list of 43 EU Directives and the corresponding Luxembourgian, Irish and Italian NIMs. The following figure plots their results.

Nanda et al (2019: 208, Figure 6).

The figure shows that the best AI solution developed by the researchers (the TF-IDF cosine) achieves levels of precision of around 83% for Luxembourg, 77% for Italy and 68% for Ireland. These seem like rather impressive results but a qualitative analysis of their experiment indicates that the significantly better performance for Luxembourgian transposition over Italian or Irish transposition likely results from the fact that Luxembourg tends to largely ‘copy & paste’ EU Directives into national law, whereas the Italian and Irish legislators adopt a more complex approach to the integration of EU rules into their existing legal instruments.
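The intuition behind the TF-IDF cosine approach is easy to convey even without the authors’ actual implementation (which the paper does not reproduce): each provision is turned into a weighted bag-of-words vector, with weights discounting terms that appear everywhere, and pairs of provisions are scored by the angle between their vectors. The following is a purely illustrative sketch; the toy ‘provisions’ are my own invention, not drawn from Nanda et al’s corpus:

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors (as sparse dicts) for a list of text documents."""
    tokenised = [doc.lower().split() for doc in docs]
    n = len(tokenised)
    # document frequency: in how many documents does each term appear?
    df = Counter(term for doc in tokenised for term in set(doc))
    vectors = []
    for doc in tokenised:
        tf = Counter(doc)
        # term frequency weighted by inverse document frequency;
        # terms present in every document get weight 0
        vec = {term: (count / len(doc)) * math.log(n / df[term])
               for term, count in tf.items()}
        vectors.append(vec)
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse vectors represented as dicts."""
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Toy example: a directive article, a near-verbatim transposition and a loose one.
docs = [
    "member states shall ensure that contracting authorities publish contract notices",
    "contracting authorities shall publish contract notices",
    "the minister may by order make provision about publication requirements",
]
vecs = tfidf_vectors(docs)
```

On this toy data, the near-verbatim transposition scores well above the loose redrafting, which is exactly the mechanical reason why a ‘copy & paste’ legislator like Luxembourg is far easier for this kind of system to handle than the Italian or Irish drafting styles.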

Moreover, it should be noted that the algorithms are working on a very specific issue, as they are only assessing the correspondence between provisions of EU and NIM instruments that were related—that is, they are operating in a closed or walled dataset that does not include NIMs that do not transpose any of the 43 chosen Directives. Once these aspects of the research design are taken into account, there are a number of unanswered questions, such as the precision that the algorithms would have if they had to compare entire NIMs against an open-ended list of EU Directives, or if they were used to screen for transposition rules. While the first issue could probably be answered simply by extending the experiment, the second issue would probably require a different type of AI design.

On the whole, my impression after reading this interesting piece of research is that AI is still relatively far from a situation where it can provide reliable answers to the issue of establishing relationships across legal sources, particularly if one thinks of relatively more complex relationships than transposition within the EU context, such as development, modification or repeal of a given set of rules by other (potentially dispersed) rules.

Establishing relationships between legal problems and legal sources

A separate but related issue requires AI to identify legal sources that could be relevant to solve a specific legal problem (issue (ii) above)—that is, the relevant relationship is not across legal sources (as above), but between a legal problem or question and relevant legal sources.

This is covered in part of the literature review included in Chalkidis & Kampas, ‘Deep learning in law: early adaptation and legal word embeddings trained on large corpora’ (2019) 27(2) Artificial Intelligence and Law 171-198 (see esp 188-194), where they discuss some of the solutions given to the task of the Competition on Legal Information Extraction/Entailment (COLIEE) from 2014 to 2017, which focused ‘on two aspects related to a binary (yes/no) question answering as follows: Phase one of the legal question answering task involves reading a question Q and extract[ing] the legal articles of the Civil Code that are relevant to the question. In phase two the systems should return a yes or no answer if the retrieved articles from phase one entail or not the question Q’.
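To make the two-phase setup concrete, here is a deliberately naive sketch of such a pipeline. Everything in it is a toy stand-in of my own making—the three ‘Civil Code’ articles, the lexical-overlap retrieval and the keyword-subset ‘entailment’ test—and the COLIEE systems discussed in the paper use far more sophisticated methods; the sketch only shows the shape of the task:

```python
from collections import Counter

# A toy "civil code": article label -> text (entirely hypothetical provisions).
CIVIL_CODE = {
    "Art. 1": "a contract is formed when an offer is accepted",
    "Art. 2": "a minor may rescind a contract made without parental consent",
    "Art. 3": "ownership of land is transferred by registration",
}

def overlap_score(a, b):
    """Crude lexical relevance: number of word tokens shared between two texts."""
    return sum((Counter(a.lower().split()) & Counter(b.lower().split())).values())

def phase_one(question, code, top_k=1):
    """Phase one: retrieve the article(s) most lexically similar to the question."""
    ranked = sorted(code, key=lambda art: overlap_score(question, code[art]),
                    reverse=True)
    return ranked[:top_k]

def phase_two(question, article_text):
    """Phase two (toy 'entailment'): answer 'yes' only if every content word
    of the question appears in the retrieved article."""
    stop = {"a", "the", "is", "of", "by", "may", "when", "an"}
    q_words = {w for w in question.lower().split() if w not in stop}
    return "yes" if q_words <= set(article_text.lower().split()) else "no"
```

Even this caricature makes the key limitation visible: phase one only ever searches within the closed ‘code’ it is given, which is the walled-dataset point discussed below.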

The paper covers four different attempts at solving the task. It reports that the AI solutions developed to address the two binary questions achieved the following levels of precision: 66.67% (Morimoto et al. (2017)); 63.87% (Kim et al. (2015)); 57.6% (Do et al. (2017)); 53.8% (Nanda et al. (2017)). Once again, these results are rather impressive but some contextualisation may help to assess the extent to which this can be useful in legal practice.

The best AI solution was able to identify relevant provisions that entailed the relevant question 2 out of 3 times. However, the algorithms were once again working on a closed or walled field because they solely had to search for relevant provisions in the Civil Code. One can thus wonder whether algorithms confronted with the entirety of a legal order would be able to reach even close degrees of accuracy.

Some thoughts

Based on the current state of legal text analytics (as far as I can see it), it seems clear that AI is far from being able to perform independent/unsupervised legal analysis and provide automated solutions to legal problems (issue (iii) above) because there are still very significant shortcomings concerning issues of ‘understanding’ natural language legal texts (issue (i)) and adequately relating them to specific legal problems (issue (ii)). That should not be surprising.

However, what also seems clear is that AI is very far from being able to confront the vastness of a legal order and that, much as lawyers themselves, AI tools need to specialise and operate within the narrower boundaries of sub-domains or quite contained legal fields. When that is the case, AI can achieve much higher degrees of precision—see examples of information extraction precision above 90% in Chalkidis & Kampas (2019: 194-196) in projects concerning Chinese credit fraud judgments and Canadian immigration rules.

Therefore, the current state of legal text analytics seems to indicate that AI is (quickly?) reaching a point where algorithms can be used to extract legal information from natural language text sources within a specified legal field (which needs to be established through adequate supervision) in a way that allows them to provide fallible or incomplete lists of potentially relevant rules or materials for a given legal issue. However, this still requires legal experts to complement the relevant searches (to bridge any gaps) and to screen the proposed materials for actual relevance. In that regard, AI does hold the promise of much better results than previous expert systems and information retrieval systems and, where adequately trained, it can support and potentially improve legal research (ie cognitive computing, along the lines developed by Ashley (2017)). However, in my view, there are extremely limited prospects for ‘independent functionality’ of legaltech solutions. I would happily hear arguments to the contrary, though!

New paper: ‘Screening for Cartels’ in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold?

I have uploaded a new paper on SSRN, where I critically assess the bid rigging screening tool published by the UK’s Competition and Markets Authority in 2017. I will be presenting it in a few weeks at the V Annual meeting of the Spanish Academic Network for Competition Law. The abstract is as follows:

Despite growing global interest in the use of algorithmic behavioural screens, big data and machine learning to detect bid rigging in procurement markets, the UK’s Competition and Markets Authority (CMA) was under no obligation to undertake a project in this area, much less to publish a bid-rigging algorithmic screening tool and make it generally available. Yet, in 2017 and under self-imposed pressure, the CMA released ‘Screening for Cartels’ (SfC) as ‘a tool to help procurers screen their tender data for signs of illegal bid-rigging activity’ and has since been trying to raise its profile internationally. There is thus a possibility that the SfC tool is not only used by UK public buyers, but also disseminated and replicated in other jurisdictions seeking to implement ‘tried and tested’ solutions to screen for cartels. This paper argues that such a legal transplant would be undesirable.

In order to substantiate this main claim, and after critically assessing the tool, the paper tracks the origins of the indicators included in the SfC tool to show that its functionality is rather limited as compared with alternative models that were put to the CMA. The paper engages with the SfC tool’s creation process to show how it is the result of poor policy-making based on the material dismissal of the recommendations of the consultants involved in its development, and that this has resulted in the mere illusion that big data and algorithmic screens are being used to detect bid rigging in the UK. The paper also shows that, as a result of the ‘distributed model’ used by the CMA, the algorithms underlying the SfC tool cannot be improved through training; that the publication of the SfC tool lowers the likelihood of detecting some types of ‘easy to spot’ cases by signalling areas of ‘cartel sophistication’ that can bypass its tests; and that, on the whole, the tool is simply not fit for purpose. This situation is detrimental to the public interest because reliance on a defective screening tool can create a false perception of competition for public contracts, and because it leads to immobilism that delays (or prevents) a much-needed engagement with the extant difficulties in developing a suitable algorithmic screen based on proper big data analytics. The paper concludes that competition or procurement authorities willing to adopt the SfC tool would be buying fool's gold and that the CMA was wrong to cheat at solitaire to expedite the deployment of a faulty tool.
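For readers unfamiliar with behavioural screens, a flavour of what a classic indicator from the bid-rigging literature looks like can be given in a few lines. This is not one of the SfC tool’s actual tests (which the abstract does not reproduce), and the threshold is an arbitrary illustration rather than a CMA parameter; the point is simply that a screen flags statistical patterns—here, bids clustering suspiciously tightly, as cover bids often track the designated winner:

```python
import statistics

def coefficient_of_variation(bids):
    """Dispersion of bid values relative to their mean; unusually low dispersion
    across supposedly independent bids is a classic collusion marker."""
    return statistics.stdev(bids) / statistics.mean(bids)

def flag_low_variance(bids, threshold=0.05):
    """Flag a tender whose bids cluster within a narrow band around the mean."""
    return coefficient_of_variation(bids) < threshold
```

Real screens combine many such indicators across prices, bidders and tenders—which is precisely the terrain on which the paper argues the SfC tool falls short of the alternative models that were put to the CMA.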

The full citation of the paper is: Sanchez-Graells, Albert, ‘Screening for Cartels’ in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold? (May 3, 2019). Available at SSRN: https://ssrn.com/abstract=3382270

An incomplete overview of (the promises of) GovTech: some thoughts on Engin & Treleaven (2019)

I have just read the interesting paper by Z Engin & P Treleaven, 'Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies' (2019) 62(3) The Computer Journal 448–460, https://doi.org/10.1093/comjnl/bxy082 (available on open access). The paper offers a very useful, but somewhat inaccurate and slightly incomplete, overview of data science automation being deployed by governments world-wide (ie GovTech), including the technologies of artificial intelligence (AI), Internet of Things (IoT), big data, behavioral/predictive analytics, and blockchain. I found their taxonomy of GovTech services particularly thought-provoking.

Source: Engin & Treleaven (2019: 449).


In the eyes of a lawyer, the use of the word ‘Government’ to describe all these activities is odd, in particular concerning the category ‘Statutes and Compliance’ (at least on the Statutes part). Moving past that conceptual issue—which reminds us once more of the need for more collaboration between computer scientists and social scientists, including lawyers—the taxonomy still seems difficult to square with an analysis of the use of GovTech for public procurement governance and practice. While some of its aspects could be subsumed as tools to ‘Support Civil Servants’ or under ‘National Public Records’, the transactional aspects of public procurement and the interaction with public contractors seem more difficult to place in this taxonomy (even if the category of ‘National Physical Infrastructure’ is considered). Therefore, either additional categories or more granularity is needed in order to have a more complete view of the type of interactions between technology and public sector activity (broadly defined).

The paper is also very limited regarding LawTech, as it primarily concentrates on online dispute resolution (ODR) mechanisms, which is only a relatively small aspect of the potential impact of data science automation on the practice of law. In that regard, I would recommend reading the (more complex, but very useful) book by K D Ashley, Artificial Intelligence and Legal Analytics. New Tools for Law Practice in the Digital Age (Cambridge, CUP, 2017).

I would thus recommend reading Engin & Treleaven (2019) with an open mind, and using it more as a collection of examples than a closed taxonomy.

Procurement governance and complex technologies: a promising future?

Thanks to the UK’s Procurement Lawyers’ Association (PLA) and in particular Totis Kotsonis, on Wednesday 6 March 2019, I will have the opportunity to present some of my initial thoughts on the potential impact of complex technologies on procurement governance.

In the presentation, I will aim to critically assess the impacts that complex technologies such as blockchain (or smart contracts), artificial intelligence (including big data) and the internet of things could have for public procurement governance and oversight. Taking the main risks of maladministration of the procurement function (corruption, discrimination and inefficiency) on which procurement law is based as the analytical point of departure, the talk will explore the potential improvements of governance that different complex technologies could bring, as well as any new governance risks that they could also generate.

The slides I will use are at the end of this post. Unfortunately, the hyperlinks do not work, so please email me if you are interested in a fully-accessible presentation format (a.sanchez-graells@bristol.ac.uk).

The event is open to non-PLA members. So if you are in London and fancy joining the conversation, please register following the instructions in the PLA’s event page.