Meaning, AI, and procurement -- some thoughts

©Ausrine Kuze, Distorted Reality, 2021.

James McKinney and Volodymyr Tarnay of the Open Contracting Partnership have published ‘A gentle introduction to applying AI in procurement’. It is a very accessible and helpful primer on some of the most salient issues to be considered when exploring the possibility of using AI to extract insights from procurement big data.

The OCP introduction to AI in procurement provides helpful pointers in relation to task identification, method, input, and model selection. I would add that an initial exploration of the possibility to deploy AI also (and perhaps first and foremost) requires careful consideration of the level of precision and the type (and size) of errors that can be tolerated in the specific task, and of ways to test and measure them.

One of the crucial and perhaps more difficult to understand issues covered by the introduction is how AI seeks to capture ‘meaning’ in order to extract insights from big data. This is also a controversial issue that keeps coming up in procurement data analysis contexts, and one that triggered some heated debate at the Public Procurement Data Superpowers Conference last week—where, in my view, companies selling procurement insight services were peddling hyped claims (see session on ‘Transparency in public procurement - Data readability’).

In this post, I venture some thoughts on meaning, AI, and public procurement big data. As always, I am very interested in feedback and opportunities for further discussion.

Meaning

Of course, the concept of meaning is complex and open to philosophical, linguistic, and other interpretations. Here I take a relatively pedestrian and pragmatic approach and, following the Cambridge dictionary, consider two ways in which ‘meaning’ is understood in plain English: ‘the meaning of something is what it expresses or represents’, and meaning as ‘importance or value’.

To put it simply, I will argue that AI cannot capture meaning proper. It can carry out complex analysis of ‘content in context’, but we should not equate that with meaning. This will be important later on.

AI, meaning, embeddings, and ‘content in context’

The OCP introduction helpfully addresses this issue in relation to an example of ‘sentence similarity’, where the researchers are looking for phrases that are alike in tender notices and predefined green criteria, and therefore want to use AI to compare sentences and assign them a similarity score. Intuitively, ‘meaning’ would be important to the comparison.

The OCP introduction explains that:

Computers don’t understand human language. They need to operate on numbers. We can represent text and other information as numerical values with vector embeddings. A vector is a list of numbers that, in the context of AI, helps us express the meaning of information and its relationship to other information.

Text can be converted into vectors using a model. [A sentence transformer model] converts a sentence into a vector of 384 numbers. For example, the sentence “don’t panic and always carry a towel” becomes the numbers 0.425…, 0.385…, 0.072…, and so on.

These numbers represent the meaning of the sentence.

Let’s compare this sentence to another: “keep calm and never forget your towel” which has the vector (0.434…, 0.264…, 0.123…, …).

One way to determine their similarity score is to use cosine similarity to calculate the distance between the vectors of the two sentences. Put simply, the closer the vectors are, the more alike the sentences are. The result of this calculation will always be a number from -1 (the sentences have opposite meanings) to 1 (same meaning). You could also calculate this using other trigonometric measures such as Euclidean distance.

For our two sentences above, performing this mathematical operation returns a similarity score of 0.869.

Now let’s consider the sentence “do you like cheese?” which has the vector (-0.167…, -0.557…, 0.066…, …). It returns a similarity score of 0.199. Hooray! The computer is correct!

But, this method is not fool-proof. Let’s try another: “do panic and never bring a towel” (0.589…, 0.255…, 0.0884…, …). The similarity score is 0.857. The score is high, because the words are similar… but the logic is opposite!
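The cosine similarity calculation described in the quoted passage can be sketched in a few lines. The vectors below are only the truncated three-number prefixes quoted above, standing in for the full 384-dimensional embeddings, so the resulting scores will not match the article's figures; the point is simply to show the arithmetic.

```python
import math

def cosine_similarity(a, b):
    """dot(a, b) / (|a| * |b|); always in [-1, 1] for non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Truncated 3-number prefixes of the 384-number embeddings quoted above.
towel_1 = [0.425, 0.385, 0.072]   # "don't panic and always carry a towel"
towel_2 = [0.434, 0.264, 0.123]   # "keep calm and never forget your towel"
cheese = [-0.167, -0.557, 0.066]  # "do you like cheese?"

print(cosine_similarity(towel_1, towel_2))  # high: vectors point the same way
print(cosine_similarity(towel_1, cheese))   # low: vectors diverge
```

With the complete 384-dimensional vectors produced by a real sentence transformer model, this is the calculation that yields the 0.869 and 0.199 scores quoted above.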

I think there are two important observations in relation to the use of meaning here (highlighted above).

First, meaning can hardly be captured where sentences with opposite logic are considered very similar. This is because the method described above (vector embedding) does not capture meaning. It captures content (words) in context (around other words).

Second, it is not possible to fully express in numbers what text expresses or represents, or its importance or value. What the vectors capture is the representation or expression of such meaning, the representation of its value and importance through the use of those specific words in the particular order in which they are expressed. The string of numbers is thus a second-degree representation of the meaning intended by the words; it is a numerical representation of the word representation, not a numerical representation of the meaning.

Unavoidably, there is plenty of scope for loss, alteration or even inversion of meaning when it goes through multiple imperfect processes of representation. This means that the more open-textured the expression in words, and the less contextualised its presentation, the more difficult it is to achieve good results.
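The inversion problem can be illustrated without any neural model at all. A crude bag-of-words comparison (a deliberately simpler representation than the embeddings discussed above, used here purely as an illustration, not as the method the OCP introduction describes) gives the opposite-logic pair a high score for the same underlying reason: it only sees shared words, not meaning.

```python
import math
from collections import Counter

def bow_cosine(s1, s2):
    """Cosine similarity between simple word-count vectors."""
    c1, c2 = Counter(s1.lower().split()), Counter(s2.lower().split())
    dot = sum(c1[w] * c2[w] for w in set(c1) | set(c2))
    norm1 = math.sqrt(sum(v * v for v in c1.values()))
    norm2 = math.sqrt(sum(v * v for v in c2.values()))
    return dot / (norm1 * norm2)

original = "don't panic and always carry a towel"
opposite = "do panic and never bring a towel"
unrelated = "do you like cheese"

print(bow_cosine(original, opposite))   # high: many shared words, inverted logic
print(bow_cosine(original, unrelated))  # low: no shared words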

It is important to bear in mind that the current techniques based on this or similar methods, such as those based on large language models, clearly fail on crucial aspects such as their factuality—which ultimately requires checking whether something with a given meaning is true or false.

This is a burgeoning area of technical research, but it seems that even the most accurate models tend to hover around 70% accuracy, save in highly contextualised, non-ambiguous settings (see eg D Quelle and A Bovet, ‘The perils and promises of fact-checking with large language models’ (2024) 7 Front. Artif. Intell., Sec. Natural Language Processing). While this is an impressive feature of these tools, it can hardly be acceptable to extrapolate that these tools can be deployed for tasks that require precision and factuality.

Procurement big data and ‘content and context’

In some senses, the application of AI to extract insights from procurement big data is well suited to the fact that, by and large, existing procurement data is very precisely contextualised and increasingly concerns structured content—that is, that most of the procurement data that is (increasingly) available is captured in structured notices and tends to have a narrowly defined and highly contextual purpose.

From that perspective, there is potential to look for implementations of advanced comparisons of ‘content in context’. But this will most likely have a hard boundary where ‘meaning’ needs to be interpreted or analysed, as AI cannot perform that task. At most, it can help gather the information, but it cannot analyse it because it cannot ‘understand’ it.

Policy implications

In my view, the above shows that the possibility of using AI to extract insights from procurement big data needs to be approached with caution. For tasks where a ‘broad brush’ approach will do, these can be helpful tools. They can help mitigate the informational deficit that procurement policy and practice tend to encounter. As put in the conference last week, these tools can help get a sense of broad trends or directions, and can thus inform policy and decision-making only in that regard and to that extent. Conversely, AI cannot be used in contexts where precision is important and where errors would affect important rights or interests.

This is important, for example, in relation to the fascination that AI ‘business insights’ seems to be triggering amongst public buyers. One of the issues that kept coming up concerns why contracting authorities cannot benefit from the same advances that are touted as being offered to (private) tenderers. The case at hand was that of identifying ‘business opportunities’.

A number of companies are using AI to support searches for contract notices to highlight potentially interesting tenders to their clients. They offer services such as ‘tender summaries’, whereby the AI creates a one-line summary on the basis of a contract notice or a tender description, and this summary can be automatically translated (eg into English). They also offer search services based on ‘capturing meaning’ from a company’s website and matching it to potentially interesting tender opportunities.

All these services, however, are at bottom a sophisticated comparison of content in context, not of meaning. And these are deployed to go from more to less information (summaries), which can reduce problems with factuality and precision except in extreme cases, and in a setting where getting it wrong has only a marginal cost (ie the company will set aside the non-interesting tender and move on). This is also an area where expectations can be managed and where results well below 100% accuracy can be interesting and have value.

The opposite does not apply from the perspective of the public buyer. For example, a summary of a tender is unlikely to have much value as, in all likelihood, the summary will simply confirm that the tender matches the advertised object of the contract (which has no value, unlike a summary suggesting that a tender matches the business activities of an economic operator). Moreover, factuality is extremely important and only 100% accuracy will do in a context where decision-making is subject to good administration guarantees.

Therefore, we need to be very careful about how we think about using AI to extract insights from procurement (big) data and, as the OCP introduction highlights, one of the most important things is to clearly define the task for which AI would be used. In my view, the set of suitable tasks is much more limited than what one could dream up if we let our collective imagination run high on hype.

Procurement centralisation, digital technologies and competition (new working paper)

Source: Wikipedia.

I have just uploaded on SSRN the new working paper ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’, which I will present at the conference on ‘Centralization and new trends’ to be held at the University of Copenhagen on 25-26 April 2023 (there is still time to register!).

The paper builds on my ongoing research on digital technologies and procurement governance, and focuses on the interaction between the strategic goals of procurement centralisation and digitalisation set by the European Commission in its 2017 public procurement strategy.

The paper identifies different ways in which current trends of procurement digitalisation and the challenges in procuring digital technologies push for further procurement centralisation. This is in particular to facilitate the extraction of insights from big data held by central purchasing bodies (CPBs); build public sector digital capabilities; and boost procurement’s regulatory gatekeeping potential. The paper then explores the competition implications of this technology-driven push for further procurement centralisation, in both ‘standard’ and digital markets.

The paper concludes by stressing the need to bring CPBs within the remit of competition law (which I had already advocated eg here), the opportunity to consider allocating CPB data management to a separate competent body under the Data Governance Act, and the related need to develop an effective system of mandatory requirements and external oversight of public sector digitalisation processes, especially to constrain CPBs’ (unbridled) digital regulatory power.

The full working paper reference is: A Sanchez-Graells, ‘Competition Implications of Procurement Digitalisation and the Procurement of Digital Technologies by Central Purchasing Bodies’ (March 2, 2023). Available at SSRN: https://ssrn.com/abstract=4376037. As always, any feedback most welcome: a.sanchez-graells@bristol.ac.uk.

Digital procurement governance: drawing a feasibility boundary

In the current context of generalised quick adoption of digital technologies across the public sector and strategic steers to accelerate the digitalisation of public procurement, decision-makers can be captured by techno hype and the ‘policy irresistibility’ that can ensue from it (as discussed in detail here, as well as here).

To moderate those pressures and guide experimentation towards the successful deployment of digital solutions, decision-makers must reassess the realistic potential of those technologies in the specific context of procurement governance. They must also consider which enabling factors must be put in place to harness the potential of the digital technologies—which primarily relate to an enabling big data architecture (see here). Combined, the data requirements and the contextualised potential of the technologies will help decision-makers draw a feasibility boundary for digital procurement governance, which should inform their decisions.

In a new draft chapter (num 7) for my book project, I draw such a technology-informed feasibility boundary for digital procurement governance. This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Revisiting the promise: A feasibility boundary for digital procurement governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4232973.

Data as the main constraint

It will hardly be surprising to stress again that high quality big data is a pre-requisite for the development and deployment of digital technologies. All digital technologies of potential adoption in procurement governance are data-dependent. Therefore, without adequate data, there is no prospect of successful adoption of the technologies. The difficulties in generating an enabling procurement data architecture are detailed here.

Moreover, new data rules only regulate the capture of data for the future. This means that it will take time for big data to accumulate. Accessing historical data would be a way of building up (big) data and speeding up the development of digital solutions. Moreover, in some contexts, such as in relation to very infrequent types of procurement, or in relation to decisions concerning previous investments and acquisitions, historical data will be particularly relevant (eg to deploy green policies seeking to extend the useful life of current assets through programmes of enhanced maintenance or refurbishment; see here). However, there are significant challenges linked to the creation of backward-looking digital databases, not only relating to the cost of digitisation of the information, but also to technical difficulties in ensuring the representativity and adequate labelling of pre-existing information.

An additional issue to consider is that a number of governance-relevant insights can only be extracted from a combination of procurement and other types of data. This can include sources of data on potential conflicts of interest (eg family relations, or financial circumstances of individuals involved in decision-making), information on corporate activities and offerings, including detailed information on products, services and means of production (eg in relation to licensing or testing schemes), or information on levels of utilisation of public contracts and satisfaction with the outcomes by those meant to benefit from their implementation (eg users of a public service, or ‘internal’ users within the public administration).

To the extent that the outside sources of information are not digitised, or not in a way that is (easily) compatible or linkable with procurement information, some data-based procurement governance solutions will remain undeliverable. Some developments in digital procurement governance will thus be determined by progress in other policy areas. While there are initiatives to promote the availability of data in those settings (eg the EU’s Data Governance Act, the Guidelines on private sector data sharing, or the Open Data Directive), the voluntariness of many of those mechanisms raises important questions on the likely availability of data required to develop digital solutions.

Overall, there is no guarantee that the data required for the development of some (advanced) digital solutions will be available. A careful analysis of data requirements must thus be a point of concentration for any decision-maker from the very early stages of considering digitalisation projects.

Revised potential of selected digital technologies

Once (or rather, if) that major data hurdle is cleared, the possibilities realistically brought by the functionality of digital technologies need to be embedded in the procurement governance context, which results in the following feasibility boundary for the adoption of those technologies.

Robotic Process Automation (RPA)

RPA can reduce the administrative costs of managing pre-existing digitised and highly structured information in the context of entirely standardised and repetitive phases of the procurement process. RPA can reduce the time invested in gathering and cross-checking information and can thus serve as a basic element of decision-making support. However, RPA cannot increase the volume and type of information being considered (other than in cases where some available information was not being taken into consideration due to eg administrative capacity constraints), and it can hardly be successfully deployed in relation to open-ended or potentially contradictory information points. RPA will also not change or improve the processes themselves (unless they are redesigned with a view to deploying RPA).

This generates a clear feasibility boundary for RPA deployment, which will generally have as its purpose the optimisation of the time available to the procurement workforce to engage in information analysis rather than information sourcing and basic checks. While this can clearly bring operational advantages, it will hardly transform procurement governance.

Machine Learning (ML)

Developing ML solutions will pose major challenges, not only in relation to the underlying data architecture (as above), but also in relation to regulatory and governance requirements specific to public procurement. Where the operational management of procurement does not diverge from the equivalent function in the (less regulated) private sector, it will be possible to see the adoption or adaptation of similar ML solutions (eg in relation to category spend management). However, where there are regulatory constraints on the conduct of procurement, the development of ML solutions will be challenging.

For example, the need to ensure the openness and technical neutrality of procurement procedures will limit the possibilities of developing recommender systems other than in pre-procured closed lists or environments based on framework agreements or dynamic purchasing systems underpinned by electronic catalogues. Similarly, the intended use of the recommender system may raise significant legal issues concerning eg the exercise of discretion, which can limit their deployment to areas of information exchange or to merely suggestion-based tasks that could hardly replace current processes and procedures.

Given the limited utility (or acceptability) of collective filtering recommender solutions (which is the predominant type in consumer-facing private sector uses, such as Netflix or Amazon), there are also constraints on the generality of content-based recommender systems for procurement applications, both at tenderer and at product/service level. This raises a further feasibility issue, as the functional need to develop a multiplicity of different recommenders not only reopens the issue of data sufficiency and adequacy, but also raises questions of (economic and technical) viability.

Recommender systems would mostly only be susceptible of feasible adoption in highly centralised procurement settings. This could create a push for further procurement centralisation that is not neutral from a governance perspective, and that can certainly generate significant competition issues of a similar nature, but perhaps a different order of magnitude, than procurement centralisation in a less digitally advanced setting. This should be carefully considered, as the knock-on effects of the implementation of some ML solutions may only emerge down the line.

Similarly, the development and deployment of chatbots is constrained by specific regulatory issues, such as the need to deploy closed domain chatbots (as opposed to open domain chatbots, ie chatbots connected to the Internet, such as virtual assistants built into smartphones), so that the information they draw from can be controlled and quality assured in line with duties of good administration and other legal requirements concerning the provision of information within tender procedures.

Chatbots are suited to types of high-volume information-based queries only. They would have limited applicability in relation to the specific characteristics of any given procurement procedure, as preparing the specific information to be used by the chatbot would be a challenge—with the added functionality of the chatbot being marginal. Chatbots could facilitate access to pre-existing and curated simple information, but their functionality would quickly hit a ceiling as the complexity of the information progressed.

Chatbots would only be able to perform at a higher level if they were plugged to a knowledge base created as an expert system. But then, again, in that case their added functionality would be marginal. Ultimately, the practical space for the development of chatbots is limited to low added value information access tasks. Again, while this can clearly bring operational advantages, it will hardly transform procurement governance.

ML could facilitate the development and deployment of ‘advanced’ automated screens, or red flags, which could identify patterns of suspicious behaviour to then be assessed against the applicable rules (eg administrative and criminal law in case of corruption, or competition law, potentially including criminal law, in case of bid rigging) or policies (eg in relation to policy requirements to comply with specific targets in relation to a broad variety of goals). The trade-off in this type of implementation is between the potential (accuracy) of the algorithmic screening and legal requirements on the explainability of decision-making (as discussed in detail here). Where the screens were not used solely for policy analysis, but acting on the red flag carried legal consequences (eg fines, or even criminal sanctions), the suitability of specific types of ML solutions (eg unsupervised learning solutions tantamount to a ‘black box’) would be doubtful, challenging, or altogether excluded. In any case, the development of ML screens capable of significantly improving over RPA-based automation of current screens is particularly dependent on the existence of adequate data, which is still proving an insurmountable hurdle in many an intended implementation (as above).
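For illustration only, a basic rule-based screen of the kind that RPA can already automate, and that ML-based screens would aim to improve upon, might look as follows. The field names and thresholds are entirely hypothetical, not drawn from any real screening system.

```python
# Illustrative only: a toy rule-based red-flag screen. Field names and
# thresholds are hypothetical, not taken from any real monitoring tool.
def red_flags(contract):
    flags = []
    if contract.get("num_bidders", 0) <= 1:
        flags.append("single bidder")
    estimate = contract.get("estimated_value")
    award = contract.get("award_value")
    if estimate and award and abs(award - estimate) / estimate < 0.01:
        flags.append("award suspiciously close to estimate")
    if contract.get("procedure") == "negotiated without prior publication":
        flags.append("non-competitive procedure")
    return flags

example = {
    "num_bidders": 1,
    "estimated_value": 100_000,
    "award_value": 99_900,
    "procedure": "open",
}
print(red_flags(example))  # two flags raised on this hypothetical record
```

An ML-based screen would replace these hand-coded thresholds with patterns learned from data, which is precisely where the accuracy/explainability trade-off and the data-adequacy hurdle discussed above come in.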

Distributed ledger technology (DLT) systems and smart contracts

Other procurement governance constraints limit the prospects of wholesale adoption of DLT (or blockchain) technologies, other than for relatively limited information management purposes. The public sector can hardly be expected to adopt DLT solutions that are not heavily permissioned, and that do not include significant safeguards to protect sensitive, commercially valuable, and other types of information that cannot be simply put in the public domain. This means that the public sector is only likely to implement highly centralised DLT solutions, with the public sector granting permissions to access and amend the relevant information. While this can still generate some (degrees of) tamper-evidence and permanence of the information management system, the net advantage is likely to be modest when compared to other types of secure information management systems. This can have an important bearing on decisions whether DLT solutions meet cost effectiveness or similar criteria of value for money controlling their piloting and deployment.

The value proposition of DLT solutions could increase if they enabled significant procurement automation through smart contracts. However, there are massive challenges in translating procurement procedures to a strict ‘if/when ... then’ programmable logic, smart contracts have limited capability that is not commensurate with the volumes and complexity of procurement information, and their development would only be justified in contexts where a given smart contract (ie specific programme) could be used in a high number of procurement procedures. This limits their scope of applicability to standardised and simple procurement exercises, which creates a functional overlap with some RPA solutions. Even in those settings, smart contracts would pose structural problems in terms of their irrevocability or automaticity. Moreover, they would be unable to generate off-chain effects, and this would not be easily sorted out even with the inclusion of internet of things (IoT) solutions or software oracles. This largely restricts smart contracts to an information exchange mechanism, which does not significantly increase the value added by DLT plus smart contract solutions for procurement governance.

Conclusion

To conclude, there are significant and difficult to solve hurdles in generating an enabling data architecture, especially for digital technologies that require multiple sources of information or data points regarding several phases of the procurement process. Moreover, the realistic potential of most technologies primarily concerns the automation of tasks not involving data analysis or the exercise of procurement discretion, but rather relatively simple information cross-checks or exchanges. Linking back to the discussion in the earlier broader chapter (see here), the analysis above shows that a feasibility boundary emerges whereby the adoption of digital technologies for procurement governance can make contributions in relation to its information intensity, but not easily in relation to its information complexity, at least not in the short to medium term and not in the absence of a significant improvement of the required enabling data architecture.

Perhaps in more direct terms, in the absence of a significant expansion in the collection and curation of data, digital technologies can allow procurement governance to do more of the same or to do it quicker, but they cannot enable better procurement driven by data insights, except in relatively narrow settings. Such settings are characterised by centralisation. Therefore, the deployment of digital technologies can be a further source of pressure towards procurement centralisation, which is not a neutral development in governance terms.

This feasibility boundary should be taken into account in considering potential use cases, as well as serve to moderate the expectations that come with the technologies and that can fuel ‘policy irresistibility’. Further, it should be stressed that those potential advantages do not come without their own additional complexities in terms of new governance risks (eg data and data systems integrity, cybersecurity, skills gaps) and requirements for their mitigation. These will be explored in the next stage of my research project.

Urgent: 'no eForms, no fun' -- getting serious about building a procurement data architecture in the EU

‘EU Member States only have about one year to make crucial decisions that will affect the procurement data architecture of the EU and the likelihood of successful adoption of digital technologies for procurement governance for years or decades to come’. Put like that, the relevance of the approaching deadline for the national implementation of new procurement eForms may grab more attention than the alternative statement that ‘in just about a year, new eForms will be mandatory for publication of procurement notices in TED’.

This latter more technical (obscure, and uninspiring?) understanding of the new eForms seems to have been dominating the approach to eForms implementation, which does not seem to have generally gained a high profile in domestic policy-making at EU Member State level despite the Publications Office’s efforts.

In this post, I reflect on the strategic importance of the eForms implementation for the digitalisation of procurement, the limited incentives for an ambitious implementation that stem from the voluntary approach of the most innovative aspects of the new eForms, and the opportunity that would be lost with a minimalistic approach to compliance with the new rules. I argue that it is urgent for EU Member States to get serious about building a procurement data architecture that facilitates the uptake of digital technologies for procurement governance across the EU, which requires an ambitious implementation of eForms beyond their minimum mandatory requirements.

eForms: some background

The EU is in the process of reforming the exchange of information about procurement procedures. This information exchange is mandated by the EU procurement rules, which regulate a variety of procurement notices with the two-fold objective of (i) fostering cross-border competition for public contracts and (ii) facilitating the oversight of procurement practices by the Member States, both in relation to the specific procedure (eg to enable access to remedies) and from a broad policy perspective (eg through the Single Market Scoreboard). In other words, this information exchange underpins the EU’s approach to procurement transparency, which mainly translates into publication of notices in the Tenders Electronic Daily (TED).

A 2019 Implementing Regulation established new standard forms for the publication of notices in the field of public procurement (eForms). The Implementing Regulation is accompanied by a detailed Implementation Handbook. The transition to eForms is about to hit a crucial milestone with the authorisation for their voluntary use from 14 November 2022, in parallel with the continued use of current forms. Following that, eForms will be mandatory and the only accepted format for publication of TED notices from 25 October 2023. There will thus have been a very long implementation period (of over four years), including an also lengthy (11-month) experimentation period about to start. This contrasts with previous revisions of the TED templates, which had given under six months’ notice (eg in 2015) or even just a 20-day implementation period (eg in 2011). This extended implementation period is reflective of the fact that the transition to eForms is not merely a matter of replacing one set of forms with another.

Indeed, eForms are not solely the new templates for the collection of information to be published in TED. eForms represent the EU’s open standard for publishing public procurement data — or, in other words, the ‘EU OCDS’ (which goes well beyond the OCDS mapping of the current TED forms). The importance of the implementation of a new data standard has been highlighted at strategic level, as this is the cornerstone of the EU’s efforts to improve the availability and quality of procurement data, which remain suboptimal (to say the least) despite continued efforts to improve the quality and (re)usability of TED data.

In that regard, the 2020 European strategy for data emphasised that ‘Public procurement data are essential to improve transparency and accountability of public spending, fighting corruption and improving spending quality. Public procurement data is spread over several systems in the Member States, made available in different formats and is not easily possible to use for policy purposes in real-time. In many cases, the data quality needs to be improved.’ The European Commission now stresses how ‘eForms are at the core of the digital transformation of public procurement in the EU. Through the use of a common standard and terminology, they can significantly improve the quality and analysis of data’ (emphasis added).

It should thus be clear that the eForms implementation is not only about low-level form-filling, but also (or primarily) about building a procurement data architecture that facilitates the uptake of digital technologies for procurement governance across the EU. The implementation of eForms and the related data standard therefore seeks to achieve two goals: first, to ensure the data quality (eg standardisation, machine-readability) required to facilitate the automated treatment of procurement notices for the purposes of the publication mandated by EU law (ie their primary use); and, second, to build a data architecture that can facilitate the accumulation of big data, so that advanced data analytics can be deployed by re-users of procurement data. This second(ary) goal is particularly relevant to our discussion and requires some unpacking.

The importance of data for the deployment of digital technologies

It is generally accepted that quality (big) data is the primary requirement for the deployment of digital technologies to extract data-driven insights, as well as to automate menial back-office tasks. In a detailed analysis of these technologies, I stress the relevance of procurement data across the technological solutions that could be deployed to improve procurement governance. In short, the outcome of robotic process automation (RPA) can only be as good as its sources of information, and adequate machine learning (ML) solutions can only be trained on high-quality big data—which thus conditions the possibility of developing recommender systems, chatbots, or algorithmic screens for procurement monitoring and oversight. Distributed Ledger Technology (DLT) systems (aka blockchain) can manage data, but cannot verify its content, accuracy, or reliability. Internet of Things (IoT) applications and software oracles can automatically capture data, which can alleviate some of the difficulties in generating an adequate data infrastructure. But this only applies to observation of the ‘real world’ or to digitally available information, the quality of which raises the same issues as other sources of data. In short, all digital technologies are data-centric or, more precisely, data-dependent.

Given the crucial relevance of data across digital technologies, it is hard to overemphasise how any shortcomings in the enabling data architecture curtail the likelihood of successful adoption of digital technologies for procurement governance. With inadequate data, it may simply be impossible to develop digital solutions at all. And the development and adoption of digital solutions built on poor or inadequate data can generate further problems—eg skewing decision-making on the basis of inadequately derived ‘data insights’. Ultimately, then, ensuring that adequate data is available to develop digital governance solutions is a challenging but unavoidable requirement in the process of procurement digitalisation. Success, or lack of it, in the creation of an enabling data architecture will determine the viability of deploying digital technologies more generally. From this perspective, the implementation of eForms gains clear strategic importance.

eForms Implementation: a flexible model

Implementing eForms is not an easy task. The migration towards eForms requires a complete redesign of information exchange mechanisms. eForms are designed around the Universal Business Language (UBL) and involve a much more structured information schema than the current TED forms, compatible with the EU’s eProcurement Ontology. eForms are also meant to collect a larger amount of information than the current TED forms, especially in relation to sub-units within a tender, such as lots, or in relation to framework agreements. eForms are meant to be flexible and regularly revised, in particular through the addition of new fields to facilitate data capture in relation to specific EU-mandated requirements in procurement, such as the clean vehicles rules (with some changes already coming up, likely in November 2022).
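To make the notion of a structured, machine-readable notice concrete, here is a minimal sketch of building and then re-reading a UBL-style XML fragment. The element names (‘Notice’, ‘Lot’, ‘EstimatedValue’) and values are illustrative simplifications, not the actual eForms/UBL vocabulary:

```python
# Hypothetical, simplified sketch of a UBL-style structured notice.
# Element names are illustrative only -- not the real eForms schema.
import xml.etree.ElementTree as ET

notice = ET.Element("Notice", attrib={"schemaVersion": "illustrative-1.0"})
lot = ET.SubElement(notice, "Lot", attrib={"id": "LOT-0001"})
ET.SubElement(lot, "Title").text = "Supply of electric vehicles"
value = ET.SubElement(lot, "EstimatedValue", attrib={"currency": "EUR"})
value.text = "250000"

# Because every field sits at a predictable path, a re-user can extract
# data without free-text parsing -- the core gain over unstructured forms.
xml_string = ET.tostring(notice, encoding="unicode")
parsed = ET.fromstring(xml_string)
amount = float(parsed.find("./Lot/EstimatedValue").text)
print(amount)  # 250000.0
```

The point of the sketch is simply that a schema with typed, addressable fields (rather than free text) is what makes downstream automated treatment and aggregation feasible at all.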

From an informational point of view, the main constraint that remains despite the adoption of eForms is that their mandatory content is determined by existing obligations to report and publish tender-specific information under the current EU procurement rules, as well as by broader reporting requirements under international and EU law (eg the WTO GPA). This mandatory content is thus rather limited. Ultimately, eForms concentrate on disseminating details of contract opportunities and capturing different aspects of decision-making by contracting authorities. Given the process-orientedness and transactional focus of the procurement rules, most of the information to be mandatorily captured by the eForms concerns the scope and design of the tender procedure, some aspects of the award and formal implementation of the contract, and some minimal data points concerning its material outcome—primarily limited to the winning tender. As the Director-General of the Publications Office put it at an eForms workshop yesterday, the new eForms will provide information on ‘who buys what, from whom and for what price’. While some of that information (especially in relation to the winning tender) will be reflective of broader market conditions, and while the accumulation of information across procurement procedures can progressively generate a broader view of (some of) the relevant markets, it is worth stressing that eForms are not designed as a tool of market intelligence.

Indeed, eForms do not capture the entirety of the information generated by a procurement process and, as mentioned, their mandatory content is rather limited. eForms do include several voluntary or optional fields, and they could be adapted for some voluntary uses, such as the detection of collusion in procurement, or the capture of the beneficial ownership of tenderers and subcontractors. Extensive use of voluntary fields and the development of additional fields and uses could contribute to generating data that would enable the deployment of digital technologies for the purposes of eg market intelligence, integrity checks, or other sorts of (policy-related) analysis. For example, there are voluntary fields in relation to green, social or innovation procurement, which could serve as the basis for data-driven insights into how to maximise the effects of such policy interventions. There are also voluntary fields concerning procurement challenges and disputes, which could facilitate monitoring of eg areas requiring guidance or training. However, while the eForms are flexible, include voluntary fields, and rest on a schema that facilitates the development of additional fields, it is unclear that adequate incentives exist for adoption beyond their mandatory minimum content.

Implementation in two tiers

The fact that eForms are in part mandatory and in part voluntary will most likely result in two separate tiers of eForms implementation across the EU. Tier 1 will solely concern the collection and exchange of information mandated by EU law, that is, the minimum mandatory eForm content. Tier 2 will concern the optional collection and exchange of a much larger volume of information concerning eg the entirety of tenders received, as well as qualitative information on eg specific policy goals embedded in a tender process. Of course, in the absence of coordination, a (large) degree of variation within Tier 2 can be expected. Tier 2 is potentially very important for (digital) procurement governance, but there is no guarantee that Member States will decide to implement eForms covering it.

One of the major obstacles to the broad adoption of a procurement data model so far, at least in the European Union, relates to the slow uptake of e-procurement (as discussed eg here). Without an underlying highly automated e-procurement system, the generation and capture of procurement data is a major challenge, as it is a labour-intensive process prone to input error. The entry into force of the eForms rules could serve as a further push for the completion of the transition to e-procurement—at least in relation to procurement covered by EU law (as below-threshold procurement is only a voluntary potential use of eForms). However, it is also possible that low e-procurement uptake and generally unsophisticated approaches to e-procurement (eg reduced automation) will limit the future functionality of eForms, with Member States that have so far lagged behind restricting the use of eForms to tier 1. Non-life-cycle (automated) e-procurement systems may require manual inputs into the new eForms (or the databases from which they draw information), which implies that there is a direct cost to the implementation of each additional (voluntary) data field. Contracting authorities may not perceive the (potential) advantages of incurring those costs, or may simply be constrained by their available budget. A collective action problem arises here, as the cost of adding more data to the eForms is to be shouldered by each public buyer, while the ensuing big data would potentially benefit everyone (especially as it will be published—although there are also possibilities to capture but not publish information that should be explored, at least to prevent excessive market transparency; but let’s park that issue for now), and perhaps in particular data re-users offering added-value services for pay.

In direct relation to this, the (dis)incentives problem is compounded by the fact that, in many Member States, the operational adaptation to eForms does not directly concern public sector entities, but rather their service providers. e-procurement service providers compete for the provision of large-volume, entirely standardised platform services, in markets characterised by small operational margins. This creates incentives for a minimal adaptation of current e-sending systems and disincentives for the inclusion of added-value (data) services unlikely to be used by public buyers. Some (or most) optional aspects of the eForms implementation will thus remain unused due to this market structure and its dynamics, which do not clearly incentivise a race to the top (unless there is a clear demand pull for it).

To add some nuance, it is also possible that the adoption of eForms will be uneven within a given jurisdiction where the voluntary character of parts of the eForms is kept (rather than made mandatory across the board through domestic legislation), with advanced procurement entities (eg central purchasing bodies, or large buyers) adopting tier 2 eForms, and (most) other public buyers limiting themselves to tier 1.

Ensuing data fragmentation

While this variety of approaches across the EU and within each Member State would not pose legal challenges, it would have a major effect on the utility of the eForms-generated data for the purposes of eg developing ML solutions, as the data would be fragmented, hardly representative of important aspects of procurement (markets), and difficult to generalise from. The only consistent data would be that covered by tier 1 (ie mandatory and standardised implementation), which would limit the potential use cases for the deployment of digital technologies—with some possibly limited to the procurement remit of the specific institutions with tier 2 implementations.
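The fragmentation problem can be illustrated with a short sketch that measures field coverage across a handful of notices before any analytical use. The field names here are hypothetical stand-ins for mandatory (tier 1) and voluntary (tier 2) eForms content, not real schema fields:

```python
# Illustrative sketch: auditing optional-field coverage across notices
# before any ML use. Field names are hypothetical, not real eForms fields.
from collections import Counter

notices = [
    {"buyer": "A", "value": 100_000, "green_criteria": "EU GPP"},  # tier 2-style
    {"buyer": "B", "value": 250_000},                              # tier 1 only
    {"buyer": "C", "value": 80_000},                               # tier 1 only
]

# Count in how many notices each field actually appears.
coverage = Counter()
for n in notices:
    coverage.update(n.keys())

total = len(notices)
for field, count in sorted(coverage.items()):
    print(f"{field}: {count}/{total} notices ({count / total:.0%})")

# A field present in only a third of notices (here 'green_criteria') is a
# poor basis for training or aggregate policy analysis: models must either
# drop the field, drop the records, or learn from a biased subsample.
```

This kind of coverage audit is a routine first step before modelling; with tier 2 adoption left voluntary, it is exactly the step at which most policy-relevant fields would be found too sparse to use.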

Relatedly, it should be stressed that, despite the effort to harmonise the underlying data architecture and link it to the Procurement Ontology, the Implementation Handbook makes clear that ‘eForms are not an “off the shelf” product that can be implemented only by IT developers. Instead, before developers start working, procurement policy decision-makers have to make a wide range of policy decisions on how eForms should be implemented’ in the different Member States.

This poses an additional challenge from the perspective of data quality (and consistency), as there are many fields to be tailored in the eForms implementation process, which can result in significant discrepancies in the underlying understanding or methodology used to determine them, in addition to the risk of further divergence stemming from the domestic interpretation of very similar requirements. This simply extends to the digital data world the current situation, eg in relation to diverging understandings of what is ‘recyclable’ or what is ‘social value’ and how to measure them. Whenever open-ended concepts are used, the data may be a poor source for comparative and aggregate analysis. Where there are other sources of standardisation or methodology, this issue may be minimised—eg in relation to the green public procurement criteria developed in the EU, if they are properly used. However, where there are no outside or additional sources of harmonisation, there is scope for quite a few difficult issues in trying to develop digital solutions on top of eForms data, except in relation to quantitative issues or information structured in clearly defined categories—which will mainly link back to the design of the procurement.

An opportunity about to be lost?

Overall, while the implementation of eForms could in theory build a big data architecture and facilitate the development of ML solutions, there are many challenges ahead and the generalised adoption of tier 2 eForms implementations seems unlikely, unless Member States make a positive decision in the process of national adoption. The importance of an ambitious tier 2 implementation of eForms should be assessed in light of its downstream importance for the potential deployment of digital technologies to extract data-driven insights and to automate parts of the procurement process. A minimalistic implementation of eForms would significantly constrain future possibilities of procurement digitalisation, primarily in the specific jurisdiction, but also with spillover effects across the EU.

Therefore, a minimalistic eForms implementation approach would perpetuate (most of) the data deficit that prevents effective procurement digitalisation. It would be a short-sighted saving. Moreover, the effects of a ‘middle of the road’ approach should also be considered. A minimalistic implementation with a view to a more ambitious extension down the line could have short-term gains, but would delay the possibility of deploying digital technologies, because the gains resulting from the data architecture are not immediate. In most cases, it will be necessary to wait for the accumulation of sufficiently big data. In some cases of infrequent procurement, missing data points will generate further time lags in the extraction of valuable insights. It is no exaggeration to say that every data point not captured carries an opportunity cost.

If Member States are serious about the digitalisation of public procurement, they will make the most of the coming year to develop tier 2 eForms implementations in their jurisdiction. They should also keep an eye on cross-border coordination. And the European Commission, both DG GROW and the Publications Office, would do well to put as much pressure on Member States as possible.

Digital technologies, hype, and public sector capability

© Martin Brandt / Flickr.

By Albert Sanchez-Graells (@How2CrackANut) and Michael Lewis (@OpsProf).*

The public sector’s reaction to digital technologies and the associated regulatory and governance challenges is difficult to map, but there are some general trends that seem worrisome. In this blog post, we reflect on the problematic compound effects of technology hype cycles and diminished public sector digital technology capability, paying particular attention to their impact on public procurement.

Digital technologies, smoke, and mirrors

There is a generalised over-optimism about the potential of digital technologies, as well as about their likely impact on economic growth and international competitiveness. There is also a rush to ‘look digitally advanced’, eg through the formulation of ‘AI strategies’ that are unlikely to generate significant practical impacts (more on that below). However, there seems to be a big (and growing?) gap between what countries report (or pretend) to be doing (eg in reports to the OECD AI observatory, or in relation to any other AI readiness ranking) and what they are actually doing. A relatively recent analysis showed that European countries (including the UK) underperform particularly in relation to strategic aspects that require detailed work (see graph). In other words, very few countries are ready to move past signalling a willingness to jump onto the digital tech bandwagon.

Some of that over-optimism stems from limited public sector capability to understand the technologies themselves (as well as their implications), which leads to naïve or captured approaches to policymaking (on capture, see the eye-watering account emerging from the #Uberfiles). Given the close alignment (or political meddling?) of policymakers with eg research funding programmes, including but not limited to academic institutions, naïve or captured approaches also affect other areas of ‘support’ for the development of digital technologies. This trickles down to procurement, as the ‘purchasing’ of digital technologies with public money is seen as a (not very subtle) way of subsidising their development (nb. there are many proponents of that approach, such as Mazzucato, as discussed here). However, this can also generate further space for capture, as the same lack of capability that affects high(er)-level policymaking also affects funding organisations and ‘street level’ procurement teams. The result is a situation where procurement best practices, such as market engagement, lead to the ‘art of the possible’ being determined by private industry. There is rarely co-creation of solutions, but too often a capture of procurement expenditure by entrepreneurs.

Limited capability, difficult assessments, and dependency risk

Perhaps the universalist techno-utopian framing (cost savings and efficiency and economic growth and better health and new service offerings, etc.) means it is increasingly hard to distinguish the specific merits of different digitalisation options – and the commercial interests that actively hype them. It is also increasingly difficult to carry out effective impact assessments where the (overstressed) benefits are relatively narrow and short-termist, while the downsides of technological adoption are diffuse and likely to only emerge after a significant time lag. Ironically, this limited ability to diagnose ‘relative’ risks and rewards is further exacerbated by the diminishing technical capability of the state: a negative mirror to Amazon’s flywheel model for amplifying capability. Indeed, as stressed by Bharosa (2022): “The perceptions of benefits and risks can be blurred by the information asymmetry between the public agencies and GovTech providers. In the case of GovTech solutions using new technologies like AI, Blockchain and IoT, the principal-agent problem can surface”.

As Colington (2021) points out, despite the “innumerable papers in organisation and management studies” on digitalisation, there is much less understanding of how interests of the digital economy might “reconfigure” public sector capacity. In studying Denmark’s policy of public sector digitalisation – which had the explicit intent of stimulating nascent digital technology industries – she observes the loss of the very capabilities necessary “for welfare states to develop competences for adapting and learning”. In the UK, where it might be argued there have been attempts, such as the Government Digital Services (GDS) and NHS Digital, to cultivate some digital skills ‘in-house’, the enduring legacy has been more limited in the face of endless demands for ‘cost saving’. Kattel and Takala (2021) for example studied GDS and noted that, despite early successes, they faced the challenge of continual (re)legitimization and squeezed investment; especially given the persistent cross-subsidised ‘land grab’ of platforms, like Amazon and Google, that offer ‘lower cost and higher quality’ services to governments. The early evidence emerging from the pilot algorithmic transparency standard seems to confirm this trend of (over)reliance on external providers, including Big Tech providers such as Microsoft (see here).

This is reflective of Milward and Provan’s (2003) ‘hollow state’ metaphor, used to describe "the nature of the devolution of power and decentralization of services from central government to subnational government and, by extension, to third parties – nonprofit agencies and private firms – who increasingly manage programs in the name of the state.” Two decades after its formulation, the metaphor is all the more applicable, as the hollowing out of the State is arguably a few orders of magnitude larger due to the techno-centricity of reforms in the race towards a new model of digital public governance. It seems as if the role of the State is currently understood as being limited to that of enabler (and funder) of public governance reforms that are not solely implemented, but driven, by third parties—primarily highly concentrated digital tech giants; so that “some GovTech providers can become the next Big Tech providers that could further exploit the limited technical knowledge available at public agencies [and] this dependency risk can become even more significant once modern GovTech solutions replace older government components” (Bharosa, 2022). This is a worrying trend: once dominance is established, the expected anticompetitive effects of any market can be further multiplied and propagated in a setting of low public sector capability that fuels risk aversion, where the adage “Nobody ever gets fired for buying IBM” has been around since the 70s with limited variation (as to the tech platform it is ‘safe to engage’).

Ultimately, the more the State takes a back seat, the more its ability to steer developments fades away. The rise of a GovTech industry seeking to support governments in their digital transformation generates “concerns that GovTech solutions are a Trojan horse, exploiting the lack of technical knowledge at public agencies and shifting decision-making power from public agencies to market parties, thereby undermining digital sovereignty and public values” (Bharosa, 2022). Therefore, continuing to simply allow experimentation in the GovTech market without a clear strategy on how to rein the industry in—and, relatedly, how to build the public sector capacity needed to do so as a precondition—is a strategy with (exponentially) increasing reversal costs and an unclear tipping point past which meaningful change may simply not be possible.

Public sector and hype cycle

Being more pragmatic, the widely cited, if impressionistic, “hype cycle model” developed by Gartner Inc. provides additional insights. The model presents a generalized expectations path that new technologies follow over time, which suggests that new industrial technologies progress through different stages up to a peak that is followed by disappointment and, later, a recovery of expectations.

Although intended to describe aggregate technology-level dynamics, it can be useful to consider the hype cycle for public digital technologies. In the early phases of the curve, vendors and potential users are actively looking for ways to create value from new technology and will claim endless potential use cases. If these are subsequently piloted or demonstrated – even if ‘free’ – they are exciting and visible, and vendors are keen to share them, all of which contributes to creating hype. Limited public sector capacity can also underpin excitement for use cases that are so far removed from their likely practical implementation, or so heavily curated, that they do not provide an accurate representation of how the technology would operate at production phase in the generally messy settings of public sector activity and public sector delivery. In phases such as the peak of inflated expectations, only organisations with sufficient digital technology and commercial capabilities can see through sophisticated marketing and sales efforts to separate the hype from the true potential of immature technologies. The emperor is likely to be naked, but who’s to say?

Moreover, as mentioned above, international organisations one step (upwards) removed from the State create additional fuel for the hype through mapping exercises and rankings, which generate a vicious circle of “public sector FOMO”, as entrepreneurial bureaucrats and politicians are unlikely to want to be listed at the bottom of the table and can thus be particularly receptive to hyped pitches. This can create incentives to support *almost any* sort of tech pilot or implementation just to be seen to do something ‘innovative’, or to rush through high-risk implementations seeking to ‘cash in’ on the political and other rents they can (be spun to) generate.

However, as emerging evidence shows (AI Watch, 2022), there is a big attrition rate between announced and piloted adoptions, and those that are ultimately embedded in the functioning of the public sector in a value-adding manner (ie those that reach the plateau of productivity stage in the cycle). Crucially, the AI literacy and skills of the staff involved in the use of the technology post-pilot are among the critical challenges to the AI implementation phase in the EU public sector (AI Watch, 2021). Thus, early moves in the hype curve are unlikely to translate into sustainable and expectations-matching deployments in the absence of a significant boost to public sector digital technology capabilities. Without committed long-term investment in that capability, piloting and experimentation will rarely translate into anything but expensive pet projects (and lucrative contracts).

Locking the hype in: IP, data, and acquisitions markets

Relatedly, the lack of public sector capacity is a foundation for eg policy recommendations seeking to avoid the public buyer acquiring (and having to manage) IP rights over the digital technologies it funds through procurement of innovation (see eg the European Commission’s policy approach: “There is also a need to improve the conditions for companies to protect and use IP in public procurement with a view to stimulating innovation and boosting the economy. Member States should consider leaving IP ownership to the contractors where appropriate, unless there are overriding public interests at stake or incompatible open licensing strategies in place” at 10).

This is clear as mud (eg what does overriding public interest mean here?) but fails to establish an adequate balance between public funding and public access to the technology, as well as generating (unavoidable?) risks of lock-in and exacerbating issues of lack of capacity in the medium and long-term. Not only in terms of re-procuring the technology (see related discussion here), but also in terms of the broader impact this can have if the technology is propagated to the private sector as a result of or in relation to public sector adoption.

Linking this recommendation to the hype curve, such an approach to relying on proprietary tech with all rights reserved to the third-party developer means that first mover advantages secured by private firms at the early stages of the emergence of a new technology are likely to be very profitable in the long term. This creates further incentives for hype and for investment in being the first to capture decision-makers, which results in an overexposure of policymakers and politicians to tech entrepreneurs pushing hard for (too early) adoption of technologies.

The exact same dynamic emerges in relation to access to data held by public sector entities without which GovTech (and other types of) innovation cannot take place. The value of data is still to be properly understood, as are the mechanisms that can ensure that the public sector obtains and retains the value that data uses can generate. Schemes to eg obtain value options through shares in companies seeking to monetise patient data are not bullet-proof, as some NHS Trusts recently found out (see here, and here paywalled). Contractual regulation of data access, data ownership and data retention rights and obligations pose a significant challenge to institutions with limited digital technology capabilities and can compound IP-related lock-in problems.

A final complication is that the market for acquisitions of GovTech and other digital technology start-ups and scale-ups is very active and unpredictable. Even with standard levels of due diligence, public sector institutions that had carefully sought to foster a diverse innovation ecosystem and to avoid contracting (solely) with big players may end up in their hands anyway, once a selected provider leverages its public sector success to deliver an ‘exit strategy’ for its founders and other (venture capital) investors. Change of control clauses clearly have a role to play, but the outside alternatives for public sector institutions engulfed in this process of market consolidation can be limited and difficult to assess, and particularly challenging for organisations with limited digital technology and associated commercial capabilities.

Procurement at the sharp end

Going back to the ongoing difficulty (and unwillingness?) in regulating some digital technologies, there is a (dominant) general narrative that imposes a ‘balanced’ approach between ensuring adequate safeguards and not stifling innovation (with some countries clearly erring much more on the side of caution, such as the UK, than others, such as the EU with the proposed EU AI Act, although the scope of application of its regulatory requirements is narrower than it may seem). This increasingly means that the tall order task of imposing regulatory constraints on the digital technologies and the private sector companies that develop (and own them) is passed on to procurement teams, as the procurement function is seen as a useful regulatory mechanism (see eg Select Committee on Public Standards, Ada Lovelace Institute, Coglianese and Lampmann (2021), Ben Dor and Coglianese (2022), etc but also the approach favoured by the European Commission through the standard clauses for the procurement of AI).

However, this approach completely ignores issues of (lack of) readiness and capability which indicate that the procurement function is being set up to fail in this gatekeeping role (in the absence of massive investment in upskilling). This is not only because it lacks the (technical) ability to figure out the relevant checks and balances, and because the levels of required due diligence far exceed standard practices in more mature markets and lower-risk procurements, but also because the procurement function can be at the sharp end of the hype cycle and (pragmatically) unable to stop technological deployments that are either wasteful or problematic from a governance perspective, as public buyers are rarely in a position of independent decision-making that would enable them to do so. Institutional dynamics can be difficult to navigate even with good insights into problematic decisions, and can be intractable in a context of low capability to understand potential problems and push back against naïve or captured decisions to procure specific technologies and/or from specific providers.

Final thoughts

So, as a generalisation, the lack of public sector capability seems to be skewing high-level policy and limiting the development of effective plans to roll it out, filtering through to incentive systems that will have major repercussions for which technologies are developed and procured, with risks of lock-in and centralisation of power (away from the public sector), as well as generating false comfort in the ability of the public procurement function to provide an effective route to tech regulation. The answer to these problems is at once evident, simple, and politically intractable in view of the pervasive hype around new technologies: more investment in capacity building across the public sector.

This regulatory answer is further complicated by the difficulty of implementing it in an employment market where the public sector, its reward schemes and its social esteem are dwarfed by the high salaries, flexible working conditions and allure of the (Big) Tech sector and the GovTech start-up scene. Some strategies aimed at alleviating the generalised lack of public sector capability, eg through a GovTech platform at EU level, can generate further risks of reduction of (in-house) public sector capability at State (and regional, local) level, as well as bottlenecks in the access of tech to the public sector that could magnify issues of market dominance, lock-in and over-reliance on GovTech providers (as discussed in Hoekstra et al, 2022).

Ultimately, it is imperative to build more digital technology capability in the public sector, and to recognise that there are no quick (or cheap) fixes to do so. Otherwise, much like with climate change, despite the existence of clear interventions that can mitigate the problem, the hollowing out of the State and the increasing overdependency on Big Tech providers will be a self-fulfilling prophecy for which governments will have no one to blame but themselves.

 ___________________________________

* We are grateful to Rob Knott (@Procure4Health) for comments on an earlier draft. Any remaining errors and all opinions are solely ours.

Flexibility, discretion and corruption in procurement: an unavoidable trade-off undermining digital oversight?

Magic; Stage Illusions and Scientific Diversions, Including Trick Photography (1897), written by Albert Allis Hopkins and Henry Ridgely Evan.

As the dust settles in the process of reform of UK public procurement rules, and while we await draft legislation (some time this year?), there is now a chance to further reflect on the likely effects of the deregulatory, flexibility- and discretion-based approach to be embedded in the new UK procurement system.

An issue that may not have been sufficiently highlighted, but which should be of concern, is the way in which increased flexibility and discretion will unavoidably carry higher corruption risks and reduce the effectiveness of potential anti-corruption tools, in particular those based on the implementation of digital technologies for procurement oversight [see A Sanchez-Graells, ‘Procurement Corruption and Artificial Intelligence: Between the Potential of Enabling Data Architectures and the Constraints of Due Process Requirements’ in S Williams-Elegbe & J Tillipman (eds), Routledge Handbook of Public Procurement Corruption (Routledge, forthcoming)].

This is an inescapable issue, for there is an unavoidable trade-off between flexibility, discretion and corruption (in procurement, and more generally). And this does not bode well for the future of UK procurement integrity if the experience during the pandemic is a good predictor.

The trade-off between flexibility, discretion and corruption underpins many features of procurement regulation, such as the traditional distrust of procedures involving negotiations or direct awards, which may however stifle procurement innovation and limit value for money [see eg F Decarolis et al, ‘Rules, Discretion, and Corruption in Procurement: Evidence from Italian Government Contracting’ (2021) NBER Working Paper 28209].

The trade-off also underpins many of the anti-corruption tools (eg red flags) that use discretionary elements in procurement practice as a potential proxy for corruption risk [see eg M Fazekas, L Cingolani and B Tóth, ‘Innovations in Objectively Measuring Corruption in Public Procurement’ in H K Anheier, M Haber and M A Kayser (eds) Governance Indicators: Approaches, Progress, Promise (OUP 2018) 154-180; or M Fazekas, S Nishchal and T Søreide, ‘Public procurement under and after emergencies’ in O Bandiera, E Bosio and G Spagnolo (eds), Procurement in Focus – Rules, Discretion, and Emergencies (CEPR Press 2022) 33-42].

Moreover, economists and political scientists have clearly stressed that one way of trying to strike an adequate balance between the exercise of discretion and corruption risks, without disproportionately deterring the exercise of judgement or fostering laziness or incompetence in procurement administration, is to increase oversight and monitoring, especially through auditing mechanisms based on open data (see eg Procurement in a crisis: how to mitigate the risk of corruption, collusion, abuse and incompetence).

The difficulty here is that the trade-off is inescapable and the more dimensions on which there is flexibility and discretion in a procurement system, the more difficult it will be to establish a ‘normalcy benchmark’ or ‘integrity benchmark’ from which deviations can trigger close inspection. Taking into account that there is a clear trend towards seeking to automate integrity checks on the basis of big data and machine learning techniques, this is a particularly crucial issue. In my view, there are two main sources of difficulties and limitations.

First, that discretion is impossible to code for [see S Bratus and A Shubina, Computerization, Discretion, Freedom (2015)]. This both means that discretionary decisions cannot be automated, and that it is impossible to embed compliance mechanisms (eg through the definition of clear pathways based on business process modelling within an e-procurement system, or even in blockchain and smart contract approaches: Neural blockchain technology for a new anticorruption token: towards a novel governance model) where there is the possibility of a ‘discretion override’.

This difficulty matters more the more points along the procurement process at which discretion can be exercised (eg choice of procedure; design of the procedure; award criteria, including a weakened link to the subject matter of the contract and the inclusion of non-(easily-)measurable criteria, eg on social value; displacement of the advantage analysis beyond the sphere of influence of the contracting authority; etc).

Second, the more deviations there are between the new rulebook and the old one, the lower the value of existing (big) data (if any is available or usable) and of any existing indicators of corruption risk, as the regulatory confines of the exercise of discretion will not only have shifted, but may even lead to a displacement of the corruption-related exercise of discretion. For example, focusing on the choice of procedure, data on the extent to which direct awards could serve as a proxy for corruption may be useless in a new context where that type of corruption can morph into the ‘custom-made’ design of a competitive flexible procedure—which will be much more difficult to spot, analyse and prove.

Moreover, given the inherent fluidity of that procedure (even if there is to be a template, which is however not meant to be uncritically implemented), it will take time to build up enough data to reliably single out the specific characteristics of a procedure (eg carrying out negotiations with different bidders in different ways, such as sequentially or in parallel, with or without time limits; the inclusion of any specific award criterion; etc) that are indicative of corruption risk. And that intelligence may not be forthcoming if, as feared, the level of complexity that comes with the exercise of discretion deters most contracting authorities from exercising it, which would mean that only a small number of complex procedures would be carried out every year, potentially hindering the accumulation of data capable of supporting big data analysis (or even meaningful econometric treatment).
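To make the benchmarking idea concrete, a ‘normalcy benchmark’ screen of the kind described above can be sketched in a few lines. Everything here is illustrative: the authority names, the direct-award shares and the two-standard-deviation threshold are invented for the example, not drawn from any real dataset.

```python
from statistics import mean, stdev

# Hypothetical per-authority shares of contracts awarded via direct award
# (all names and figures are invented for illustration).
direct_award_share = {
    "Authority A": 0.05,
    "Authority B": 0.07,
    "Authority C": 0.06,
    "Authority D": 0.31,  # a clear outlier against the rest
    "Authority E": 0.04,
    "Authority F": 0.05,
    "Authority G": 0.06,
    "Authority H": 0.05,
}

def flag_deviations(shares, z_threshold=2.0):
    """Flag authorities whose share deviates from the 'normalcy benchmark'
    (here, simply the sample mean) by more than z_threshold standard
    deviations. Real screens would use more robust statistics."""
    values = list(shares.values())
    mu, sigma = mean(values), stdev(values)
    return [name for name, v in shares.items()
            if abs(v - mu) / sigma > z_threshold]

print(flag_deviations(direct_award_share))  # -> ['Authority D']
```

The point of the sketch is precisely the fragility discussed above: the benchmark is only as good as the historical data behind it, and if the rulebook changes, yesterday’s ‘normal’ no longer anchors the screen.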

Overall, then, the issue I would highlight again is that there is an unavoidable trade-off between increasing flexibility and discretion, and corruption risk. And this trade-off will jeopardise automation and data-based approaches to procurement monitoring and oversight. This will be particularly relevant in the context of the design and implementation of the tools at the disposal of the proposed Procurement Review Unit (PRU). The Response to the public consultation on the Transforming Public Procurement green paper emphasised that

‘… the PRU’s main focus will be on addressing systemic or institutional breaches of the procurement regulations (i.e. breaches common across contracting authorities or regularly being made by a particular contracting authority). To deliver this service, it will primarily act on the basis of referrals from other government departments or data available from the new digital platform and will have the power to make formal recommendations aimed at addressing these unlawful breaches’ (para [48]).

Given the issues raised above, in particular the difficulty (or impossibility) of automating the analysis of such data, as well as the limited indicative value and/or reliability of red flags in a context of heightened flexibility and discretion, quite how effective this will be is difficult to tell.

Moreover, given the floating uncertainty on what will be identified as suspicious of corruption (or legal infringement), it is also possible that the PRU (initially) operates on the basis of arbitrarily determined indicators or thresholds (much like the European Commission has traditionally set arbitrary thresholds to consider procurement practices problematic under the Single Market Scorecard; see eg here). This could have a signalling effect that could influence decision-making at contracting authority level (eg to avoid triggering those red flags) in a way that pre-empts, limits or distorts the exercise of discretion—or that further displaces corruption-related exercise of discretion to areas not caught by the arbitrary indicators or thresholds, thus making it more difficult to detect.

Therefore, these issues can be particularly relevant in establishing both whether the balance between discretion and corruption risk is right under the new rulebook’s regulatory architecture and approach, and whether there are non-statutory determinants of the (lack of) exercise of discretion, other than the complexity and the potential litigation and challenge risk already stressed in earlier analysis and reflections on the green paper.

Another ‘interesting’ area of development of UK procurement law and practice post-Brexit, when/if it materialises.

New paper on procurement corruption and AI

I have just uploaded a new paper on SSRN: ‘Procurement corruption and artificial intelligence: between the potential of enabling data architectures and the constraints of due process requirements’, to be published in S. Williams-Elegbe & J. Tillipman (eds), Routledge Handbook of Public Procurement Corruption (forthcoming). In this paper, I reflect on the potential improvements that using AI for anti-corruption purposes can practically have in the current (and foreseeable) context of AI development, (lack of) procurement and other data, and existing due process constraints on the automation or AI-support of corruption-related procurement decision-making (such as eg debarment/exclusion or the imposition of fines). The abstract is as follows:

This contribution argues that the expectations around the deployment of AI as an anti-corruption tool in procurement need to be tamed. It explores how the potential applications of AI replicate anti-corruption interventions by human officials and, as such, can only provide incremental improvements but not a significant transformation of anti-corruption oversight and enforcement architectures. It also stresses the constraints resulting from existing procurement data and the difficulties in creating better, unbiased datasets and algorithms in the future, which would also generate their own corruption risks. The contribution complements this technology-centred analysis with a critical assessment of the legal constraints based on due process rights applicable even when AI supports continued human intervention. This in turn requires a close consideration of the AI-human interaction, as well as a continuation of the controls applicable to human decision-making in corruption-prone activities. The contribution concludes, first, that prioritising improvements in procurement data capture, curation and interconnection is a necessary but insufficient step; and second, that investments in AI-based anti-corruption tools cannot substitute, but only complement, current anti-corruption approaches to procurement.

As always, feedback more than welcome. Not least, because I somehow managed to write this ahead of the submission deadline, so I would have time to adjust things ahead of publication. Thanks in advance: a.sanchez-graells@bristol.ac.uk.

Is the ESPD the enemy of procurement automation in the EU (quick thoughts)

I have started to watch the three-session series on Intelligent Automation in US Federal Procurement hosted by the GW Law Government Procurement Law Program over the last few weeks (worth watching!), as part of my research for a paper on AI and corruption in procurement. The first session in the series focuses in large part on the intelligent automation of information gathering for the purposes of what, in the EU context, are the processes of exclusion and qualitative selection of economic operators. And this got me thinking about how it would (or would not) be possible to replicate some of the projects in an EU jurisdiction (or even at EU-wide level).

And, once again, the issues of the lack of data on which to train algorithms and the lack of representative/comprehensive databases from which to automatically extract information came up. But somehow it seems that the ESPD and the underlying regulatory approach may be making things more difficult.

In the EU, automating mandatory exclusion (not necessarily to have AI adopt decisions, but to have it prepare reports capable of supporting independent decision-making by contracting authorities) would primarily be a matter of checking against databases of prior criminal convictions. This is difficult not only due to the absence of structured databases themselves, but also due to the diversity of the legal regimes and languages involved, as well as the pervasive problems of beneficial ownership and (dis)continuity in corporate personality.

Similarly, for discretionary exclusion, automation would primarily be based on retrieving information concerning grounds not easily or routinely captured in existing databases (eg conflicts of interest), and would also be limited by increasingly constraining CJEU case law demanding case-by-case assessments by the contracting authority in ways that diminish the advantages of automating, eg, red flags based on decisions taken by a different contracting authority (or a centralised authority).

Finally, automating qualitative selection would be almost impossible, as it is currently mostly based on the self-certification implicit in the ESPD. Here, the 2014 Public Procurement Directives tried to achieve administrative simplification not through the once-only principle (which would be useful in creating databases supporting the automation of some parts of the process, but on which a 2017 project does not seem to have generated many advances), but rather through a ‘tell us only if successful’ (or suspected) principle. This naturally diminishes the amount of information the public buyer (and the broader public sector) holds, with repeat tenderers remaining completely invisible for the purposes of automation for as long as they are not awarded contracts.

All of this leads me to think that there is a big blind spot in the current EU approach to open procurement data as the solution to/enabler of automation in EU public procurement practice. In fact, most of the crucial (back office) functions — and especially those relating to probity and quality screenings of tenderers — will not be susceptible to automation until (or rather unless) different databases are created and advanced mechanisms for the interconnection of national databases are established at EU level. And creating those databases will be difficult (or simply will not happen in practice) for as long as the ESPD is in place, unless a parallel system of registration (based on the once-only principle) is developed for the purposes of registering onto and using eProcurement platforms (which seems to raise some issues of its own).

So, all in all, it would seem that more than ever we need to concentrate on the baby step of creating a suitable data architecture if we want to reap the benefits of AI (and robotic process automation in particular) any time soon. As other jurisdictions are starting to move (or crawl, to keep with the metaphor), we should not be wasting our time.

Some thoughts on the Commission's 2021 Report on 'Implementation and best practices of national procurement policies in the Internal Market'


In May 2021, the European Commission published its report on the ‘Implementation and best practices of national procurement policies in the Internal Market’ (the ‘2021 report’). The 2021 report aggregates the national reports sent by Member States in discharge of specific reporting obligations contained in the 2014 Public Procurement Package and offers some insight into the teething issues resulting from its transposition—which may well have become structural issues. In this post, I offer some thoughts on the contents of the 2021 report.

Better late than never?

Before getting to the details of the 2021 report, the first thing to note is the very significant delay in the publication of this information and analysis, as the 2021 report refers to the implementation and practice of procurement covered by the Directives in 2017. The original national reports seem to have been submitted by the Member States (plus Norway, minus Austria for some unexplained reason) in 2018.

Given the limited analysis conducted in the 2021 report, one can wonder why it took the Commission so long. There may be some explanation in the excuses recently put forward to the European Parliament for the continued delay (almost two and a half years, and counting) in reporting on the economic effects of the 2014 rules, although that is less than persuasive. Moreover, given that the reporting obligation incumbent on the Member States is triggered every three years, in 2021 we should have fresh data and analysis of the national reports covering the period 2018-2020 … Oh well, let’s work with what we have.

A missing data (stewardship) nightmare

The 2021 report provides painful evidence of the lack of reliable procurement data in 2017. Nothing new there, sadly—although the detail of the data inconsistencies, including Member States reporting ‘above threshold procurement’ data that differs from what can be extracted from TED (page 4), really should raise a few red flags and prompt a few follow-up questions from the Commission … the open-ended commitment to further investigation (page 4) sounds like too little, too late.

The main issue, though, is that this problem is unlikely to have been solved yet. While there is some promise in the forthcoming implementation of new eForms (to start being used between Nov 2022 and no later than Oct 2023), the broader problem of ensuring uniformity of data collection and (more) timely reporting is likely to remain. It is also surprising to see that the Commission considers that the collection of ‘above threshold’ procurement data is voluntary for Member States (fn 5), when Art 85(1) places them under an obligation to provide ‘missing statistical information’ where it cannot be extracted from (TED) notices.

So, from a governance perspective (and leaving aside the soft, or not so soft, push towards the implementation of OCDS standards in different Member States), it seems that the Commission and the Member States are both happy to just keep shrugging their shoulders at each other when it comes to the incompleteness and low quality of procurement data. Might it be time for the Commission to start enforcing reporting obligations seriously and with adequate follow-up? Or should we wait for the (2024?) second edition of the implementation report to decide to do something then — although it will by then be quite tempting to say that we need to wait and see what effects the (delayed?) adoption of the eForms generates. So maybe in light of the (2027?) third edition of the report?

Lack of capability, and ‘Most frequent sources of wrong application or of legal uncertainty’

The 2021 report includes a section on the reported most frequent sources of incorrect application of the 2014 rules, or perceived areas of legal uncertainty. This section, however, starts with a list of issues that rather point to a shortfall of capabilities in the procurement workforce in (some?) Member States. Again, while the Commission’s work on procurement professionalisation may have slightly changed the picture, this is primarily a matter for Member State investment. And in the current circumstances, it is difficult to see how the post-pandemic economic recovery funds being channelled through procurement can be effectively spent where there are such staffing issues.

The rest of the section includes some selected issues posing concrete interpretation or practical implementation difficulties, such as the calculation of threshold values, the rules on exclusion, and the rules on award criteria. While these are areas that will always generate some practical challenges, they are not the areas where the 2014 Package generated most change (certainly not on thresholds), and the 2021 report thus seems to keep raising structural issues. The same can be said of the generalised preference for the use of lowest price, the absence of market research and engagement, the imposition of unrealistically short tendering deadlines implicit in rushed procurement, or the arbitrary use of selection criteria.

All of this does not bode well for the ‘strategic use’ of procurement (more below), and it seems that the flexibility and potential for process-based innovation of the 2014 rules (as was the case with the 2004 rules?) are likely to remain largely unused, thus triggering poor procurement practices that will later fuel further claims for flexibilisation and simplification in the next round of revision. On that note, I cannot refrain from pointing to the UK’s recent green paper on the ‘Transformation of Public Procurement’ as a clear example of the persistence of procurement myths that remain in the collective imaginary despite a lack of engagement with recent legislative changes aimed at debunking them (see here, here, and here for more analysis).

Fraud, corruption, conflict of interest and serious irregularities

The 2021 report then has a section that would seem rather positive and uncontroversial at first sight, as it presents (laudable) efforts at Member State level to create robust anti-fraud and anti-corruption institutions, as well as implementations of rules on conflicts of interest that exceed the EU minimum standard, and the development of sophisticated approaches to the prevention and detection of collusion in procurement. Two comments come to mind here.

The first is that the treatment of conflicts of interest in the Directive clearly requires the development of further rules at domestic level, and that the main issue is not whether the statutes contain suitable definitions, but whether conflicts of interest are effectively screened for and (more importantly) reacted to. In that regard, it would be interesting to know, for example, how many findings of an unresolvable conflict of interest have led to the exclusion of tenderers at Member State level since the new rules came into force. If anyone wanted to venture an estimate, I would not expect it to be in the 1000s.

The second comment is that the picture the 2021 report paints of the (2017) development of anti-collusion approaches at Member State level (page 7) puts a large question mark over the need for the recent Notice on tools to fight collusion in public procurement and on guidance on how to apply the related exclusion ground (see comments here). If the Member States were already taking action, why did the (contemporaneous) 2017 Communication on ‘Making public procurement work in and for Europe’ (see here) include a commitment to ‘… develop tools and initiatives addressing this issue and raising awareness to minimise the risks of collusive behaviours on procurement markets. This will include actions to improve the market knowledge of contracting authorities, support to contracting authorities careful planning and design of procurement processes and better cooperation and exchange of information between public procurement and competition authorities. The Commission will also prepare guidelines on the application of the new EU procurement directives on exclusion grounds on collusion.’? Is the Commission perhaps failing to recognise that the 2014 rules, and in particular the new exclusion ground for contemporaneous collusion, created legal uncertainty and complicated the practical application of the emerging domestic practices?

Moreover, the 2021 report includes the relatively secondary comment that the national reports ‘show that developing and applying means for the quantitative assessment of collusion risks in award procedures, mostly in the form of risk indicators, remains a challenge’. This is a big understatement, and the absence of (publicly known?) work by the Commission itself on the development of algorithmic screening for collusion detection can only be explained by the insufficiency of the existing data (which killed off, eg, a recent effort in the UK), which brings us back to the importance of stronger data stewardship if some of these structural issues are to be resolved (or even start to be resolved) any time soon.
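By way of illustration of what such quantitative risk indicators look like in practice, one of the simplest collusion screens flags tenders where submitted bids cluster together suspiciously, which can be a sign of cover bidding. The sketch below uses invented bid figures and an arbitrary threshold; real screens combine many indicators and require validation against known cases, which is precisely where the data insufficiency bites.

```python
from statistics import mean, pstdev

# Illustrative bid data: tender id -> submitted prices (invented figures).
bids = {
    "T1": [100.0, 132.0, 147.0],
    "T2": [200.0, 201.0, 202.5],   # unusually tight spread across bidders
    "T3": [80.0, 95.0, 120.0],
}

def low_variance_flags(bids, cv_threshold=0.02):
    """Flag tenders whose coefficient of variation (stdev / mean) of bid
    prices falls below cv_threshold -- a basic 'cover bidding' screen,
    since genuinely competitive bids rarely cluster this tightly."""
    flagged = []
    for tender, prices in bids.items():
        cv = pstdev(prices) / mean(prices)
        if cv < cv_threshold:
            flagged.append(tender)
    return flagged

print(low_variance_flags(bids))  # -> ['T2']
```

Even this toy screen shows why data matters: without complete bid-level data (not just award notices), the coefficient of variation cannot be computed at all.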

SMEs

There is also little about SME access to procurement in the 2021 report, mainly due to the limited data provided in the national reports (so, again, another justification for a tougher approach to data collection and reporting). However, there are a couple of interesting qualitative issues. The first is that ‘only a limited number of Member States have explicitly mentioned challenges encountered by SMEs in public procurement’ (page 7), which raises some questions about the extent to which SME-centric policy issues rank as high at national level as they do at EU level (which can be relevant in assessing, eg, the also very recent Report on SME needs in public procurement (Feb 2021, but published July 2021)). The second is that the few national strategies seeking to boost SME participation in procurement concern programmes aimed at increasing interactions between SMEs and contracting authorities at the policy and practice design level, as well as training for SMEs. What those programmes have in common is that they require capability and resources to be dedicated to SME procurement policy. Given the shortcomings evidenced in the 2021 report (above), it should be no wonder that most Member States lack the resources to afford them.

Green, social & Innovation | ‘strategic procurement’

Not too dissimilarly, the section on the uptake of ‘strategic procurement’ also points to difficulties derived from limited capability or understanding of these issues amongst public buyers, as well as the perception (at least for green procurement) that it can be detrimental to SME participation. There is also repeated reference to a lack of clarity in the rules and to risks of litigation — both of which are, in the end, dependent on procurement capability, at least to a large extent.

All of this is particularly important, not only because it reinforces the difficulties of conducting complex or sophisticated procurement procedures that exceed the capability (either in terms of skill or, probably more likely, available time) of the procurement workforce, but also because it once again places some big question marks over the feasibility of implementing some of the tall asks derived from, eg, the new green procurement requirements that can be expected to follow from the European Green Deal.

Overall thoughts

All of this leads me to two thoughts that are not in the least original or groundbreaking. First, that procurement data is an enabler of policies and practices (clearly of those supported by digital technologies, but not only those), and that its absence significantly hinders the effectiveness of the procurement function. Second, that there is systemic and long-lasting underinvestment in procurement capability in (most) Member States — about which there is little the European Commission can do — which also significantly hinders the effectiveness of the procurement function.

So, if the current situation is to be changed, a bold and aggressive plan of investment in an enabling data architecture and legal-commercial (and technical) capability is necessary. Conversely, until (or unless) that happens, all plans to use procurement to prop up or reactivate the economy post-pandemic and, more importantly, to face the challenges of the climate emergency are likely to be of extremely limited practical relevance due to failures in their implementation. The 2021 report clearly supports aggressive action on both fronts (even if it refers to the situation in 2017, the problems are very much still current). Will it be taken?

Regulatory trends in public procurement from a competition lens -- 3 short, provocative presentations

I was asked to record three short (and provocative) presentations on some procurement regulatory trends seen from a competition lens. I thought this could be of some interest, so I am sharing them here. The three presentations and the three sets of slides should be available through the links below. Please email me (a.sanchez-graells@bristol.ac.uk) in case of any technical difficulty accessing them, or with any feedback. I hope to start some discussion through the comments section, so please feel free to participate!

1. Transparent procurement: some reflections on its inherent tensions

This short presentation reflects on the tensions between transparency and competition in procurement, with a particular focus on the heightened risks posed by the 'open contracting' movement. It advocates a more nuanced approach to the regulation of procurement transparency in the age of big data [slides].

2. Smart, streamlined procurement: too high hopes for procurement?

This presentation discusses some of the implications and risks resulting from recent regulatory trends in public procurement, from a competition perspective. It focuses on procurement centralisation and the use of procurement to deliver horizontal policies as two of the most salient regulatory trends. It stresses the need for more effective oversight of these more complex forms of procurement [slides].

3. Effective procurement oversight: what to look for & who should do it?

This presentation addresses some of the challenges in creating an effective procurement oversight system. It concentrates on the availability of high quality data, its access by relevant institutions and stakeholders, and the need for a joined-up and collaborative approach where multiple entities have oversight powers/duties. It pays particular attention to the need for collaboration between contracting authorities and competition authorities [slides].

How can we get data scientists excited about public procurement, and everyone excited about data?

This is a short reflection—or rather, a call for action—for procurement lawyers and economists to start thinking about procurement data in new and more exciting ways, and hopefully in a way that can excite data scientists too.

I have just read the paper by A Agrahari and SK Srivastava, ‘A Data Visualization Tool to Benchmark Government Tendering Process: Insights From Two Public Enterprises’ (2019) 26(3) Benchmarking 836-853 (abstract available at SSRN: https://ssrn.com/abstract=3451789, full paper is paywalled). The paper caught my eye because I have recently been thinking about the possibilities that could be unlocked by imminent increases in the volumes of procurement data publicly available, in particular as a result of the new EU rules on eForms (due to be transposed by end 2022) and the increased uptake of OCDS and similar open data initiatives—that is, once current efforts to create an enabling data architecture start to bear fruit.

Not to mince words, I was rather disappointed to read the paper and realise that there is no real data visualization tool (beyond really basic Excel graphs) and that the only information the authors decided to use to support procurement management recommendations mainly concerns periods of time (for tender submission, tender evaluation, etc) as well as very limited formal aspects (such as how many different types of tender sureties were accepted). The most advanced insight in the paper concerns a trend analysis of the expiry of tenders during evaluation (below), which is hardly ground-breaking (but still of some internal use for the procuring entities in question and, perhaps, for external audit bodies, I guess).

Agrahari & Srivastava (2019: 848).


The paper made me keep thinking, though. And the realisation I arrived at is that we need to get the data scientists excited about procurement because lawyers, public managers (and to a large extent, economists) will probably lack the imagination to put the vast volumes of procurement data about to be unlocked to its most effective use. I guess this goes back to general issues of inertia, group-think and professional acculturation.

To be sure, there are quite a few quick improvements to be introduced in a host of legal and economic analyses of the procurement function once reliable data is available (in particular in terms of monitoring and compliance, within the boundaries of Regtech and automated audits), but the more transformative uses of data will probably come from those that can understand the potential in the information (like the statue inside the marble block, I guess), rather than being bounded by the real-world / day-to-day problems that tend to skew our understanding of procurement.

When you look at other areas with an abundance of data, some of the data visualisations are truly amazing (a quick Google search will support this) and have the power to encapsulate the insights from unimaginable volumes of information into very powerful messages. One that keeps blowing my mind is the below visualization of New York City trees by Cloudred (it is interactive, so please go and browse the real thing).

Cloudred.


It seems to me that this type of very visual analysis could (and should!) be applied to procurement in a myriad different forms, eg to compare expenditure per capita across different locations, to compare the mix of expenditure across contracting authorities or departments, as well as to check for a multitude of policy-relevant issues. For example, I imagine a similar interactive graph to represent the holy grail of cross-border procurement interactions in the EU single market, or in the context of free trade agreements … it should also be possible to use similar visualisations to identify different entities holding similar assets (which could then be pooled, traded, etc) … and I am a rather unimaginative lawyer, so data scientists that got excited about the potential of this field could probably take us rather far.
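To make the idea concrete, here is a minimal sketch (in Python, with invented figures, as I have no real dataset to hand) of the kind of per-capita expenditure comparison mentioned above—a data scientist would of course build something far richer and interactive:

```python
# Invented figures purely for illustration -- not real procurement data.
regions = {
    "North": {"expenditure": 120_000_000, "population": 400_000},
    "South": {"expenditure": 95_000_000, "population": 500_000},
    "East": {"expenditure": 60_000_000, "population": 150_000},
    "West": {"expenditure": 150_000_000, "population": 600_000},
}

# Procurement expenditure per capita, ranked from highest to lowest.
per_capita = {r: d["expenditure"] / d["population"] for r, d in regions.items()}
ranked = sorted(per_capita.items(), key=lambda kv: kv[1], reverse=True)

# A crude text 'bar chart' -- a stand-in for a proper interactive visual.
for region, value in ranked:
    print(f"{region:>5}: {'#' * int(value // 50)} {value:,.0f}")
```

Even this toy example surfaces a policy question (why does the smallest region spend the most per head?) that raw expenditure totals would hide.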

Now, the question is, how can we get data scientists excited about procurement?

Of course, we cannot just hope that they will one day discover the unexploited value of procurement data. And it may well be that we need to make an effort to help them understand what procurement is about and what ultimate and more immediate goals it aims to serve, so that they can start to imagine for us the relevant data expressions and the way to construct them. I guess that the first step is probably for us to get excited about data ourselves, so that we can make that contagious.

I think there is a data visualisation challenge to be launched and I would be really excited to be part of making it happen. I just need to find someone with a deep pocket willing to fund it. In the meantime, and more modestly, perhaps we can get a conversation going on what you would like procurement data to be used for and how you imagine it could be done (or even if you cannot imagine it).

I am curious to read about your suggestions in the comments section—or by email at a.sanchez-graells@bristol.ac.uk, if you think there is some app to be developed that you would not want to give away for free.

Postscript: a couple of twitter updates

Two interesting sets of materials have been pointed out to me on Twitter:

Digital technologies, public procurement and sustainability: some exploratory thoughts


**This post is based on the seminar given at the Law Department of Pompeu Fabra University in Barcelona, on 7 November 2019. The slides for the seminar are available here. Please note that some of the issues have been rearranged. I am thankful to participants for the interesting discussion, and to Dr Lela Mélon and Prof Carlos Gómez Ligüerre for the kind invitation to participate in this activity of their research group on patrimonial law. I am also grateful to Karolis Granickas for comments on an earlier draft. The standard disclaimer applies.**


1. Introductory detour

The use of public procurement as a tool to further sustainability goals is not a new topic, but rather the object of a long-running discussion embedded in the broader setting of the use of procurement for the pursuit of horizontal or secondary goals—currently labelled smart or strategic procurement. The instrumentalisation of procurement for (quasi)regulatory purposes gives rise to a number of issues, such as: regulatory transfer; the distortion of the very market mechanisms on which procurement rules rely as a result of added regulatory layers and constraints; legitimacy and accountability issues; complex regulatory impact assessments; professionalisation issues; etc.

Discussions in this field are heavily influenced by normative and policy positions, which are not always clearly spelled out but still drive most of the existing disagreement. My own view is that the use of procurement for horizontal policies is not per se desirable. The simple fact that public expenditure can act as a lever/incentive to affect private (market) behaviour does not mean that it should be used for that purpose at every opportunity and/or in an unconstrained manner. Procurement should not be used in lieu of legislation or administrative regulation where it is a second-best regulatory tool. Embedding regulatory elements that can also achieve horizontal goals in the procurement process should only take place where it has clear synergies with the main goal of procurement: the efficient satisfaction of public sector needs and/or needs in the public interest. This generates a spectrum of potential uses of procurement of a different degree of desirability.

At one end, and at its least desirable, procurement can and is used as a trade barrier for economic protectionism. In my view, this should not happen. At the other end of the spectrum, at its most desirable, procurement can and is (sometimes) used in a manner that supports environmental sustainability and technical innovation. In my view, this should happen, and more than it currently does. In between these two ends, there are uses of procurement for the promotion of labour and social standards, as well as for the promotion of human rights. Controversial as this position is, in my view, the use of procurement for the pursuit of those goals should be subjected to strict proportionality analysis in order to make sure that the secondary goal does not prevent the main purpose of the efficient satisfaction of public sector needs and/or needs in the public interest.

From a normative perspective, thus, I think that there is a wide space of synergy between procurement and environmental sustainability—which goes beyond green procurement and extends to the use of procurement to support a more circular economy—and that this can be used more effectively than is currently the case, due to emerging innovative uses of digital technologies for procurement governance.

This is the topic on which I would like to concentrate, to formulate some exploratory thoughts. The following reflections are focused on the EU context, but hopefully they are of broader relevance. I first zoom in on the strategic priorities of fostering sustainability through procurement (2) and the digitalisation of procurement (3), and critically assess the current state of development of digital technologies for procurement governance (4). I then look at the interaction between both strategic goals, in terms of the potential for sustainable digital procurement (5), which leads to specific discussion of the need for an enabling data architecture (6), the potential of AI for sustainable procurement (7), the potential implementation of blockchains for sustainable procurement (8) and the need to refocus the emerging guidelines on the procurement of digital technologies to stress their sustainability dimension (9). Some final thoughts conclude (10).

2. Public procurement and sustainability

As mentioned above, the use of public procurement to promote sustainability is not a new topic. However, it has been receiving increasing attention in recent policy-making and legislative efforts (see eg this recent update)—though these are yet to translate into the level of practical change required to make a relevant contribution to pressing challenges, such as the climate emergency (for a good critique, see this recent post by Lela Mélon).

Facilitating the inclusion of sustainability-related criteria in procurement was one of the drivers for the new rules in the 2014 EU Public Procurement Package, which create a fairly flexible regulatory framework. Most remaining problems are linked to the implementation of such a framework, not its regulatory design. Cost, complexity and institutional inertia are the main obstacles to a broader uptake of sustainable procurement.

The European Commission is alive to these challenges. In its procurement strategy ‘Making Procurement work in and for Europe’ [COM(2017) 572 final; for a critical assessment, see here], the Commission stressed the need to facilitate and to promote the further uptake of strategic procurement, including sustainable procurement.

However, most of its proposals are geared towards the publication of guidance (such as the Buying Green! Handbook), standardised solutions (such as the library of EU green public procurement criteria) and the sharing of good practices (such as in this library of use cases) and training materials (eg this training toolkit). While these are potentially useful interventions, the main difficulty remains in their adoption and implementation at Member State level.

[Figure: Eco-Innovation Scoreboard]

While it is difficult to have a good view of the current situation (see eg the older studies available here, and the terrible methodology used for this 2015 PWC study for the Commission), it seems indisputable that there are massive differences across EU Member States in terms of sustainability-oriented innovation in procurement.

Taking as a proxy the differences that emerge from the Eco-Innovation Scoreboard, it seems clear that this very different level of adoption of sustainability-related eco-innovation is likely reflective of the different approaches followed by the contracting authorities of the different Member States.

Such disparities create difficulties for policy design and coordination, as acknowledged by the Commission, and they underscore the limitations of its procurement strategy: the main interventions remain dependent on Member States (and their sub-units).

3. Public procurement digitalisation beyond e-Procurement

Similarly to the discussion above, the bidirectional relationship between the use of procurement as a tool to foster innovation, and the adaptation of procurement processes in light of technological innovations is not a new issue. In fact, the transition to electronic procurement (eProcurement) was also one of the main drivers for the revision of the EU rules that resulted in the 2014 Public Procurement Package, as well as the flanking regulation of eInvoicing and the new rules on eForms. eProcurement (broadly understood) is thus an area where further changes will come to fruition within the next 5 years (see timeline below).

[Figure: timeline of upcoming EU eProcurement rules]

However, even a maximum implementation of the EU-level eProcurement rules would still fall short of creating a fully digitalised procurement system. There are, indeed, several aspects where current technological solutions can enable a more advanced and comprehensive eProcurement system. For example, it is possible to automate larger parts of the procurement process and to embed compliance checks (eg in solutions such as the Prozorro system developed in Ukraine). It is also possible to use the data automatically generated by the eProcurement system (or otherwise consolidated in a procurement register) to develop advanced data analytics to support procurement decision-making, monitoring, audit and the deployment of additional screens, such as on conflicts of interest or competition checks.
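One of the screens just mentioned—a competition check—can be sketched in very few lines once tender-level data exists. The records and the 50% flagging threshold below are invented purely for illustration:

```python
from collections import defaultdict

# Invented tender records: (contracting authority, number of bids received).
tenders = [
    ("Authority A", 1), ("Authority A", 1), ("Authority A", 4),
    ("Authority B", 5), ("Authority B", 3),
]

totals = defaultdict(int)
single = defaultdict(int)
for authority, bids in tenders:
    totals[authority] += 1
    if bids == 1:
        single[authority] += 1

# Share of tenders that attracted only one bid, per authority --
# a common red flag for weak competition.
rates = {a: single[a] / totals[a] for a in totals}

# Flag authorities above a (purely illustrative) 50% threshold.
flagged = [a for a, r in rates.items() if r > 0.5]
```

The analytical logic is trivial; the hard part, as discussed throughout this post, is generating reliable data to feed it.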

Progressing the national eProcurement systems to those higher levels of functionality would already represent progress beyond the mandatory eProcurement baseline in the 2014 EU Public Procurement Package and the flanking initiatives listed above; and, crucially, enabling more advanced data analytics is one of the effects sought with the new rules on eForms, which aim to significantly increase the availability of (better) procurement data for transparency purposes.

Although it is an avenue mainly explored in other jurisdictions, and currently in the US context, it is also possible to create public marketplaces akin to Amazon/eBay/etc to generate a more user-friendly interface for different types of catalogue-based eProcurement systems (see eg this recent piece by Chris Yukins).

Beyond that, the (further) digitalisation of procurement is another strategic priority for the European Commission; not only for procurement’s sake, but also in the context of the wider strategy to create an AI-friendly regulatory environment and to use procurement as a catalyst for innovations of broader application – along lines of the entrepreneurial State (Mazzucato, 2013; see here for an adapted shorter version).

Indeed, the Commission has formulated a bold(er) vision for future procurement systems based on emerging digital technologies, in which it sees a transformative potential: “New technologies provide the possibility to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised. There is a unique chance to reshape the relevant systems and achieve a digital transformation” (COM(2017) 572 fin at 11).

Even though the Commission has not been explicit, it may be worth trying to map which of the currently emerging digital technologies could be of (more direct) application to procurement governance and practice. Based on the taxonomy included in a recent OECD report (2019a, Annex C), it is possible to identify the following types and specific technologies with potential procurement application:

AI solutions

  • Virtual Assistants (Chat bots or Voice bots): conversational, computer-generated characters that simulate a conversation to deliver voice- or text-based information to a user via a Web, kiosk or mobile interface. A VA incorporates natural-language processing, dialogue control, domain knowledge and a visual appearance (such as photos or animation) that changes according to the content and context of the dialogue. The primary interaction methods are text-to-text, text-to-speech, speech-to-text and speech-to-speech;

  • Natural language processing: the ability to turn text or audio speech into encoded, structured information, based on an appropriate ontology. The structured data may be used simply to classify a document, as in “this report describes a laparoscopic cholecystectomy,” or it may be used to identify findings, procedures, medications, allergies and participants;

  • Machine Learning: the goal is to devise learning algorithms that do the learning automatically without human intervention or assistance;

  • Deep Learning: allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction;

  • Robotics: deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing;

  • Recommender systems: subclass of information filtering system that seeks to predict the "rating" or "preference" that a user would give to an item;

  • Expert systems: a computer system that emulates the decision-making ability of a human expert;

Digital platforms

  • Distributed ledger technology (DLT): a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralised data storage. A peer-to-peer network is required, as well as consensus algorithms to ensure that replication across nodes is undertaken; blockchain is one of the most common implementations of DLT;

  • Smart contracts: a computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a contract;

  • IoT Platform: platform on which to create and manage applications, to run analytics, and to store and secure your data in order to get value from the Internet of Things (IoT);

Not all technologies are equally relevant to procurement—and some of them are interrelated in a manner that requires concurrent development—but these seem to me to be those with a higher potential to support the procurement function in the future. Their development need not take place solely, or primarily, in the context of procurement. Therefore, their assessment should be carried out in the broader setting of the adoption of digital technologies in the public sector.

4. Digital technologies & the public sector, including procurement

The emergence of the above-mentioned digital technologies is now seen as a potential solution to complex public policy problems, such as the promotion of more sustainable public procurement. Keeping track of all the potential use cases in the public sector is difficult, and the hype around buzzwords such as AI, blockchain or the internet of things (IoT) generates inflated claims of potential solutions to even some of the most wicked public policy problems (eg corruption).

This is reflective of the same hype in private markets, and in particular in financial and consumer markets, where AI is supposed to revolutionise the way we live, almost beyond recognition. There also seems to be an emerging race to the top (or rather, a copy-cat effect) in policy-making circles, as more and more countries adopt AI strategies in the hope of harnessing the potential of these technologies to boost economic growth.

In my view, these are immature technologies, and their likely development and usefulness are difficult to grasp beyond a relatively abstract level of potentiality. As such, I think they may be receiving excessive attention from policy-makers, and possibly also attracting disproportionate levels of investment (or rather, investment diversion).

The implementation of digital technologies in the public sector faces a number of specific difficulties—not least, around data availability and data skills, as stressed in a recent OECD report (2019b). While it is probably beyond doubt that they will have an impact on public governance and the delivery of public services, it is more likely to be incremental rather than disruptive or revolutionary. Along these lines, another recent OECD report (2019c) stresses the need to take a critical look at the potential of artificial intelligence, in particular in relation to public sector use cases.

The OECD report (2019a) mentioned above shows how, despite these general strategies and the high levels of support at the top levels of policy-making, there is limited evidence of significant developments on the ground. This is the case, in particular, regarding the implementation of digital technologies in public procurement, where the OECD documents very limited developments (see table below).

[Table: OECD (2019a) overview of digital technology implementation in public procurement]

Of course, this does not mean that we will not see more and more widespread developments in the coming years, but a note of caution is necessary if we are to embrace realistic expectations about the potential for significant changes resulting from procurement digitalisation. The following sections concentrate on the speculative analysis of such potential use of digital technologies to support sustainable procurement.

5. Sustainable digital procurement

Bringing together the scope for more sustainable public procurement (2), the progressive digitalisation of procurement (3), and the emergence of digital technologies susceptible of implementation in the public sector (4), the combined strategic goal (or ideal) would be to harness the potential of digital technologies to promote (more) sustainable procurement. This is a difficult exercise, surrounded by uncertainty, so the rest of this post is all speculation.

In my view, there are different ways in which digital technologies can be used for sustainability purposes. The contribution that each digital technology (DT) can make depends on its core functionality. In simple functional terms, my understanding is that:

  • AI is particularly apt for the massive processing of (big) data, as well as for the implementation of data-based machine learning (ML) solutions and the automation of some tasks (through so-called robotic process automation, RPA);

  • Blockchain is apt for the implementation of tamper-resistant/evident decentralised data management;

  • The internet of things (IoT) is apt to automate the generation of some data and (could be?) apt to breach the virtual/real frontier through oracle-enabled robotics.

The timeline that we could expect for the development of these solutions is also highly uncertain, although there are expectations for some technologies to mature within the next four years, whereas others may still take closer to ten years.

© Gartner, Aug 2018.


Each of the core functionalities or basic strengths of these digital technologies, as well as their rate of development, will determine a higher or lower likelihood of successful implementation in the area of procurement, which is a highly information/data-sensitive area of public policy and administration. Therefore, it seems unavoidable to first look at the need to create an enabling data architecture as a priority (and pre-condition) to the deployment of any digital technologies.

6. An enabling data architecture as a priority

The importance of the availability of good quality data in the context of digital technologies cannot be over-emphasised (see eg OECD, 2019b). This is also clear to the European Commission, which has included the need to improve the availability of good quality data among its strategic priorities. Indeed, the Commission stressed that “Better and more accessible data on procurement should be made available as it opens a wide range of opportunities to assess better the performance of procurement policies, optimise the interaction between public procurement systems and shape future strategic decisions” (COM(2017) 572 fin at 10-11).

However, despite the launch of a set of initiatives that seek to improve the existing procurement data architecture, there are still significant difficulties in the generation of data [for discussion and further references, see A Sanchez-Graells, “Data-driven procurement governance: two well-known elephant tales” (2019) 24(4) Communications Law 157-170; idem, “Some public procurement challenges in supporting and delivering smart urban mobility: procurement data, discretion and expertise”, in M Finck, M Lamping, V Moscon & H Richter (eds), Smart Urban Mobility – Law, Regulation, and Policy, MPI Studies on Intellectual Property and Competition Law (Springer 2020) forthcoming; and idem, “EU Public Procurement Policy and the Fourth Industrial Revolution: Pushing and Pulling as One?”, Working Paper for the YEL Annual Conference 2019 ‘EU Law in the era of the Fourth Industrial Revolution’].

To be sure, there are impending advances in the availability of quality procurement data as a result of the increased uptake of the Open Contracting Data Standards (OCDS) developed by the Open Contracting Partnership (OCP); the new rules on eForms; the development of eGovernment Application Programming Interfaces (APIs); the 2019 Open Data Directive; the principles of business to government data sharing (B2G data sharing); etc. However, it seems to me that the European Commission needs to exercise clearer leadership in the development of an EU-wide procurement data architecture. There is, in particular, one measure that could be easily adopted and would make a big difference.

The 2019 Open Data Directive (Directive 2019/1024/EU, ODD) establishes a special regime for high-value datasets, which need to be available free of charge (subject to some exceptions); machine readable; provided via APIs; and provided as a bulk download, where relevant (Art 14(1) ODD). Those high-value datasets are yet to be identified by the European Commission through implementing acts aimed at specifying datasets within a list of thematic categories included in Annex I, which includes the following datasets: geospatial; Earth observation and environment; meteorological; statistics; companies and company ownership; and mobility. In my view, most relevant procurement data can clearly fit within the category of statistical information.

More importantly, the directive specifies that the ‘identification of specific high-value datasets … shall be based on the assessment of their potential to: (a) generate significant socioeconomic or environmental benefits and innovative services; (b) benefit a high number of users, in particular SMEs; (c) assist in generating revenues; and (d) be combined with other datasets’ (Art 14(2) ODD). Given the high potential of procurement data to unlock (a), (b) and (d), as well as, potentially, to generate savings analogous to (c), the inclusion of datasets of procurement information in the future list of high-value datasets for the purposes of the Open Data Directive seems like an obvious choice.

Of course, there will be issues to iron out, as not all procurement information is equally susceptible of generating those advantages and there is the unavoidable need to ensure an appropriate balance between the publication of the data and the protection of legitimate (commercial) interests, as recognised by the Directive itself (Art 2(d)(iii) ODD) [for extended discussion, see here]. However, this would be a good step in the direction of ensuring the creation of a forward-looking data architecture.

At any rate, this is not really a radical idea. At least half of the EU is already publishing some public procurement open data, and many Eastern Partnership countries publish procurement data in OCDS (eg Moldova, Ukraine, Georgia). The suggestion here would bring more order into this bottom-up development and would help Member States understand what is expected, where to get help from, etc, as well as ensure the desirable level of uniformity, interoperability and coordination in the publication of the relevant procurement data.
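Part of the attraction of OCDS publication is that a release is plain JSON, so even very simple code can extract comparable figures across jurisdictions. A minimal sketch (the release below is invented and heavily simplified, though the field names follow the OCDS schema):

```python
import json

# An invented, heavily simplified OCDS-style release.
release = json.loads("""
{
  "ocid": "ocds-xxxx-0001",
  "tender": {"title": "Road maintenance",
             "value": {"amount": 500000, "currency": "EUR"}},
  "awards": [{"id": "a1", "value": {"amount": 480000, "currency": "EUR"}}]
}
""")

# Compare the estimated tender value with the total awarded amount.
estimated = release["tender"]["value"]["amount"]
awarded = sum(a["value"]["amount"] for a in release["awards"])
saving = estimated - awarded  # a crude indicator of procurement outcomes
```

Run across the many thousands of releases already published by Member States, even a crude estimated-vs-awarded comparison would support exactly the kind of cross-country analysis that uniform, interoperable publication enables.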

Beyond that, in my view, more needs to be done to also generate backward-looking databases that enable the public sector to design and implement adequate sustainability policies, eg in relation to the repair and re-use of existing assets.

Only when an adequate data architecture is in place will it be possible to deploy advanced digital technologies. Therefore, this should be given the highest priority by policy-makers.

7. Potential AI uses for sustainable public procurement

If/when sufficient data is available, there will be scope for the deployment of several specific implementations of artificial intelligence. It is possible to imagine the following potential uses:

  • Sustainability-oriented (big) data analytics: this should be relatively easy to achieve and it would simply be the deployment of big data analytics to monitor the extent to which procurement expenditure is pursuing or achieving specified sustainability goals. This could support the design and implementation of sustainability-oriented procurement policies and, where appropriate, it could generate public disclosure of that information in order to foster civic engagement and to feed back into political processes.

  • Development of sustainability screens/indexes: this would be a slight variation of the former and could facilitate the generation of synthetic data visualisations that reduced the burden of understanding the data analytics.

  • Machine Learning-supported data analysis with sustainability goals: this could aim to train algorithms to establish eg the effectiveness of sustainability-oriented procurement policies and interventions, with the aim of streamlining existing policies and updating them at a pace and level of precision that would be difficult to achieve by other means.

  • Sustainability-oriented procurement planning: this would entail the deployment of algorithms aimed at predictive analytics that could improve procurement planning, in particular to maximise the sustainability impact of future procurements.
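The first of those uses, sustainability-oriented data analytics, is computationally trivial once the data exists. A toy sketch (all figures invented) of tracking the share of expenditure awarded under green criteria over time:

```python
# Invented contract records: (year, awarded amount, green criteria used?).
contracts = [
    (2018, 1_000_000, False), (2018, 500_000, True),
    (2019, 2_000_000, True), (2019, 1_000_000, False),
]

# Aggregate total and 'green' expenditure per year.
totals = {}
for year, amount, green in contracts:
    total, green_total = totals.get(year, (0, 0))
    totals[year] = (total + amount, green_total + (amount if green else 0))

# Share of expenditure subject to green criteria, per year.
green_share = {year: g / t for year, (t, g) in totals.items()}
```

Again, the bottleneck is not the analytics but the reliable flagging of which contracts actually carried green criteria—which is precisely what better structured procurement data would provide.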

Moreover, where clear rules/policies are specified, there will be scope for:

  • Compliance automation: it is possible to structure procurement processes and authorisations in such a way that compliance with pre-specified requirements is ensured (within the eProcurement system). This facilitates ex ante interventions that could minimise the risk of and the need for ex post contractual modifications or tender cancellations.

  • Recommender/expert systems: it would be possible to use machine learning to assist in the design and implementation of procurement processes in a way that supported the public buyer, in an instance of cognitive computing that could accelerate the gains that would otherwise require more significant investments in professionalisation and specialisation of the workforce.

  • Chatbot-enabled guidance: similarly to the two applications above, the use of procurement intelligence could underpin chatbot-enabled systems that supported the public buyers.
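Compliance automation, in particular, reduces to encoding rules that a draft tender must pass before publication. A minimal sketch—the field names, rules and the green-criteria requirement are all invented for illustration, not drawn from any actual eProcurement system:

```python
# Each rule: (predicate over the draft tender, message shown if it fails).
RULES = [
    (lambda t: t.get("estimated_value", 0) > 0,
     "estimated value must be positive"),
    (lambda t: bool(t.get("cpv_code")),
     "a CPV classification code is required"),
    (lambda t: not t.get("above_threshold") or t.get("green_criteria"),
     "above-threshold tenders must include green award criteria"),
]

def check(tender):
    """Return the list of compliance issues blocking publication."""
    return [msg for rule, msg in RULES if not rule(tender)]

draft = {"estimated_value": 250_000, "cpv_code": "45233141",
         "above_threshold": True, "green_criteria": False}
issues = check(draft)  # publication blocked until the list is empty
```

The ex ante logic is the point: the system refuses to publish a non-compliant tender, rather than leaving the problem to ex post modification or cancellation.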

A further open question is whether AI could ever autonomously generate new sustainability policies. I dare not engage in such an exercise in futurology…

8. Limited use of blockchain/DLTs for sustainable public procurement

[Graph: configurations of distributed ledger technologies (see sources cited below)]

By contrast with the potential for big data and the AI it can enable, the potential for blockchain applications in the context of procurement seems to me much more limited (for further details, see here, here and here). To put it simply, the core advantages of distributed ledger technologies/blockchain derive from their decentralised structure.

Whereas there are several different potential configurations of DLTs (see eg Rauchs et al, 2019 and Alessie et al, 2019, from where the graph is taken), the configuration of the blockchain affects its functionalities—with the highest levels of functionality being created by open and permissionless blockchains.

However, such a structure is fundamentally uninteresting to the public sector, which is unlikely to give up control over the system. This has been repeatedly stressed and confirmed in an overview of recent implementations (OECD, 2019a:16; see also OECD, 2018).

Moreover, even beyond the issue of public sector control, it should be stressed that existing open and permissionless blockchains operate on the basis of a proof-of-work (PoW) consensus mechanism, which has a very high carbon footprint (in particular in the case of Bitcoin). This also makes such systems inapt for sustainable digital procurement implementations.

Therefore, sustainable blockchain solutions (ie private & permissioned, based on proof-of-stake (PoS) or a similar consensus mechanism) are likely to present very limited advantages for procurement implementation over advanced systems of database management—and, possibly, even more generally (see eg this interesting critical paper by Low & Mik, 2019).

Moreover, even if there were a way to work around those constraints and design a viable technical solution, that by itself would still not fix underlying procurement policy complexity, which will necessarily impose constraints on technologies that require deterministic coding, eg:

  • Tenders on a blockchain - the proposals to use blockchain for the implementation of the tender procedure itself are very limited, in my opinion, by the difficulty in structuring all requirements on the basis of IF/THEN statements (see here).

  • Smart (public) contracts - the same constraints apply to smart contracts (see here and here).

  • Blockchain as an information exchange platform (Mélon, 2019, on file) - the proposals to use blockchain mechanisms to exchange information on best practices and tender documentation of successful projects could serve to address some of the confidentiality issues that could arise with ‘standard’ databases. However, regardless of the technical support to the exchange of information, the complexity in identifying best practices and in ensuring their replicability remains. This is evidenced by the European Commission’s Initiative for the exchange of information on the procurement of Large Infrastructure Projects (discussed here when it was announced), which has not been used at all in its first two years (as of 6 November 2019, there were no publicly-available files in the database).
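To make the IF/THEN point above more concrete, here is a toy sketch (entirely hypothetical, written in Python rather than an actual smart contract language such as Solidity): objective admissibility conditions translate cleanly into deterministic rules, but a ‘most economically advantageous tender’ award criterion depends on qualitative inputs and weightings that no deterministic rule can supply.

```python
# Hypothetical sketch: which parts of a tender procedure reduce to IF/THEN rules?

def admissible(bid: dict, deadline: str, price_cap: float) -> bool:
    """Objective conditions (deadlines, price caps) encode cleanly as IF/THEN."""
    return bid["submitted"] <= deadline and bid["price"] <= price_cap

def award_score(bid: dict) -> float:
    """A 'most economically advantageous tender' criterion mixes price with
    qualitative factors (technical merit, sustainability). The 0.6/0.4 weights
    and the 'technical_merit' number must come from human judgement; they
    cannot be generated by a deterministic on-chain rule."""
    return 0.6 * (1000 / bid["price"]) + 0.4 * bid["technical_merit"]

bids = [
    {"id": "A", "submitted": "2019-10-01", "price": 900.0, "technical_merit": 0.7},
    {"id": "B", "submitted": "2019-10-02", "price": 950.0, "technical_merit": 0.9},
]
admitted = [b for b in bids if admissible(b, deadline="2019-10-31", price_cap=1000.0)]
winner = max(admitted, key=award_score)
print(winner["id"])  # → B
```

The first function could plausibly live on a blockchain; the second only gives the illusion of determinism, since its inputs and weights are themselves the product of discretionary judgement.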

9. Sustainable procurement of digital technologies

A final issue to take into consideration is that the procurement of digital technologies needs to itself incorporate sustainability considerations. However, this does not seem to be the case in the context of the hype and over-excitement with the experimentation/deployment of those technologies.

Indeed, there are emerging guidelines on procurement of some digital technologies, such as AI (UK, 2019) (WEF, 2019) (see here for discussion). However, as could be expected, these guidelines are extremely technology-centric and their interaction with broader procurement policies is not necessarily straightforward.

I would argue that, in order for these technologies to enable a more sustainable procurement, sustainability considerations need to be embedded not only in their application, but may well require eg an earlier analysis of whether the life-cycle of existing solutions warrants replacement, or the long-term impacts of the implementation of digital technologies (eg in terms of life-cycle carbon footprint).

Pursuing technological development for its own sake can have significant environmental impacts that must be assessed.

10. Concluding thoughts

This (very long…) blog post has structured some of my thoughts on the interaction of sustainability and digitalisation in the context of public procurement. By way of conclusion, I would just try to translate this into priorities for policy-making (and research). Overall, I believe that the main area of effort for policy-makers should now be in creating an enabling data architecture. Research in the short term can thus focus on its regulation. In the medium-term, and as use cases become clearer in the policy-making sphere, research should move towards the design of digital technology-enabled solutions (for sustainable public procurement, but not only) and their regulation, governance and social impacts. The long-term is too difficult for me to foresee, as there is too much uncertainty. I can only guess that we will cross that bridge when/if we get there…

Reflecting on data-driven and digital procurement governance through two elephant tales

Elephants in a 13th century manuscript. THE BRITISH LIBRARY/ROYAL 12 F XIII


I have uploaded to SSRN the new paper ‘Data-driven and digital procurement governance: Revisiting two well-known elephant tales’ (21 Aug 2019), which I will present at the Annual Conference of the IALS Information Law & Policy Centre on 22 November 2019.

The paper condenses my current thoughts about the obstacles to the deployment of data-driven digital procurement governance due to a lack of reliable quality procurement data sources, as well as my skepticism about the potential for blockchain-based solutions, including smart contracts, to have a significant impact in a public procurement setting where the public buyer is extremely unlikely to give up centralised control of the procurement function. The abstract of the paper is as follows:

This paper takes the dearth of quality procurement data as an empirical point of departure to assess emerging regulatory trends in data-driven and digital public procurement governance and, in particular, the European Commission’s ambition for the single digital procurement market. It resorts to two well-known elephant tales to send a message of caution. It first appeals to the image of medieval bestiary elephants to stress the need to develop a better data architecture that reveals the real state of the procurement landscape, and for the European Commission to stop relying on bad data in the Single Market Scoreboard. The paper then assesses the promises of blockchain and smart contracts for procurement governance and raises the prospect that these may be new white elephants that do not offer significant advantages over existing sophisticated databases, or beyond narrow back-office applications—which leaves a number of unanswered questions regarding the desirability of their implementation. The paper concludes by advocating for EU policymakers to concentrate on developing an adequate data architecture to enable digital procurement governance.

If nothing else, I hope the two elephant tales are convincing.

New paper: ‘Screening for Cartels’ in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold?

I have uploaded a new paper on SSRN, where I critically assess the bid rigging screening tool published by the UK’s Competition and Markets Authority in 2017. I will be presenting it in a few weeks at the V Annual meeting of the Spanish Academic Network for Competition Law. The abstract is as follows:

Despite growing global interest in the use of algorithmic behavioural screens, big data and machine learning to detect bid rigging in procurement markets, the UK’s Competition and Markets Authority (CMA) was under no obligation to undertake a project in this area, much less to publish a bid-rigging algorithmic screening tool and make it generally available. Yet, in 2017 and under self-imposed pressure, the CMA released ‘Screening for Cartels’ (SfC) as ‘a tool to help procurers screen their tender data for signs of illegal bid-rigging activity’ and has since been trying to raise its profile internationally. There is thus a possibility that the SfC tool is not only used by UK public buyers, but also disseminated and replicated in other jurisdictions seeking to implement ‘tried and tested’ solutions to screen for cartels. This paper argues that such a legal transplant would be undesirable.

In order to substantiate this main claim, and after critically assessing the tool, the paper tracks the origins of the indicators included in the SfC tool to show that its functionality is rather limited as compared with alternative models that were put to the CMA. The paper engages with the SfC tool’s creation process to show how it is the result of poor policy-making based on the material dismissal of the recommendations of the consultants involved in its development, and that this has resulted in the mere illusion that big data and algorithmic screens are being used to detect bid rigging in the UK. The paper also shows that, as a result of the ‘distributed model’ used by the CMA, the algorithms underlying the SfC tool cannot be improved through training, that the publication of the SfC tool lowers the likelihood of detecting some types of ‘easy to spot’ cases by signalling areas of ‘cartel sophistication’ that can bypass its tests and that, on the whole, the tool is simply not fit for purpose. This situation is detrimental to the public interest because reliance on a defective screening tool can create a false perception of competition for public contracts, and because it leads to immobilism that delays (or prevents) a much-needed engagement with the extant difficulties in developing a suitable algorithmic screen based on proper big data analytics. The paper concludes that competition or procurement authorities willing to adopt the SfC tool would be buying fool’s gold and that the CMA was wrong to cheat at solitaire to expedite the deployment of a faulty tool.

The full citation of the paper is: Sanchez-Graells, Albert, ‘Screening for Cartels’ in Public Procurement: Cheating at Solitaire to Sell Fool’s Gold? (May 3, 2019). Available at SSRN: https://ssrn.com/abstract=3382270

Further thoughts on data and policy indicators a-propos two recent papers on procurement regulation & competition: comments re (Tas: 2019a&b)

The EUI Robert Schuman Centre for Advanced Studies’ working papers series has two interesting recent additions on the economic analysis of procurement regulation and its effects on competition, efficiency and value for money. Both papers are by BKO Tas.

The first paper: ‘Bunching Below Thresholds to Manipulate Public Procurement’ explores the effects of a contracting authority’s ‘bunching strategy’ to seek to exercise more discretion by artificially estimating the value of future contracts just below the thresholds that would trigger compliance with EU procurement rules. This paper is relevant to the broader discussion on the usefulness and adequacy of current EU (and WTO GPA) value thresholds (see eg the work of Telles, here and here), as well as on the regulatory decisions that EU Member States face on whether to extend the EU rules to ‘below-threshold’ contracts.

The second paper: ‘Effect of Public Procurement Regulation on Competition and Cost-Effectiveness’ uses the World Bank’s ‘Benchmarking Public Procurement’ quality scores to empirically test the positive effects of improved regulation quality on competition and value for money, measured as increases in the number of bidders and the probability that procurement price is lower than estimated cost. This paper is relevant in the context of recent discussions about the usefulness or not of procurement benchmarks, and regarding the increasing concern about reduced number of bids in EU-regulated public tenders.

In this blog post, I reflect on the methodology and insights of both papers, paying particular attention to the fact that both papers build on datasets and/or indexes (TED, the WB benchmark) that I find rather imperfect and unsuitable for this type of analysis (regarding TED, in the context of the Single Market Scoreboard for Public Procurement (SMPP) that builds upon it, see here; regarding the WB benchmark, see here). Therefore, not all criticisms below are to the papers themselves, but rather to the distortions that skewed, incomplete or misleading data and indicators can have on more refined analysis that builds upon them.

Bunching Below Thresholds to Manipulate Procurement (Tas: 2019a)

It is well-known that the EU procurement rules are based on a series of jurisdictional triggers and that one of them concerns value thresholds—currently regulated in Arts 4 & 5 of Directive 2014/24/EU. Contracts with an estimated value above those thresholds are subjected to the entire EU procurement regulation, whereas contracts of a lower value are solely subjected to principles-based requirements where they are of ‘cross-border interest’. Given the obvious temptation/interest in keeping procurement shielded from EU requirements, the EU Directives have included an anti-circumvention rule aimed at preventing Member States from artificially splitting contracts in order to keep their award below the relevant jurisdictional thresholds (Art 5(3) Dir 2014/24). This rule has been interpreted expansively by the Court of Justice of the European Union (see eg here).

‘Bunching Below Thresholds to Manipulate Public Procurement’ examines the effects of a practice that would likely infringe the anti-circumvention rule, as it assesses a strategy of ‘bunching estimated costs just below thresholds’ ‘to exercise more discretion in public procurement’. The paper develops a methodology to identify contracting authorities ‘that have higher probabilities of bunching estimated values below EU thresholds’ (ie manipulative authorities) and finds that ‘[m]anipulative authorities have significantly lower probabilities of employing competitive procurement procedure. The bunching manipulation scheme significantly diminishes cost-effectiveness of public procurement. On average, prices of below threshold contracts are 18-28% higher when the authority has an elevated probability of bunching.’ These are quite striking (but perhaps not surprising) results.
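The intuition behind identifying such authorities can be illustrated with a toy calculation (my own sketch, not the paper's actual regression discontinuity estimator): if estimated values are manipulated, the count of notices in a narrow band just below the threshold will be abnormally high relative to the band just above it.

```python
# Toy illustration of the bunching intuition (not Tas's estimator): compare the
# density of estimated contract values just below vs just above an EU threshold.

THRESHOLD = 5_548_000  # EU works threshold in EUR at the time of writing (2019)
BAND = 200_000         # width of the comparison window (arbitrary choice)

def bunching_ratio(estimated_values: list) -> float:
    below = sum(THRESHOLD - BAND <= v < THRESHOLD for v in estimated_values)
    above = sum(THRESHOLD <= v < THRESHOLD + BAND for v in estimated_values)
    return below / max(above, 1)  # a ratio well above 1 hints at manipulation

# Synthetic data: five estimates squeezed just under the threshold, one above.
values = [5_400_000, 5_500_000, 5_530_000, 5_545_000, 5_547_000, 5_600_000]
print(bunching_ratio(values))  # → 5.0
```

Of course, and as discussed below, this only works if the below-threshold notices actually make it into the database being analysed.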

The paper employs a regression discontinuity approach to determine the likelihood of bunching. In order to do that, the paper relies on the TED database. The paper is certainly difficult to read and hardly intelligible for a lawyer, but there are some issues that raise important questions. One concerns the author’s (mis)understanding of how the WTO GPA and the EU procurement rules operate, in particular when the paper states that ‘Contracts covered by the WTO GPA are subject to additional scrutiny by international organizations and authorities (sic). Accordingly, contracts covered by the WTO GPA are less likely to be manipulated by EU authorities’ (p. 12). This is simply an uncritical transplant of considerations made by the authors of a paper that examined procurement in the Czech Republic, where the relevant threshold between EU covered and non-EU covered procurement would make sense. Here, the distinction between WTO GPA and EU-covered procurement simply makes no sense, given that WTO GPA and EU thresholds are coordinated. This alone raises some issues concerning the tests designed by the author to check the robustness of the hypothesis that bunching leads to inefficiency in procurement expenditure.

Another issue concerns the way in which the author equates open procedures to a ‘first price auction mechanism’ (which they are not exactly) and dismisses other procedures (notably, the restricted procedure) as incapable of ensuring value for money or, more likely, as representative of a higher degree of discretion for the contracting authority—which is a highly questionable assumption.

More importantly, I am not sure that the author understood what is in the TED database and, crucially, what is not there (see section 2 of Tas (2019a) for methodology and data description). Albeit not very clearly, the author presents TED as a comprehensive database of procurement notices—ie, as if 100% of procurement expenditure by Member States was recorded there. However, in the specific context of bunching below thresholds, the TED database is very likely to be incomplete.

Contracting authorities tendering contracts below EU thresholds are under no obligation to publish a contract notice (Art 49 Dir 2014/24). They could publish voluntarily, in particular in the form of a voluntary ex ante transparency (VEAT) notice, but that would make no sense from the perspective of a contracting authority that seeks to avoid compliance with EU rules by bunching (ie manipulating) the estimated contract value, as that would expose it to potential litigation. Most authorities that are bunching their procurement needs (or, in simple terms, avoiding compliance with the EU rules) will not be reflected in the TED database at all, or will not be identified by the methodology used by Tas (2019a), as they will not have filed any notices for contracts below thresholds.

How is it possible that TED includes notices regarding contracts below the EU thresholds, then? Well, this is anybody’s guess, but mine is that a large proportion of those notices will be linked to either countries with a tradition of full transparency (over-reporting), to contracts where there are any doubts about the potential cross-border interest (sometimes assessed over-cautiously), or will be notices with mistakes, where the estimated value of the contract is erroneously indicated as below thresholds.

Even if my guess was incorrect and all notices for contracts with a value below thresholds were accurate and justified by the existence of a potential cross-border interest, the database cannot be considered complete. One of the issues raised (imperfectly) by the Single Market Scoreboard (indicator [3] publication rate) is the relatively low level of procurement that is advertised in TED compared to the (putative/presumptive) total volume of procurement expenditure by the Member States. Without information on the conditions of the vast majority of contract awards (below thresholds, unreported, etc), any analysis of potential losses of competitiveness / efficiency in public expenditure (due to bunching or otherwise) is bound to be misleading.

Moreover, Tas (2019a) is premised on the hypothesis that procurement below EU thresholds allows for significantly more discretion than procurement above those thresholds. However, this hypothesis fails to recognise the variety of transposition strategies at Member State level. While some countries have opted for less stringent below-EU-threshold regimes, others have extended the EU rules to the entirety of their procurement (or, perhaps, to contracts up to and including much lower values than the EU thresholds, with the exception of some class of ‘micropurchases’). This would require the introduction of a control that could refine Tas’ analysis and distinguish those cases of bunching that do lead to more discretion from those that do not (at least formally)—which could perhaps distinguish between price effects derived from national-only transparency and those of more legally-dubious manoeuvring.

In my view, regardless of the methodology and the math underpinning the paper (which I am in no position to assess in detail), once these data issues are taken into account, the story the paper tries to tell breaks down. There are important shortcomings in its empirical strategy that raise significant issues around the strength of its findings—assessed not against the information in TED, but against the (largely unknown, unrecorded) reality of procurement in the EU.

I have no doubt that there is bunching in practice, and that the intuition that it raises procurement costs must be right, but I have serious doubts about the possibility to reliably identify bunching or estimate its effects on the basis of the information in TED, as most culprits will not be included and the effects of below threshold (national) competition only will mostly not be accounted for.

(Good) Regulation, Competition & Cost-Effectiveness (Tas: 2019b)

It is also a very intuitive hypothesis that better regulation should lead to better procurement outcomes and, consequently, that more open and robust procurement rules should lead to more efficiency in the expenditure of public funds. As mentioned above, Tas (2019b) explores this hypothesis and seeks to empirically test it using the TED database and the World Bank’s Benchmarking Public Procurement (in its 2017 iteration, see here). I will not repeat my misgivings about the use of the TED database as a reliable source of information. In this second part, I will solely comment on the use of the WB’s benchmark.

The paper relies on four of the WB’s benchmark indicators (one further constructed by Djankov et al (2017)): the ‘bid preparation score, bid and contract management score, payment of suppliers score and PP overall index’. The paper includes a useful table with these values (see Tas (2019b: Table 4)), which allows the author to rank the countries according to the quality of their procurement regulation. The findings of Tas (2019b) are thus entirely dependent on the quality of the WB’s benchmark and its ability to capture (and distinguish) good procurement regulation.

In order to test the extent to which the WB’s benchmark is a good input for this sort of analysis, I have compared it to the indicator that results from the European Commission’s Single Market Scoreboard for Public Procurement (SMSPP, in its 2018 iteration). The comparison is rather striking …

Source: own elaboration.


Clearly, both sets of indicators are based on different methodologies and measure relatively different things. However, they are both intended to express relevant regulators’ views on what constitutes ‘good procurement regulation’. In my view, both of them fail to do so for reasons already given (see here and here).

The implication for work such as Tas (2019b) is that the reliability of the findings—regardless of the math underpinning them—is as weak as the indicators they are based on. Likely, plugging the same methods into the SMSPP instead of the WB’s index would yield very different results—perhaps, that countries with very low quality of procurement regulation (as per the SMSPP index) achieve better economic results, which would not be a popular story with policy-makers… and the results with either index would also be different if the algorithms were not fed by TED, but by a more comprehensive and reliable database.

So, the most that can be said is that attempts to empirically show the effects of good (or poor) procurement regulation remain doomed to fail or, in perhaps less harsh terms, doomed to tell a story based on a very skewed, narrow and anecdotal understanding of procurement and an incomplete recording of procurement activity. Believe those stories at your own peril…

Data and procurement policy: some thoughts on the Single Market Scoreboard for public procurement

There is a growing interest in the use of big data to improve public procurement performance and to strengthen procurement governance. This is a worthy endeavour and, like many others, I am concentrating my research efforts in this area. I have not been doing this for too long. However, soon after one starts researching the topic, a preliminary conclusion clearly emerges: without good data, there is not much that can be done. No data, no fun. So far so good.

It is thus a little discouraging to confirm that, as is widely accepted, there is no good data architecture underpinning public procurement practice and policy in the EU (and elsewhere). Consequently, there is a rather limited prospect of any real implementation of big data-based solutions, unless and until there is a significant investment in the creation of a proper data foundation that can enable advanced analysis and policy-making. Adopting the Open Contracting Data Standard for the European Union would be a good place to start. We could then discuss to what extent the data needs to be fully open (hint: it should not be, see here and here), but let’s save that discussion for another day.

What a recent Twitter thread has reminded me is that there is a bigger downside to the existence of poor data than being unable to apply advanced big data analytics: the formulation of procurement policy on the basis of poor data and poor(er) statistical analysis.

This reflection emerged on the basis of the 2018 iteration of the Single Market Scoreboard for Public Procurement (the SMSPP), which is the closest the European Commission gets to data-driven policy analysis, as far as I can see. The SMSPP is still work in progress. As such, it requires some close scrutiny and, in my view, strong criticism. As I will develop in the rest of this post, the SMSPP is problematic not solely in the way it presents information—which is clearly laden with implicit policy judgements of the European Commission—but, more importantly, due to its inability to inform either cross-sectional (ie comparative) or time series (ie trend) analysis of public procurement policy in the single market. Before developing these criticisms, I will provide a short description of the SMSPP (as I understand it).

The Single Market Scoreboard for Public Procurement: what is it?

The European Commission has developed the broader Single Market Scoreboard (SMS) as an instrument to support its effort of monitoring compliance with internal market law. The Commission itself explains that the “scoreboard aims to give an overview of the practical management of the Single Market. The scoreboard covers all those areas of the Single Market where sufficient reliable data are available. Certain areas of the Single Market such as financial services, transport, energy, digital economy and others are closely monitored separately by the responsible Commission services” (emphasis added). The SMS organises information in different ways, such as by stage in the governance cycle; by performance per Member State; by governance tool; by policy area or by state of trade integration and market openness (the latter two are still work in progress).

The SMS for public procurement (SMSPP) is an instance of SMS by policy area. It thus represents the Commission’s view that the SMSPP is (a) based on sufficiently reliable data, as it is fed from the database resulting from the mandatory publications of procurement notices in the Tenders Electronic Daily (TED), and (b) a useful tool to provide an overview of the functioning of the single market for public procurement or, in other words, of the ‘performance’ of public procurement, defined as a measure of ‘whether purchasers get good value for money’.

The SMSPP determines the overall performance of a given Member State by aggregating a number of indicators. Currently, the SMSPP is based on 12 indicators (it used to be based on a smaller number, as discussed below): [1] Single bidder; [2] No calls for bids; [3] Publication rate; [4] Cooperative procurement; [5] Award criteria; [6] Decision speed; [7] SME contractors; [8] SME bids; [9] Procedures divided into lots; [10] Missing calls for bids; [11] Missing seller registration numbers; [12] Missing buyer registration numbers. As the SMSPP explains, the addition of these indicators results in the measure of ‘overall performance’, which

is a sum of scores for all 12 individual indicators (by default, a satisfactory performance in an individual indicator increases the overall score by one point while an unsatisfactory performance reduces it by one point). The 3 most important are triple-weighted (Single bidder, No calls for bids and Publication rate). This is because they are linked with competition, transparency and market access–the core principles of good public procurement. Indicators 7-12 receive a one-third weighting. This is because they measure the same concepts from different perspectives: participation by small firms (indicators 7-9) and data quality (indicators 10-12).
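As I read that methodology, the aggregation can be reconstructed in a few lines of Python (my own simplification: I treat every indicator as scoring +1 for satisfactory or -1 for unsatisfactory, ignoring any intermediate band the scoreboard may apply):

```python
# Reconstruction (my reading) of the SMSPP 'overall performance' aggregation:
# indicators 1-3 are triple-weighted, 4-6 have weight 1, 7-12 weight one-third.
from fractions import Fraction  # exact arithmetic avoids 1/3 rounding noise

WEIGHTS = [3, 3, 3, 1, 1, 1] + [Fraction(1, 3)] * 6

def overall_performance(scores: list) -> Fraction:
    """scores[i] is +1 (satisfactory) or -1 (unsatisfactory) for indicator i+1."""
    assert len(scores) == 12
    return sum(w * s for w, s in zip(WEIGHTS, scores))

# A country satisfactory on the three core indicators but on nothing else:
scores = [+1, +1, +1] + [-1] * 9
print(overall_performance(scores))  # → 4 (ie 9 - 3 - 2)
```

Even written out this baldly, the judgement-laden choices (why triple weight for those three? why one-third for the rest?) are apparent.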

The most recent snapshot of overall procurement performance is represented in the map below, which would indicate that procurement policy is rather dysfunctional—as most EEA countries do not seem to be doing very well.

Source: European Commission, 2018 Single Market Scorecard for Public Procurement (based on 2017 data).


In my view, this use of the available information is very problematic: (a) to begin with, because the data in TED can hardly be considered ‘sufficiently reliable’. The database in TED has problems of various sorts because it is constructed from the self-declaration of data by the contracting authorities of the Member States, which makes its content highly heterogeneous and difficult to analyse, including significant problems of under-inclusiveness, definitional fuzziness and the lack of filtering of errors—as recognised, repeatedly, in the methodology underpinning the SMSPP itself. This should make one take the results of the SMSPP with more than a pinch of salt. However, these are not all the problems implicit in the SMSPP.

More importantly: (b) the definition of procurement performance and the ways in which the SMSPP seeks to assess it are far from universally accepted. They are rather judgement-laden and reflect the policy biases of the European Commission without making this sufficiently explicit. This issue requires further elaboration.

The SMSPP as an expression of policy-making: more than dubious judgements

I already criticised the Single Market Scoreboard for public procurement three years ago, mainly on the basis that some of the thresholds adopted by the European Commission to establish whether countries performed well or poorly in relation to a given indicator were not properly justified or backed by empirical evidence. Unfortunately, this remains the case and the Commission is yet to make a persuasive case for its decision that, in relation to indicator [4] Cooperative procurement for example, countries that aggregate 10% or more of their procurement achieve good procurement performance, while countries that aggregate less than 10% do not.

Similar issues arise with other indicators, such as [3] Publication rate, which measures the value of procurement advertised on TED as a proportion of national Gross Domestic Product (GDP). It is given threshold values of more than 5% for good performance and less than 2.5% for poor performance. The Commission considers that this indicator is useful because ‘A higher score is better, as it allows more companies to bid, bringing better value for money. It also means greater transparency, as more information is available to the public.’ However, this is inconsistent with the fact that the SMSPP methodology stresses that it is affected by the ‘main shortcoming … that it does not reflect the different weight that government spending has in the economy of a particular’ Member State (p. 13). It also fails to account for different economic models where some Member States can retain a much larger in-house capability than others, as well as failing to reflect other issues such as fiscal policies, etc. Moreover, the SMSPP includes a note that says that ‘Due to delays in data availability, these results are based on 2015 data (also used in the 2016 scoreboard). However, given the slow changes to this indicator, 2015 results are still relevant.’ I wonder how it is possible to establish that there are ‘slow changes’ to the indicator when there is no more current information. On the whole, this is clearly an indicator that should be dropped, rather than included with such a phenomenal number of (partially hidden) caveats.
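For what it is worth, the mechanics of the indicator are trivial to express (a hypothetical sketch; the label for the band between 2.5% and 5% is my own assumption, as the scoreboard only states the two cut-offs):

```python
# Publication rate indicator: TED-advertised procurement value over GDP, with
# the scoreboard's stated cut-offs (>5% satisfactory, <2.5% unsatisfactory).

def publication_rate_score(ted_value: float, gdp: float) -> str:
    rate = ted_value / gdp
    if rate > 0.05:
        return "satisfactory"
    if rate < 0.025:
        return "unsatisfactory"
    return "average"  # middle-band label assumed, not stated in the methodology

print(publication_rate_score(ted_value=30e9, gdp=1_000e9))  # 3% of GDP → average
```

The triviality of the computation is precisely the problem: everything contentious sits in the numerator (what TED actually records) and in the choice of cut-offs.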

On the whole, then, the SMSPP and a number of the indicators on which it is based are reflective of the implicit policy biases of the European Commission. In my view, it is disingenuous to try to save this by simply stressing that the SMSPP and its indicators:

Like all indicators, however, they simplify reality. They are affected by country-specific factors such as what is actually being bought, the structure of the economies concerned, and the relationships between different tendering options, none of which are taken into account. Also, some aspects of public procurement have been omitted entirely or covered only indirectly, e.g. corruption, the administrative burden and professionalism. So, although the Scoreboard provides useful information, it gives only a partial view of EU countries' public procurement performance.

I would rather argue that, in these conditions, the SMSPP is not really useful, in particular because it fails to enable the two types of analysis that could offer valuable insights even despite the shortcomings of the underlying indicators: first, a cross-sectional analysis comparing different countries under a single indicator; and second, a trend analysis of the evolution of procurement ‘performance’ in the single market and/or in a given country.

The SMSPP and cross-sectional analysis: not fit for purpose

This criticism is largely implicit in the previous discussion, as the creation of indicators that are not reflective of ‘country-specific factors such as what is actually being bought, the structure of the economies concerned, and the relationships between different tendering options’ by itself prevents meaningful comparisons across the single market. Moreover, a closer look at the SMSPP methodology reveals that there are further issues that make such cross-sectional analysis difficult. To continue the discussion concerning indicator [4] Cooperative procurement, it is remarkable that the SMSPP methodology indicates that

[In previous versions] the only information on cooperative procurement was a tick box indicating that "The contracting authority is purchasing on behalf of other contracting authorities". This was intended to mean procurement in one of two cases: "The contract is awarded by a central purchasing body" and "The contract involves joint procurement". This has been made explicit in the [current methodology], where these two options are listed instead of the option on joint procurement. However, as always, there are exceptions to how uniformly this definition has been accepted across the EU. Anecdotally, in Belgium, this field has been interpreted as meaning that the management of the procurement procedure has been outsource[d] (e.g. to a legal company) -which explains the high values of this indicator for Belgium.

In simple terms, what this means is that the data point for Belgium (and any other country?) should have been excluded from analysis. In contrast, the SMSPP presents Belgium as achieving a good performance under this indicator—which, in turn, skews the overall performance of the country (which is, by the way, one of the few achieving positive overall performance… perhaps due to these data issues?).

This should give us some pause before we decide to give any meaning to cross-country comparisons at all. Additionally, as discussed below, we cannot (simply) rely on year-on-year comparisons of the overall performance of any given country.

The SMSPP and time series analysis: not fit for purpose

Below is a comparison of the ‘overall performance’ maps published in the last five iterations of the SMSPP.


Source: own elaboration, based on the European Commission’s Single Market Scoreboard for Public Procurement for the years 2014-2018 (please note that this refers to publication years, whereas the data on which each of the reports is based corresponds to the previous year).

One would be tempted to read these maps as representing a time series and thus as allowing for trend analysis. However, that is not the case, for various reasons. First, the overall performance indicator has been constructed on the basis of different (sub)indicators in different iterations of the SMSPP:

  • the 2014 iteration was based on three indicators: bidder participation, accessibility, and efficiency.

  • the 2015 SMSPP included six indicators: single bidder; no calls for bids; publication rate; cooperative procurement; award criteria and decision speed.

  • the 2016 SMSPP also included six indicators. However, compared to 2015, the 2016 SMSPP omitted ‘publication rate’ and instead added an indicator on ‘reporting problems’.

  • the 2017 SMSPP expanded to nine indicators. Compared to 2016, the 2017 SMSPP reintroduced ‘publication rate’ and replaced ‘reporting problems’ with indicators on ‘missing values’, ‘missing calls for bids’ and ‘missing registration numbers’.

  • the 2018 SMSPP, as mentioned above, is based on 12 indicators. Compared to 2017, the 2018 SMSPP has added indicators on ‘SME contractors’, ‘SME bids’ and ‘procedures divided into lots’. It has also deleted the indicator ‘missing values’ and disaggregated the ‘missing registration numbers’ into ‘missing seller registration numbers’ and ‘missing buyer registration numbers’.
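
The churn in the underlying indicators can be checked mechanically. The sketch below simply encodes the five indicator sets listed above (names abbreviated; my transcription of the lists in this post) and confirms that every pair of consecutive iterations differs.

```python
# Indicator sets per SMSPP iteration, as described in this post
# (abbreviated names; my own transcription, for illustration).
indicators = {
    2014: {"bidder participation", "accessibility", "efficiency"},
    2015: {"single bidder", "no calls for bids", "publication rate",
           "cooperative procurement", "award criteria", "decision speed"},
    2016: {"single bidder", "no calls for bids", "reporting problems",
           "cooperative procurement", "award criteria", "decision speed"},
    2017: {"single bidder", "no calls for bids", "publication rate",
           "cooperative procurement", "award criteria", "decision speed",
           "missing values", "missing calls for bids",
           "missing registration numbers"},
    2018: {"single bidder", "no calls for bids", "publication rate",
           "cooperative procurement", "award criteria", "decision speed",
           "missing calls for bids", "missing seller registration numbers",
           "missing buyer registration numbers", "SME contractors",
           "SME bids", "procedures divided into lots"},
}

# No two consecutive iterations use the same set of indicators:
for year in range(2014, 2018):
    changed = indicators[year] ^ indicators[year + 1]  # symmetric difference
    print(year, "->", year + 1, ":", len(changed), "indicators differ")
```

Any composite built on these shifting foundations cannot support year-on-year comparison, whatever the aggregation rule.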

It is plain that no two consecutive iterations of the SMSPP are based on comparable indicators. Moreover, the way the overall performance is determined has also changed. The SMSPPs for 2014 to 2017 established overall performance as a ‘deviation from the average’ of sorts, whereby countries were given ‘green’ for overall marks above 90% of the average mark, ‘yellow’ for overall marks between 80% and 90% of the average mark, and ‘red’ for marks below 80% of the average mark. In the 2018 SMSPP, by contrast, ‘green’ indicates a score above 3, ‘yellow’ a score between -3 and 3, and ‘red’ a score below -3. In other words, the colour coding for the maps has changed from a measure of relative performance to a measure of absolute performance, which, in fairness, could be more meaningful.
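
A small sketch of the two colour-coding rules described above (my reading of the SMSPP methodology; the score values in the example are hypothetical) shows how the same country could change colour without any change in underlying performance, simply because the rule changed:

```python
def colour_2014_2017(score: float, average: float) -> str:
    """2014-2017 rule: relative performance, as a share of the EU average."""
    ratio = score / average
    if ratio > 0.9:
        return "green"
    if ratio >= 0.8:
        return "yellow"
    return "red"

def colour_2018(score: float) -> str:
    """2018 rule: absolute performance against fixed cut-offs."""
    if score > 3:
        return "green"
    if score > -3:
        return "yellow"
    return "red"

# Hypothetical scores under each regime:
print(colour_2014_2017(85, 100))  # yellow (85% of the average mark)
print(colour_2018(-1))            # yellow (between -3 and 3)
```

Because the two rules are not defined on the same scale, the maps they generate are not comparable, even for a country whose procurement practice stood still.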

As a result of these (and, potentially, other) issues, the SMSPP is clearly unable to support trend analysis, either at single market or country level. However, despite the disclaimers in the published documents, the risk that readers will treat the maps as a time series remains (to the extent that anyone really engages with the SMSPP).

Overall conclusion

The example of the SMSPP does not augur very well for the adoption of data analytics-based policy-making. This is a case where, despite acknowledging shortcomings in the methodology and the data, the Commission has pressed on, seemingly on the premise that ‘some data (analysis) is better than none’. However, in my view, this is the wrong approach. To put it plainly, the SMSPP is rather useless. Worse, it may create the impression that procurement data is being used to design policy and support its implementation. It would be better for the Commission to stop publishing the SMSPP until the underlying data issues are corrected and the methodology is streamlined. Otherwise, the Commission is simply creating noise around data-based analysis of procurement policy, and this can only erode its reputation as a policy-making body and the guardian of the single market.


An incomplete overview of (the promises of) GovTech: some thoughts on Engin & Treleaven (2019)

I have just read the interesting paper by Z Engin & P Treleaven, 'Algorithmic Government: Automating Public Services and Supporting Civil Servants in using Data Science Technologies' (2019) 62(3) The Computer Journal 448–460, https://doi.org/10.1093/comjnl/bxy082 (available on open access). The paper offers a very useful, but somewhat inaccurate and slightly incomplete, overview of data science automation being deployed by governments worldwide (ie GovTech), including the technologies of artificial intelligence (AI), Internet of Things (IoT), big data, behavioral/predictive analytics, and blockchain. I found their taxonomy of GovTech services particularly thought-provoking.

Source: Engin & Treleaven (2019: 449).


In the eyes of a lawyer, the use of the word ‘Government’ to describe all these activities is odd, in particular concerning the category ‘Statutes and Compliance’ (at least on the Statutes part). Moving past that conceptual issue—which reminds us once more of the need for closer collaboration between computer scientists and social scientists, including lawyers—the taxonomy still seems difficult to square with an analysis of the use of GovTech for public procurement governance and practice. While some of its aspects could be subsumed as tools to ‘Support Civil Servants’ or under ‘National Public Records’, the transactional aspects of public procurement and the interaction with public contractors seem more difficult to place in this taxonomy (even if the category of ‘National Physical Infrastructure’ is considered). Therefore, either additional categories or more granularity is needed in order to have a more complete view of the type of interactions between technology and public sector activity (broadly defined).

The paper is also very limited regarding LawTech, as it primarily concentrates on online dispute resolution (ODR) mechanisms, which are only a relatively small aspect of the potential impact of data science automation on the practice of law. In that regard, I would recommend reading the (more complex, but very useful) book by K D Ashley, Artificial Intelligence and Legal Analytics. New Tools for Law Practice in the Digital Age (Cambridge, CUP, 2017).

I would thus recommend reading Engin & Treleaven (2019) with an open mind, and using it more as a collection of examples than a closed taxonomy.

Procurement governance and complex technologies: a promising future?

Thanks to the UK’s Procurement Lawyers’ Association (PLA) and in particular Totis Kotsonis, on Wednesday 6 March 2019, I will have the opportunity to present some of my initial thoughts on the potential impact of complex technologies on procurement governance.

In the presentation, I will aim to critically assess the impacts that complex technologies such as blockchain (or smart contracts), artificial intelligence (including big data) and the internet of things could have for public procurement governance and oversight. Taking the main risks of maladministration of the procurement function (corruption, discrimination and inefficiency) on which procurement law is based as the analytical point of departure, the talk will explore the potential improvements of governance that different complex technologies could bring, as well as any new governance risks that they could also generate.

The slides I will use are at the end of this post. Unfortunately, the hyperlinks do not work, so please email me if you are interested in a fully-accessible presentation format (a.sanchez-graells@bristol.ac.uk).

The event is open to non-PLA members. So if you are in London and fancy joining the conversation, please register following the instructions in the PLA’s event page.