"Tech fixes for procurement problems?" [Recording]

The recording and slides for yesterday’s webinar on ‘Tech fixes for procurement problems?’ co-hosted by the University of Bristol Law School and the GW Law Government Procurement Programme are now available for catch-up if you missed it.

I would like to thank once again Dean Jessica Tillipman (GW Law), Professor Sope Williams (Stellenbosch), and Eliza Niewiadomska (EBRD) for a really interesting discussion, and all participants for their questions. Comments most welcome, as always.

Digital procurement governance: drawing a feasibility boundary

In the current context of rapid and generalised adoption of digital technologies across the public sector, and strategic steers to accelerate the digitalisation of public procurement, decision-makers can be captured by techno-hype and the ‘policy irresistibility’ that can ensue from it (as discussed in detail here, as well as here).

To moderate those pressures and guide experimentation towards the successful deployment of digital solutions, decision-makers must reassess the realistic potential of those technologies in the specific context of procurement governance. They must also consider which enabling factors must be put in place to harness the potential of the digital technologies—which primarily relate to an enabling big data architecture (see here). Combined, the data requirements and the contextualised potential of the technologies will help decision-makers draw a feasibility boundary for digital procurement governance, which should inform their decisions.

In a new draft chapter (num 7) for my book project, I draw such a technology-informed feasibility boundary for digital procurement governance. This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Revisiting the promise: A feasibility boundary for digital procurement governance’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4232973.

Data as the main constraint

It will hardly be surprising to stress again that high-quality big data is a prerequisite for the development and deployment of digital technologies. All digital technologies that could potentially be adopted for procurement governance are data-dependent. Therefore, without adequate data, there is no prospect of successful adoption of the technologies. The difficulties in generating an enabling procurement data architecture are detailed here.

Moreover, new data rules only regulate the capture of data for the future. This means that it will take time for big data to accumulate. Accessing historical data would be a way of building up (big) data and speeding up the development of digital solutions. Moreover, in some contexts, such as in relation to very infrequent types of procurement, or in relation to decisions concerning previous investments and acquisitions, historical data will be particularly relevant (eg to deploy green policies seeking to extend the useful life of current assets through programmes of enhanced maintenance or refurbishment; see here). However, there are significant challenges linked to the creation of backward-looking digital databases, not only relating to the cost of digitising the information, but also to technical difficulties in ensuring the representativity and adequate labelling of pre-existing information.

An additional issue to consider is that a number of governance-relevant insights can only be extracted from a combination of procurement and other types of data. This can include sources of data on potential conflicts of interest (eg family relations, or financial circumstances of individuals involved in decision-making), information on corporate activities and offerings, including detailed information on products, services and means of production (eg in relation to licensing or testing schemes), or information on levels of utilisation of public contracts and satisfaction with the outcomes by those meant to benefit from their implementation (eg users of a public service, or ‘internal’ users within the public administration).

To the extent that the outside sources of information are not digitised, or not in a way that is (easily) compatible or linkable with procurement information, some data-based procurement governance solutions will remain undeliverable. Some developments in digital procurement governance will thus be determined by progress in other policy areas. While there are initiatives to promote the availability of data in those settings (eg the EU’s Data Governance Act, the Guidelines on private sector data sharing, or the Open Data Directive), the voluntariness of many of those mechanisms raises important questions on the likely availability of data required to develop digital solutions.

Overall, there is no guarantee that the data required for the development of some (advanced) digital solutions will be available. A careful analysis of data requirements must thus be a point of concentration for any decision-maker from the very early stages of considering digitalisation projects.

Revised potential of selected digital technologies

Once (or rather, if) that major data hurdle is cleared, the possibilities realistically brought by the functionality of digital technologies need to be embedded in the procurement governance context, which results in the following feasibility boundary for the adoption of those technologies.

Robotic Process Automation (RPA)

RPA can reduce the administrative costs of managing pre-existing digitised and highly structured information in the context of entirely standardised and repetitive phases of the procurement process. RPA can reduce the time invested in gathering and cross-checking information and can thus serve as a basic element of decision-making support. However, RPA cannot increase the volume and type of information being considered (other than in cases where some available information was not being taken into consideration due to eg administrative capacity constraints), and it can hardly be successfully deployed in relation to open-ended or potentially contradictory information points. RPA will also not change or improve the processes themselves (unless they are redesigned with a view to deploying RPA).
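
By way of illustration, the following is a minimal sketch of the kind of rule-based cross-check RPA typically automates. All field names and the registry structure are hypothetical; a real deployment would sit on top of an e-procurement system’s interfaces.

```python
# Minimal sketch of an RPA-style information cross-check (hypothetical field
# names and registry structure; a real bot would query live systems instead).

def check_tenderer_registration(submission: dict, company_registry: dict) -> list[str]:
    """Flag basic discrepancies between a tender submission and registry data."""
    issues = []
    company = company_registry.get(submission["company_id"])
    if company is None:
        return ["Company not found in registry"]
    if company["name"].strip().lower() != submission["company_name"].strip().lower():
        issues.append("Company name does not match registry record")
    if not company.get("tax_compliant", False):
        issues.append("Tax compliance certificate missing or expired")
    return issues  # an empty list frees the officer from this basic check

registry = {"C-001": {"name": "Acme Supplies Ltd", "tax_compliant": True}}
print(check_tenderer_registration(
    {"company_id": "C-001", "company_name": "Acme Supplies Ltd"}, registry))
```

As the sketch makes plain, the bot can only confirm or flag what is already recorded; it cannot weigh open-ended or contradictory information.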

This generates a clear feasibility boundary for RPA deployment, which will generally have as its purpose the optimisation of the time available to the procurement workforce to engage in information analysis rather than information sourcing and basic checks. While this can clearly bring operational advantages, it will hardly transform procurement governance.

Machine Learning (ML)

Developing ML solutions will pose major challenges, not only in relation to the underlying data architecture (as above), but also in relation to regulatory and governance requirements specific to public procurement. Where the operational management of procurement does not diverge from the equivalent function in the (less regulated) private sector, it will be possible to see the adoption or adaptation of similar ML solutions (eg in relation to category spend management). However, where there are regulatory constraints on the conduct of procurement, the development of ML solutions will be challenging.

For example, the need to ensure the openness and technical neutrality of procurement procedures will limit the possibilities of developing recommender systems other than in pre-procured closed lists or environments based on framework agreements or dynamic purchasing systems underpinned by electronic catalogues. Similarly, the intended use of the recommender system may raise significant legal issues concerning eg the exercise of discretion, which can limit their deployment to areas of information exchange or to merely suggestion-based tasks that could hardly replace current processes and procedures. Given the limited utility (or acceptability) of collaborative filtering recommender solutions (the predominant type in consumer-facing private sector uses, such as Netflix or Amazon), there are also constraints on the generality of content-based recommender systems for procurement applications, both at tenderer and at product/service level.

This raises a further feasibility issue, as the functional need to develop a multiplicity of different recommenders not only reopens the issue of data sufficiency and adequacy, but also raises questions of (economic and technical) viability. Recommender systems would mostly only be feasible in highly centralised procurement settings. This could create a push for further procurement centralisation that is not neutral from a governance perspective, and that can certainly generate significant competition issues of a similar nature to, but perhaps of a different order of magnitude than, those raised by procurement centralisation in a less digitally advanced setting. This should be carefully considered, as the knock-on effects of the implementation of some ML solutions may only emerge down the line.
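
To make the distinction concrete, the sketch below shows a content-based recommender over an electronic catalogue, which suggests items by the similarity of their descriptions rather than by other buyers’ behaviour (as collaborative filtering would). The catalogue entries are invented for illustration.

```python
# Minimal sketch of a content-based recommender over an electronic catalogue
# (invented item descriptions; a real system would use structured product data
# from eg a dynamic purchasing system).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = [
    "A4 recycled copier paper, 80gsm, FSC certified",
    "A4 copier paper, 80gsm, standard white",
    "Desktop laser printer, duplex, network-enabled",
    "Multifunction printer with scanner and duplex printing",
]

vectors = TfidfVectorizer().fit_transform(catalogue)

def recommend(item_index: int, top_n: int = 2) -> list[str]:
    """Return the catalogue entries most similar to a given item."""
    scores = cosine_similarity(vectors[item_index], vectors).ravel()
    scores[item_index] = -1.0  # exclude the item itself
    return [catalogue[i] for i in scores.argsort()[::-1][:top_n]]

print(recommend(0))  # the paper items rank above the printers
```

Each product or service category would need its own well-populated catalogue and descriptions for this to work, which is where the data sufficiency and viability issues just noted resurface.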

Similarly, the development and deployment of chatbots is constrained by specific regulatory issues, such as the need to deploy closed domain chatbots (as opposed to open domain chatbots, ie chatbots connected to the Internet, such as virtual assistants built into smartphones), so that the information they draw from can be controlled and quality assured in line with duties of good administration and other legal requirements concerning the provision of information within tender procedures. Chatbots are only suited to high-volume, information-based queries. They would have limited applicability in relation to the specific characteristics of any given procurement procedure, as preparing the specific information to be used by the chatbot would be a challenge—with the added functionality of the chatbot being marginal. Chatbots could facilitate access to pre-existing and curated simple information, but their functionality would quickly hit a ceiling as the complexity of the information progressed. Chatbots would only be able to perform at a higher level if they were plugged into a knowledge base created as an expert system. But then, again, in that case their added functionality would be marginal. Ultimately, the practical space for the development of chatbots is limited to low added value information access tasks. Again, while this can clearly bring operational advantages, it will hardly transform procurement governance.
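
A minimal sketch of such a closed domain, retrieval-based chatbot follows: answers can only come from a curated knowledge base (the entries below are invented), and low-confidence queries are deferred to a human rather than answered.

```python
# Minimal sketch of a closed domain, retrieval-based chatbot: answers are drawn
# only from a curated, quality-assured knowledge base (invented entries), and
# low-confidence matches are escalated to a human instead of being answered.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

faq = {
    "What is the deadline for submitting tenders?":
        "Tenders must be submitted by the deadline in section IV of the contract notice.",
    "How do I register on the e-procurement platform?":
        "Registration is completed via the supplier portal using your company details.",
}

questions = list(faq)
vectorizer = TfidfVectorizer().fit(questions)
question_vectors = vectorizer.transform(questions)

def answer(query: str, threshold: float = 0.3) -> str:
    scores = cosine_similarity(vectorizer.transform([query]), question_vectors).ravel()
    best = int(scores.argmax())
    if scores[best] < threshold:
        return "I cannot answer that; your query has been forwarded to the procurement team."
    return faq[questions[best]]

print(answer("When is the tender deadline?"))
```

The ceiling described above is visible in the structure itself: the bot can only ever return what has been pre-loaded into the knowledge base.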

ML could facilitate the development and deployment of ‘advanced’ automated screens, or red flags, which could identify patterns of suspicious behaviour to then be assessed against the applicable rules (eg administrative and criminal law in case of corruption, or competition law, potentially including criminal law, in case of bid rigging) or policies (eg requirements to comply with specific targets across a broad variety of goals). The trade-off in this type of implementation is between the potential (accuracy) of the algorithmic screening and legal requirements on the explainability of decision-making (as discussed in detail here). Where the screens were not used solely for policy analysis, but acting on the red flag carried legal consequences (eg fines, or even criminal sanctions), the suitability of specific types of ML solutions (eg unsupervised learning solutions tantamount to a ‘black box’) would be doubtful, challenging, or altogether excluded. In any case, the development of ML screens capable of significantly improving on RPA-based automation of current screens is particularly dependent on the existence of adequate data, which is still proving an insurmountable hurdle in many an intended implementation (as above).
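
As a stylised illustration of that tension, consider the following unsupervised screen, which flags tenders with anomalous bidding patterns but offers no reasons for the flag. The features and figures are synthetic; this is a sketch of the technique, not of any system in use.

```python
# Sketch of an unsupervised bid-rigging screen using an isolation forest over
# simple per-tender features (synthetic figures). A flag like this may support
# policy analysis, but its 'black box' character sits uneasily with
# explainability duties if legal consequences were to follow from it.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical features per tender: [number of bidders,
#   winning bid / cost estimate, (second bid - winning bid) / winning bid]
tenders = np.array([
    [5, 0.92, 0.08],
    [6, 0.88, 0.11],
    [4, 0.95, 0.07],
    [2, 1.30, 0.01],  # few bidders, high price, suspiciously tight margin
    [5, 0.90, 0.09],
])

flags = IsolationForest(contamination=0.2, random_state=0).fit_predict(tenders)
for features, flag in zip(tenders, flags):
    if flag == -1:  # -1 marks an anomaly
        print("Red flag for human review:", features)
```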

Distributed ledger technology (DLT) systems and smart contracts

Other procurement governance constraints limit the prospects of wholesale adoption of DLT (or blockchain) technologies, other than for relatively limited information management purposes. The public sector can hardly be expected to adopt DLT solutions that are not heavily permissioned, and that do not include significant safeguards to protect sensitive, commercially valuable, and other types of information that cannot simply be put in the public domain. This means that the public sector is only likely to implement highly centralised DLT solutions, with the public sector granting permissions to access and amend the relevant information. While this can still generate some (degree of) tamper-evidence and permanence in the information management system, the net advantage is likely to be modest when compared to other types of secure information management systems. This can have an important bearing on decisions on whether DLT solutions meet the cost-effectiveness or similar value-for-money criteria controlling their piloting and deployment.

The value proposition of DLT solutions could increase if they enabled significant procurement automation through smart contracts. However, there are massive challenges in translating procurement procedures into a strict ‘if/when ... then’ programmable logic; smart contracts have limited capabilities that are not commensurate with the volume and complexity of procurement information; and their development would only be justified in contexts where a given smart contract (ie specific programme) could be used in a high number of procurement procedures. This limits their scope of applicability to standardised and simple procurement exercises, which creates a functional overlap with some RPA solutions. Even in those settings, smart contracts would pose structural problems in terms of their irrevocability and automaticity. Moreover, they would be unable to generate off-chain effects, and this would not be easily sorted out even with the inclusion of internet of things (IoT) solutions or software oracles. This largely restricts smart contracts to an information exchange mechanism, which does not significantly increase the value added by DLT plus smart contract solutions for procurement governance.
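
The rigidity at issue is easy to see once a payment clause is written out in the required ‘if/when ... then’ form. The sketch below uses Python purely for illustration (real smart contracts would be written in a language such as Solidity and run on-chain) and assumes a delivery confirmation fed in by an oracle.

```python
# Illustrative 'if/when ... then' encoding of a payment clause (Python used
# for readability; actual smart contracts would run on-chain, eg in Solidity).
# Note the rigidity: the contract only reacts to conditions reported to it
# (eg by an IoT device or software oracle), it has no discretion to settle
# disputes, and a triggered payment cannot be revoked on-chain.

def payment_clause(delivery_confirmed: bool, invoice_amount: float,
                   contract_ceiling: float) -> str:
    if delivery_confirmed and invoice_amount <= contract_ceiling:
        return f"RELEASE payment of {invoice_amount:.2f}"  # automatic and irrevocable
    if invoice_amount > contract_ceiling:
        return "HOLD: invoice exceeds contract ceiling"
    return "WAIT: delivery not yet confirmed by the oracle"

print(payment_clause(True, 9500.00, 10000.00))
```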

Conclusion

To conclude, there are significant and difficult-to-solve hurdles in generating an enabling data architecture, especially for digital technologies that require multiple sources of information or data points regarding several phases of the procurement process. Moreover, the realistic potential of most technologies primarily concerns the automation of tasks not involving data analysis or the exercise of procurement discretion, but rather relatively simple information cross-checks or exchanges. Linking back to the discussion in the earlier broader chapter (see here), the analysis above shows that a feasibility boundary emerges whereby the adoption of digital technologies for procurement governance can make contributions in relation to its information intensity, but not easily in relation to its information complexity, at least not in the short to medium term and not in the absence of a significant improvement of the required enabling data architecture. Perhaps in more direct terms, in the absence of a significant expansion in the collection and curation of data, digital technologies can allow procurement governance to do more of the same or to do it quicker, but they cannot enable better procurement driven by data insights, except in relatively narrow settings. Such settings are characterised by centralisation. Therefore, the deployment of digital technologies can be a further source of pressure towards procurement centralisation, which is not a neutral development in governance terms.

This feasibility boundary should be taken into account in considering potential use cases, as well as serve to moderate the expectations that come with the technologies and that can fuel ‘policy irresistibility’. Further, it should be stressed that those potential advantages do not come without their own additional complexities in terms of new governance risks (eg data and data systems integrity, cybersecurity, skills gaps) and requirements for their mitigation. These will be explored in the next stage of my research project.

Public procurement governance as an information-intensive exercise, and the allure of digital technologies

I have just started a 12-month Mid-Career Fellowship funded by the British Academy with the purpose of writing up the monograph Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming).

In the process of writing up, I will be sharing some draft chapters and other thought pieces. I would warmly welcome feedback that can help me polish the final version. As always, please feel free to reach out: a.sanchez-graells@bristol.ac.uk.

In this first draft chapter (num 6), I explore the technological promise of digital governance and use public procurement as a case study of ‘policy irresistibility’. The main ideas in the chapter are as follows:

This Chapter takes a governance perspective to reflect on the process of horizon scanning and experimentation with digital technologies. The Chapter stresses how aspirations of digital transformation can drive policy agendas and make them vulnerable to technological hype, despite technological immaturity and in the face of evidence of the difficulty of rolling out such transformation programmes—eg regarding the still ongoing wave of transition to e-procurement. Delivering on procurement’s goals of integrity, efficiency and transparency requires facing challenges derived from the information intensity and complexity of procurement governance. Digital technologies promise to bring solutions to such informational burden and thus augment decision-makers’ ability to deal with that complexity and with related uncertainty. The allure of the potential benefits of deploying digital technologies generates ‘policy irresistibility’ that can capture decision-making by policymakers overly exposed to the promise of technological fixes to recalcitrant governance challenges. This can in turn result in excessive experimentation with digital technologies for procurement governance in the name of transformation. The Chapter largely focuses on the EU policy framework, but the insights derived from this analysis are easily exportable.

Another draft chapter (num 7) will follow soon with more detailed analysis of the feasibility boundary for the adoption of digital technologies for procurement governance purposes. The full details of this draft chapter are as follows: A Sanchez-Graells, ‘The technological promise of digital governance: procurement as a case study of “policy irresistibility”’ to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4216825.

Algorithmic transparency: some thoughts on UK's first four published disclosures and the standard's usability

© Fabrice Jazbinsek / Flickr.

The Algorithmic Transparency Standard (ATS) is one of the UK’s flagship initiatives for the regulation of public sector use of artificial intelligence (AI). The ATS encourages (but does not mandate) public sector entities to fill in a template to provide information about the algorithmic tools they use, and why they use them [see e.g. Kingsman et al (2022) for an accessible overview].

The ATS is currently being piloted, and has so far resulted in the publication of four disclosures relating to the use of algorithms in different parts of the UK’s public sector. In this post, I offer some thoughts based on these initial four disclosures, in particular from the perspective of the usability of the ATS in facilitating an enhanced understanding of AI use cases, and accountability for those.

The first four disclosed AI use cases

The ATS pilot has so far published information in two batches (on 1 June and 6 July 2022), comprising the following four AI use cases:

  1. Within Cabinet Office, the GOV.UK Data Labs team piloted the ATS for their Related Links tool; a recommendation engine built to aid navigation of GOV.UK (the primary UK central government website) by providing relevant onward journeys from a content page, with the aim of helping users find useful information and content.

  2. In the Department for Health and Social Care and NHS Digital, the QCovid team piloted the ATS with a COVID-19 clinical tool used to predict how at risk individuals might be from COVID-19. The tool was developed for use by clinicians in support of conversations with patients about personal risk, and it uses algorithms to combine a number of factors such as age, sex, ethnicity, height and weight (to calculate BMI), and specific health conditions and treatments in order to estimate the combined risk of catching coronavirus and being hospitalised or catching coronavirus and dying. Importantly, “The original version of the QCovid algorithms were also used as part of the Population Risk Assessment to add patients to the Shielded Patient List in February 2021. These patients were advised to shield at that time were provided support for doing so, and were prioritised for COVID-19 vaccination.”

  3. The Information Commissioner's Office has piloted the ATS with its Registration Inbox AI, which uses a machine learning algorithm to categorise emails sent to its registration inbox and to send out an auto-reply where the algorithm “detects … a request about changing a business address. In cases where it detects this kind of request, the algorithm sends out an autoreply that directs the customer to a new online service and points out further information required to process a change request. Only emails with an 80% certainty of a change of address request will be sent an email containing the link to the change of address form.”

  4. The Food Standards Agency piloted the ATS with its Food Hygiene Rating Scheme (FHRS) – AI, which is an algorithmic tool to help local authorities to prioritise inspections of food businesses based on their predicted food hygiene rating by predicting which establishments might be at a higher risk of non-compliance with food hygiene regulations. Importantly, the tool is of voluntary use and “it is not intended to replace the current approach to generate a FHRS score. The final score will always be the result of an inspection undertaken by [a local authority] officer.”

Harmless (?) use cases

At first glance, and on the basis of the implications of the outcome of the algorithmic recommendation, it would seem that the four use cases are relatively harmless:

  1. If GOV.UK recommends links to content that is not relevant or helpful, the user may simply ignore them.

  2. The outcome of the QCovid tool simply informs the GPs’ (or other clinicians’) assessment of the risk of their patients, and the GPs’ expertise should mediate any incorrect (either over-inclusive, or under-inclusive) assessments by the AI.

  3. If the ICO sends an automatic email with information on how to change their business address to somebody that had submitted a different query, the receiver can simply ignore that email.

  4. Incorrect or imperfect prioritisation of food businesses for inspection could result in the early inspection of a low-risk restaurant, or the late(r) inspection of a higher-risk restaurant, but this is already a risk implicit in allowing restaurants to open pending inspection; AI does not add risk.

However, this approach could be too simplistic or optimistic. It can be helpful to think about what could really happen if the AI got it wrong ‘in a disaster scenario’, based on possible user reactions (a useful approach promoted by the Data Hazards project). It seems to me that, on ‘worst case scenario’ thinking (and without seeking to be exhaustive):

  1. If GOV.UK recommends content that is not helpful but is confusing, the user can either engage in red tape they did not need to complete (wasting both their time and public resources) or, worse, feel overwhelmed, confused or misled and abandon the administrative interaction they were initially seeking to complete. This can lead to exclusion from public services, and be particularly problematic if these situations can have a differential impact on different user groups.

  2. There could be over-reliance on the QCovid algorithm by (too busy) GPs. This could lead to advising ‘as a matter of routine’ the taking of excessive precautions with significant potential impacts on the day to day lives of those affected—as was arguably the case for some of the citizens included in shielding categories in the earlier incarnation of the algorithm. Conversely, GPs that identified problems in the early use of the algorithm could simply ignore it, thus potentially losing the benefits of the algorithm in other cases where it could have been helpful—potentially leading to under-precaution by individuals that could have otherwise been better safeguarded.

  3. Similarly to 1, the provision of irrelevant and potentially confusing information can lead to a waste of resources (e.g. users seeking to change their business registration address because they wrongly think it is a requirement to process their query or, at the lower end of the scale, users having to read and consider information about an administrative process they have no interest in). Beyond that, the classification algorithm could generate loss of queries if there was no human check to verify that the AI classification was correct. If this check takes place anyway, the advantages of automating the sending of the initial email seem rather marginal.

  4. Similar to 2, the incorrect prediction of risk can lead to misuse of resources in the carrying out of inspections by local authorities, potentially pushing down the list of restaurants pending inspection some that are high-risk and that could thus see their inspection repeatedly delayed. This could have important public health implications, at least for those citizens using the yet-to-be-inspected restaurants for longer than they otherwise would. Conversely, inaccurate prioritisations that did not seem to catch the more ‘risky’ restaurants could also lead to local authorities abandoning the tool's use. There is also a risk of profiling of certain types of businesses (and their owners), which could lead to victimisation if the tool was improperly used, or used in relation to restaurants that have been active for a longer period (eg to trigger fresh (re)inspections).

No AI application is thus entirely harmless. Of course, this is just a matter of theoretical speculation—just as it could be speculated whether reduced engagement with the AI would generate a second-tier negative effect, eg if ‘learning’ algorithms could not be revised and improved on the basis of ‘real-life’ feedback on whether their predictions were accurate or not.

I think that this sort of speculation offers a useful yardstick to assess the extent to which the ATS can be helpful and usable. I would argue that the ATS will be helpful to the extent that (a) it provides information susceptible of clarifying whether the relevant risks have been taken into account and properly mitigated or, failing that (b) it provides information that can be used to challenge the insufficiency of any underlying risk assessments or mitigation strategies. Ultimately, AI transparency is not an end in itself, but simply a means of increasing accountability—at least in the context of public sector AI adoption. And it is clear that any degree of transparency generated by the ATS will be an improvement on the current situation, but is the ATS really usable?

Finding out more on the basis of the ATS disclosures

To try to answer that general question on whether the ATS is usable and serves to facilitate increased accountability, I have read the four disclosures in full. Here is my summary/extracts of the relevant bits for each of them.

GOV.UK Related Links

Since May 2019, the tool has been using an algorithm called node2vec (machine learning algorithm that learns network node embeddings) to train a model on the last three weeks of user movement data (web analytics data). The benefits are described as “the tool … predicts related links for a page. These related links are helpful to users. They help users find the content they are looking for. They also help a user find tangentially related content to the page they are on; it’s a bit like when you are looking for a book in the library, you might find books that are relevant to you on adjacent shelves.”

The way the tool works is described in some more detail: “The tool updates links every three weeks and thus tracks changes in user behaviour.” “Every three weeks, the machine learning algorithm is trained using the last three weeks of analytics data and trains a model that outputs related links that are published, overwriting the existing links with new ones.” “The average click through rate for related links is about 5% of visits to a content page. For context, GOV.UK supports an average of 6 million visits per day (Jan 2022). True volumes are likely higher owing to analytics consent tracking. We only track users who consent to analytics cookies …”.
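
For readers curious about the mechanics, a hypothetical sketch of this kind of pipeline follows: user journeys between pages are modelled as a weighted graph, node2vec learns an embedding for each page, and each page's nearest neighbours become its ‘related links’. GOV.UK's actual code and parameters are not part of the disclosure, so everything below (including the choice of the open-source networkx and node2vec packages) is an assumption for illustration.

```python
# Hypothetical related-links pipeline: pages are graph nodes, observed user
# journeys are weighted edges, and node2vec embeddings surface similar pages.
# Package choices and all parameters are illustrative assumptions.
import networkx as nx
from node2vec import Node2Vec

# Edges weighted by observed user movements between content pages (invented)
journeys = [
    ("/vat-rates", "/vat-registration", 120),
    ("/vat-registration", "/register-for-self-assessment", 45),
    ("/vat-rates", "/tax-codes", 30),
]
graph = nx.Graph()
graph.add_weighted_edges_from(journeys)

model = Node2Vec(graph, dimensions=32, walk_length=10, num_walks=50,
                 workers=1).fit(window=5, min_count=1)

# Top related links for a page; on GOV.UK these would be overwritten at the
# next three-weekly retraining run
print(model.wv.most_similar("/vat-rates", topn=2))
```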

The decision process is fully automated, but there is “a way for publishers to add/amend or remove a link from the component. On average this happens two or three times a month.” “Humans have the capability to recommend changes to related links on a page. There is a process for links to be amended manually and these changes can persist. These human expert generated links are preferred to those generated by the model and will persist.” Moreover, “GOV.UK has a feedback link, ‘report a problem with this page’, on every page which allows users to flag incorrect links or links they disagree with.” The tool was subjected to a Data Protection Impact Assessment (DPIA), but no other impact assessments (IAs) are listed.

When it comes to risk identification and mitigation, the disclosure indicates: “A recommendation engine can produce links that could be deemed wrong, useless or insensitive by users (e.g. links that point users towards pages that discuss air accidents).” and that, as mitigation: “We added pages to a deny list that might not be useful for a user (such as the homepage) or might be deemed insensitive (e.g. air accident reports). We also enabled publishers or anyone with access to the tagging system to add/amend or remove links. GOV.UK users can also report problems through the feedback mechanisms on GOV.UK.”

Overall, then, the risk I had identified is only superficially acknowledged, in that the ATS disclosure does not show awareness of the potentially differing implications of incorrect or useless recommendations across the spectrum. The narrative equating the recommendations to browsing the shelves of a library is quite suggestive in that regard, as is the fact that the quality controls are rather limited.

Indeed, it seems that the quality control mechanisms require a high level of effort by every publisher, as they need to check every three weeks whether the (new) related links appearing in each of the pages they publish are relevant and unproblematic. This seems to have reversed the functional balance of convenience. Before the implementation of the tool, only approximately 2,000 out of 600,000 pieces of content on GOV.UK had related links, as they had to be created manually (and thus, hopefully, were relevant, if not necessarily unproblematic). Now, almost all pages have up to five related content suggestions, but only two or three out of 600,000 pages see their links manually amended per month. A question arises whether this extremely low rate of manual intervention is reflective of the high quality of the system, or rather the reverse: evidence of the same lack of quality-assurance resources that previously prevented the vast majority of pages from having this type of related information.

However, despite the queries as to the desirability of the AI implementation as described, the ATS disclosure is in itself useful because it allows the type of analysis above and, in case someone considers the situation unsatisfactory or would like to probe it further, there is a clear gateway to (try to) engage the entity responsible for this AI deployment.

QCovid algorithm

The algorithm was developed at the onset of the Covid-19 pandemic to drive government decisions on which citizens to advise to shield, support during shielding, and prioritise for vaccination rollout. Since the end of the shielding period, the tool has been modified. “The clinical tool for clinicians is intended to support individual conversations with patients about risk. Originally, the goal was to help patients understand the reasons for being asked to shield and, where relevant, help them do so. Since the end of shielding requirements, it is hoped that better-informed conversations about risk will have supported patients to make appropriate decisions about personal risk, either protecting them from adverse health outcomes or to some extent alleviating concerns about re-engaging with society.”

“In essence, the tool creates a risk calculation based on scoring risk factors across a number of data fields pertaining to demographic, clinical and social patient information.” “The factors incorporated in the model include age, ethnicity, level of deprivation, obesity, whether someone lived in residential care or was homeless, and a range of existing medical conditions, such as cardiovascular disease, diabetes, respiratory disease and cancer. For the latest clinical tool, separate versions of the QCOVID models were estimated for vaccinated and unvaccinated patients.”

It is difficult to assess how intensely the tool is (currently) used, although the ATS indicates that “In the period between 1st January 2022 and 31st March 2022, there were 2,180 completed assessments” and that “Assessment numbers often move with relative infection rate (e.g. higher infection rate leads to more usage of the tool).” The ATS also stresses that “The use of the tool does not override any clinical decision making but is a supporting device in the decision making process.” “The tool promotes shared decision making with the patient and is an extra point of information to consider in the decision making process. The tool helps with risk/benefit analysis around decisions (e.g. recommendation to shield or take other precautionary measures).”

The impact assessment of this tool is driven by the assessments mandated for medical devices. The description is thus rather technical and not very detailed, although the selected examples it includes do capture the possibility of somebody being misidentified “as meeting the threshold for higher risk”, as well as someone not having “an output generated from the COVID-19 Predictive Risk Model”. The ATS does stress that “As part of patient safety risk assessment, Hazardous scenarios are documented, yet haven’t occurred as suitable mitigation is introduced and implemented to alleviate the risk.” That mitigation largely seems to be that “The tool is designed for use by clinicians who are reminded to look through clinical guidance before using the tool.”

I think this case shows two things. First, that it is difficult to understand how different parts of the analysis fit together when a tool that has had two very different uses is the object of a single ATS disclosure. There seems to be a good argument for use-case-specific ATS disclosures, even if the underlying AI deployment is the same (or a closely related one), as the implications of different uses also differ from a governance perspective.

Second, that in the context of AI adoption for healthcare purposes, there is a dual barrier to accessing relevant (and understandable) information: the tech barrier and the medical barrier. While the ATS does something to reduce the former, the latter very much remains in place and perhaps turns the issue of trustworthiness of the AI into one of trustworthiness of the clinician, which is not necessarily entirely helpful (not only in this specific use case, but in many others one can imagine). In that regard, it seems that the usability of the ATS is partially limited, and more could be done to increase meaningful transparency through AI-specific IAs, perhaps as proposed by the Ada Lovelace Institute.

In this case, the ATS disclosure has also provided some valuable information, but arguably to a lesser extent than the previous case study.

ICO’s Registration Inbox AI

This is a tool that very much resembles other forms of email classification (e.g. spam filters), as “This algorithmic tool has been designed to inspect emails sent to the ICO’s registration inbox and send out autoreplies to requests made about changing addresses. The tool has not been designed to automatically change addresses on the requester’s behalf. The tool has not been designed to categorise other types of requests sent to the inbox.”

The disclosure indicates that “In a significant proportion of emails received, a simple redirection to an online service is all that is required. However, sifting these types of emails out would also require time if done by a human. The algorithm helps to sift out some of these types of emails that it can then automatically respond to. This enables greater capacity for [Data Protection] Fees Officers in the registration team, who can, consequently, spend more time on more complex requests.” “There is no manual intervention in the process - the links are provided to the customer in a fully automated manner.”

The tool has been in use since May 2021 and classifies approximately 23,000 emails a month.

When it comes to risk identification and mitigation, the ATS disclosure stresses that “The algorithmic tool does not make any decisions, but instead provides links in instances where it has calculated the customer has contacted the ICO about an address change, giving the customer the opportunity to self-serve.” Moreover, it indicates that there is “No need for review or appeal as no decision is being made. Incorrectly classified emails would receive the default response which is an acknowledgement.” It further stresses that “The classification scope is limited to a change of address and a generic response stating that we have received the customer’s request and that it will be processed within an estimated timeframe. Incorrectly classified emails would receive the default response which is an acknowledgement. This will not have an impact on personal data. Only emails with an 80% certainty of a change of address request will be sent an email containing the link to the change of address form.”
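
Putting the quoted elements together, the logic seems to amount to a confidence-gated classifier of the kind sketched below. The ICO's actual model, features, and training data are not disclosed, so the pipeline and examples here are assumptions; only the 80% threshold and the default acknowledgement come from the disclosure.

```python
# Hedged sketch of a confidence-gated autoreply: a text classifier scores each
# email, and only predictions at or above the 80% threshold trigger the
# change-of-address autoreply. The model choice and training examples are
# invented; the threshold and fallback behaviour reflect the disclosure.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_emails = [
    "please update our registered business address",
    "we have moved offices, our new address is below",
    "question about my data protection fee invoice",
    "how do I submit a subject access request",
]
labels = [1, 1, 0, 0]  # 1 = change-of-address request

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_emails, labels)

def handle(email: str) -> str:
    p_change = model.predict_proba([email])[0][1]
    if p_change >= 0.80:
        return "Autoreply sent: link to the change-of-address form"
    return "Default acknowledgement sent; email awaits human handling"

print(handle("our business address has changed, please update your records"))
```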

In my view, this disclosure does not entirely clarify the way the algorithm works (e.g. what happens to emails classified as having requested information on change of address? Are they ‘deleted’ from the backlog of emails requiring a (human) non-automated response?). However, it does provide sufficient information to further consolidate the questions arising from the general description. For example, it seems that the identification of risks is clearly partial in that there is not only a risk of someone asking for change of address information not automatically receiving it, but also a risk of those asking for other information receiving the wrong information. There is also no consideration of additional risks (as above), and the general description makes the claim of benefits doubtful if there has to be a manual check to verify adequate classification.

The ATS disclosure does not provide sufficient contact information for the owner of the AI (perhaps because they were contracted on limited after-service terms…), although there is generic contact information for the ICO that could be used by someone that considered the situation unsatisfactory or would like to probe it further.

Food Hygiene Rating Scheme – AI

This tool is also based on machine learning to make predictions. “A machine learning framework called LightGBM was used to develop the FHRS AI model. This model was trained on data from three sources: internal Food Standards Agency (FSA) FHRS data, publicly available Census data from the 2011 census and open data from HERE API. Using this data, the model is trained to predict the food hygiene rating of an establishment awaiting its first inspection, as well as predicting whether the establishment is compliant or not.” “Utilising the service, the Environmental Health Officers (EHOs) are provided with the AI predictions, which are supplemented with their knowledge about the businesses in the area, to prioritise inspections and update their inspection plan.”
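
To fix ideas, the sketch below shows how a LightGBM compliance classifier of this general kind is trained and used for prioritisation. The features and data are synthetic; the FSA's actual feature engineering over FHRS, census and HERE data is only partly described in the disclosure.

```python
# Hedged sketch of a LightGBM prioritisation model of the kind described
# (synthetic features and labels; not the FSA's actual pipeline).
import numpy as np
from lightgbm import LGBMClassifier

rng = np.random.default_rng(0)
# Hypothetical features per establishment: [business type code,
#   area deprivation index, months since registration], scaled to [0, 1)
X = rng.random((200, 3))
y = (X[:, 1] + rng.normal(0, 0.2, 200) > 0.6).astype(int)  # 1 = non-compliant

model = LGBMClassifier(n_estimators=50).fit(X, y)
risk = model.predict_proba(X)[:, 1]

# Establishments awaiting first inspection, sorted by predicted risk; per the
# disclosure, this would be one input among others to the inspection plan
print(np.argsort(risk)[::-1][:10])
```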

Regarding the justification for the development, the disclosure stresses that “the number of businesses classified as ‘Awaiting Inspection’ on the Food Hygiene Rating Scheme website has increased steadily since the beginning of the pandemic. This has been the key driver behind the development of the FHRS AI use case.” “The objective is to help local authorities become more efficient in managing the hygiene inspection workload in the post-pandemic environment of constrained resources and rapidly evolving business models.”

Interestingly, the disclosure states that the tool “has not been released to actual end users as yet and hence the maintenance schedule is something that cannot be determined at this point in time (June 2022). The Alpha pilot started at the beginning of April 2022, wherein the end users (the participating Local Authorities) have access to the FHRS AI service for use in their day-to-day workings. This section will be updated depending on the outcomes of the Alpha Pilot ...” It remains to be seen whether there will be future updates on the disclosure, but an error in copy-pasting in the ATS disclosure makes it contain the same paragraph but dated February 2022. This stresses the need to date and reference (eg v.1, v.2) the successive versions of the same disclosure, which does not seem to be a field of the current template, as well as to create a repository of earlier versions of the same disclosure.

The section on oversight stresses that “the system has been designed to provide decision support to Local Authorities. FSA has advised Local Authorities to never use this system in place of the current inspection regime or use it in isolation without further supporting information”. It also stresses that “Since there will be no change to the current inspection process by introducing the model, the existing appeal and review mechanisms will remain in place. Although the model is used for prioritisation purposes, it should not impact how the establishment is assessed during the inspection and therefore any challenges to a food hygiene rating would be made using the existing FHRS appeal mechanism.”

The disclosure also provides detailed information on IAs: “The different impact assessments conducted during the development of the use case were 1. Responsible AI Risk Assessment; 2. Stakeholder Impact Assessment; [and] 3. Privacy Impact Assessment.” Concerning the responsible AI risk assessment, in addition to a personal data issue that should belong in the DPIA, the disclosure reports three identified risks very much in line with the ones I had hinted at above: “2. Potential bias from the model (e.g. consistently scoring establishments of a certain type much lower, less accurate predictions); 3. Potential bias from inspectors seeing predicted food hygiene ratings and whether the system has classified the establishment as compliant or not. This may have an impact on how the organisation is perceived before receiving a full inspection; 4. With the use of AI/ML there is a chance of decision automation bias or automation distrust bias occurring. Essentially, this refers to a user being over or under reliant on the system leading to a degradation of human-reasoning.”

The disclosure presents related mitigation strategies as follows: “2. Integration of explainability and fairness related tooling during exploration and model development. These tools will also be integrated and monitored post-alpha testing to detect and mitigate potential biases from the system once fully operational; 3. Continuously reflect, act and justify sessions with business and technical subject matter experts throughout the delivery of the project, along with the use of the three impact assessments outlined earlier to identify, assess and manage project risks; 4. Development of usage guidance for local authorities specifically outlining how the service is expected to be used. This document also clearly states how the service should not be used, for example, the model outcome must not be the only indicator used when prioritising businesses for inspection.”

In this instance, the ATS disclosure is in itself useful because it allows the type of analysis above and, in case someone considers the situation unsatisfactory or would like to probe it further, there is a clear gateway to (try to) engage the entity responsible for this AI deployment. It is also interesting to see that the disclosure specifies that the private provider was engaged “As well as [in] a development role [… to provide] Responsible AI consulting and delivery services, including the application of a parallel Responsible AI sprint to assess risk and impact, enable model explainability and assess fairness, using a variety of artefacts, processes and tools”. This is clearly reflected in the ATS disclosure and could be an example of good practice where organisations lack that in-house capability and/or outsource the development of the AI. Whether that role should fall with the developer, or should rather be separate to avoid organisational conflicts of interest, is a discussion for another day.

Final thoughts

There seems to be a mixed picture on the usability of the ATS disclosures, with some of them not entirely providing (full) usability, or a clear pathway to engage with the specific entity in charge of the development of the algorithmic tool, especially if it was an outsourced provider. In those cases, the public authority that has implemented the AI (even if not the owner of the project) will have to deal with any issues arising from the disclosure. There is also a mixed practice concerning linking to resources other than previously available (open) data (eg open source code, data sources), with only one project (GOV.UK) including them in the disclosures discussed above.

It will be interesting to see how this assessment scales up (to use a term) once disclosures increase in volume. There is clearly a research opportunity arising as soon as more ATS disclosures are published. As a hypothesis, I would submit that disclosure quality is likely to decline with volume, as well as with the withdrawal of whatever support the pilot phase has provided to participating institutions. Let’s see how that empirical issue can be assessed.

The other reflection I have to offer based on these first four disclosures is that there are points of information in the disclosures that can be useful, at least from an academic (and journalistic?) perspective, to assess the extent to which the public sector has the capabilities it needs to harness digital technologies (more on that soon in this blog).

The four reviewed disclosures show that there was one in-house development (GOV.UK), while the others were either procured (QCovid, whose disclosure includes a redacted copy of the contract), or contracted out, perhaps even directly awarded (ICO email classifier and FSA FHRS – AI). And there are some between-the-lines indications that some of the implementations may have been relatively randomly developed, unless there was strong pre-existing reliable statistical data (eg on information requests concerning change of business address). This in itself triggers questions about the procurement or commissioning strategies developed by institutions seeking to harness AI's potential.

From this perspective, the ATS disclosures can be a useful source of information on the extent to which the adoption of AI by the public sector depends as strongly on third-party capabilities as the literature generally hypothesises and/or is starting to demonstrate empirically.

Challenges and Opportunities for UK Procurement During and After the Pandemic

On 30 April, I delivered a webinar on “Challenges and Opportunities for UK Procurement During and After the Pandemic” for the LUPC/SUPC Annual Conference. The slides are available via SlideShare and the recording is available via YouTube (below). Feedback most welcome: a.sanchez-graells@bristol.ac.uk.


3 priorities for policy-makers thinking of AI and machine learning for procurement governance


I find that carrying out research in the digital technologies and governance field can be overwhelming. And that is for an academic currently enjoying the luxury of full-time research leave… so I can only imagine how much more overwhelming it must be for policy-makers thinking about the adoption of artificial intelligence (AI) and machine learning for procurement governance, who need to identify potential use cases and establish viable deployment strategies.

Prioritisation seems particularly complicated, as managing such a significant change requires careful planning and paying attention to a wide variety of potential issues. However, getting prioritisation right is probably the best way of increasing the chances of success for the deployment of digital technologies for procurement governance — as well as in other areas of Regtech, such as financial supervision.

This interesting speech by James Proudman (Executive Director of UK Deposit Takers Supervision, Bank of England) on 'Managing Machines: the governance of artificial intelligence' focuses precisely on such issues. And I find the conclusions particularly enlightening:

First, the observation that the introduction of AI/ML poses significant challenges around the proper use of data, suggests that boards should attach priority to the governance of data – what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.

Second, the observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.

And third, the acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition that need to be overseen. Boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.

These seem to me directly transferable to the context of procurement governance and the design of strategies for the deployment of AI and machine learning, as well as other digital technologies.

First, it is necessary to create an enabling data architecture and to put significant thought into how to extract value from the increasingly available data. In that regard, there are two opportunities that should not be missed. One concerns the treatment of procurement datasets as high-value datasets for the purposes of the special regime of the Open Data Directive (for more details, see section 6 here), which will require careful consideration of the content and level of openness of procurement data in the context of the domestic transpositions that need to be in place by 17 July 2021. The other, related opportunity concerns the implementation of the new rules on eForms for procurement data publications, which Member States need to adopt by 14 November 2022. Building on the data architecture that will result from both sets of changes—which should be coordinated—will allow for the deployment of data analytics and machine learning techniques. The purposes and goals of such deployments also need to be considered carefully, as well as their potential implications.

Second, it seems clear that the changes in the management of procurement data and the quick development of analytics that can support procurement decision-making pile some additional training and upskilling needs on the already existing (and partially unaddressed?) challenges of full consolidation of eProcurement across the EU. Moreover, it should be clear that there is no such thing as an objective and value-neutral implementation of technological governance solutions, and that all levels of accountability need to be provided with adequate data skills and digital literacy upgrades in order to check what is being done at the technical level (for crystal-clear discussion, see van der Voort et al, 'Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?' (2019) 36(1) Government Information Quarterly 27-38). Otherwise, governance mechanisms would be at risk of failure due to techno-capture and/or techno-blindness, whether intended or accidental.

Third, there is an increasing need to manage change and the risks that come with it. In a notoriously risk averse policy field such as procurement, this is no minor challenge. This should also prompt some rethinking of the way the procurement function is organised and its risk-management mechanisms.

Addressing these priorities will not be easy or cheap, but these are the fundamental building blocks required to enable the public procurement sector to reap the benefits of digital technologies as they mature. In consultancy jargon, these are the priorities to ‘future-proof’ procurement strategies. Will they be adopted?

Postscript

It is worth adding that the first and second issues, in particular, lend themselves to strong collaborations between policy-makers and academics. As rightly pointed out by Pencheva et al, 'Big Data and AI – A transformational shift for government: So, what next for research?' (2018) Public Policy and Administration, advance access at 16:

... governments should also support the efforts for knowledge creation and analysis by opening up their data further, collaborating with – and actively seeking inputs from – researchers to understand how Big Data can be utilised in the public sector. Ultimately, the supporting field of academic thought will only be as strong as the public administration practice allows it to be.

Digital technologies, public procurement and sustainability: some exploratory thoughts


** This post is based on the seminar given at the Law Department of Pompeu Fabra University in Barcelona, on 7 November 2019. The slides for the seminar are available here. Please note that some of the issues have been rearranged. I am thankful to participants for the interesting discussion, and to Dr Lela Mélon and Prof Carlos Gómez Ligüerre for the kind invitation to participate in this activity of their research group on patrimonial law. I am also grateful to Karolis Granickas for comments on an earlier draft. The standard disclaimer applies.**


1. Introductory detour

The use of public procurement as a tool to further sustainability goals is not a new topic, but rather the object of a long-running discussion embedded in the broader setting of the use of procurement for the pursuit of horizontal or secondary goals—currently labelled smart or strategic procurement. The instrumentalisation of procurement for (quasi)regulatory purposes gives rise to a number of issues, such as: regulatory transfer; the distortion of the very market mechanisms on which procurement rules rely as a result of added regulatory layers and constraints; legitimacy and accountability issues; complex regulatory impact assessments; professionalisation issues; etc.

Discussions in this field are heavily influenced by normative and policy positions, which are not always clearly spelled out but still drive most of the existing disagreement. My own view is that the use of procurement for horizontal policies is not per se desirable. The simple fact that public expenditure can act as a lever/incentive to affect private (market) behaviour does not mean that it should be used for that purpose at every opportunity and/or in an unconstrained manner. Procurement should not be used in lieu of legislation or administrative regulation where it is a second-best regulatory tool. Embedding regulatory elements that can also achieve horizontal goals in the procurement process should only take place where it has clear synergies with the main goal of procurement: the efficient satisfaction of public sector needs and/or needs in the public interest. This generates a spectrum of potential uses of procurement of a different degree of desirability.

At one end, and at its least desirable, procurement can be, and is, used as a trade barrier for economic protectionism. In my view, this should not happen. At the other end of the spectrum, at its most desirable, procurement can be, and sometimes is, used in a manner that supports environmental sustainability and technical innovation. In my view, this should happen, and more than it currently does. In between these two ends lie uses of procurement for the promotion of labour and social standards, as well as of human rights. Controversial as this position is, in my view, the use of procurement for the pursuit of those goals should be subjected to strict proportionality analysis, in order to make sure that the secondary goal does not undermine the main purpose: the efficient satisfaction of public sector needs and/or needs in the public interest.

From a normative perspective, thus, I think that there is a wide space of synergy between procurement and environmental sustainability—which goes beyond green procurement and extends to the use of procurement to support a more circular economy—and that this can be used more effectively than is currently the case, due to emerging innovative uses of digital technologies for procurement governance.

This is the topic on which I would like to concentrate, to formulate some exploratory thoughts. The following reflections are focused on the EU context, but hopefully they are of broader relevance. I first zoom in on the strategic priorities of fostering sustainability through procurement (2) and the digitalisation of procurement (3), as well as critically assess the current state of development of digital technologies for procurement governance (4). I then look at the interaction between both strategic goals, in terms of the potential for sustainable digital procurement (5), which leads to specific discussion of the need for an enabling data architecture (6), the potential for AI and sustainable procurement (7), the potential for the implementation of blockchains for sustainable procurement (8) and the need to refocus the emerging guidelines on the procurement of digital technologies to stress their sustainability dimension (9). Some final thoughts conclude (10).

2. Public procurement and sustainability

As mentioned above, the use of public procurement to promote sustainability is not a new topic. However, it has been receiving increasing attention in recent policy-making and legislative efforts (see eg this recent update)—though these are yet to translate into the level of practical change required to make a relevant contribution to pressing challenges, such as the climate emergency (for a good critique, see this recent post by Lela Mélon).

Facilitating the inclusion of sustainability-related criteria in procurement was one of the drivers for the new rules in the 2014 EU Public Procurement Package, which create a fairly flexible regulatory framework. Most remaining problems are linked to the implementation of such a framework, not its regulatory design. Cost, complexity and institutional inertia are the main obstacles to a broader uptake of sustainable procurement.

The European Commission is alive to these challenges. In its procurement strategy ‘Making Procurement work in and for Europe’ [COM(2017) 572 final; for a critical assessment, see here], the Commission stressed the need to facilitate and to promote the further uptake of strategic procurement, including sustainable procurement.

However, most of its proposals are geared towards the publication of guidance (such as the Buying Green! Handbook), standardised solutions (such as the library of EU green public procurement criteria) and the sharing of good practices (such as in this library of use cases) and training materials (eg this training toolkit). While these are potentially useful interventions, the main difficulty remains in their adoption and implementation at Member State level.

[Figure: EU Eco-Innovation Scoreboard, comparing Member States]

While it is difficult to have a good view of the current situation (see eg the older studies available here, and the terrible methodology used for this 2015 PWC study for the Commission), it seems indisputable that there are massive differences across EU Member States in terms of sustainability-oriented innovation in procurement.

Taking as a proxy the differences that emerge from the Eco-Innovation Scoreboard, it seems clear that these very uneven levels of adoption of sustainability-related eco-innovation are likely to reflect the different approaches followed by contracting authorities in each Member State.

Such disparities create difficulties for policy design and coordination, as the Commission acknowledges and as the limitations of its procurement strategy reflect. The main interventions are thus dependent on Member States (and their sub-units).

3. Public procurement digitalisation beyond e-Procurement

Similarly to the discussion above, the bidirectional relationship between the use of procurement as a tool to foster innovation, and the adaptation of procurement processes in light of technological innovations is not a new issue. In fact, the transition to electronic procurement (eProcurement) was also one of the main drivers for the revision of the EU rules that resulted in the 2014 Public Procurement Package, as well as the flanking regulation of eInvoicing and the new rules on eForms. eProcurement (broadly understood) is thus an area where further changes will come to fruition within the next 5 years (see timeline below).

[Figure: timeline of forthcoming EU eProcurement, eInvoicing and eForms implementation milestones]

However, even a maximum implementation of the EU-level eProcurement rules would still fall short of creating a fully digitalised procurement system. There are, indeed, several aspects where current technological solutions can enable a more advanced and comprehensive eProcurement system. For example, it is possible to automate larger parts of the procurement process and to embed compliance checks (eg in solutions such as the Prozorro system developed in Ukraine). It is also possible to use the data automatically generated by the eProcurement system (or otherwise consolidated in a procurement register) to develop advanced data analytics to support procurement decision-making, monitoring, audit and the deployment of additional screens, such as on conflicts of interest or competition checks.
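
By way of illustration, the following minimal sketch shows the kind of competition screen such a data layer could support—computing the single-bid rate per contracting authority. All column names and figures are hypothetical assumptions; this is not a description of any actual system such as Prozorro.

```python
import pandas as pd

# Hypothetical extract from an eProcurement register: one row per tender.
tenders = pd.DataFrame({
    "buyer": ["City A", "City A", "City B", "City B", "City B"],
    "bids_received": [1, 4, 1, 1, 2],
})

# Single-bid rate per contracting authority: a common red flag for weak
# competition that an automated screen could surface for auditors.
screen = (
    tenders.assign(single_bid=tenders["bids_received"] == 1)
           .groupby("buyer")["single_bid"]
           .mean()
           .rename("single_bid_rate")
)
print(screen.sort_values(ascending=False))
```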

Progressing the national eProcurement systems to those higher levels of functionality would already represent progress beyond the mandatory eProcurement baseline in the 2014 EU Public Procurement Package and the flanking initiatives listed above; and, crucially, enabling more advanced data analytics is one of the effects sought with the new rules on eForms, which aim to significantly increase the availability of (better) procurement data for transparency purposes.

Although it is an avenue mainly explored in other jurisdictions, currently most prominently in the US context, it is also possible to create public marketplaces akin to Amazon/eBay/etc to generate a more user-friendly interface for different types of catalogue-based eProcurement systems (see eg this recent piece by Chris Yukins).

Beyond that, the (further) digitalisation of procurement is another strategic priority for the European Commission; not only for procurement’s sake, but also in the context of the wider strategy to create an AI-friendly regulatory environment and to use procurement as a catalyst for innovations of broader application – along lines of the entrepreneurial State (Mazzucato, 2013; see here for an adapted shorter version).

Indeed, the Commission has formulated a bold(er) vision for future procurement systems based on emerging digital technologies, in which it sees a transformative potential: “New technologies provide the possibility to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised. There is a unique chance to reshape the relevant systems and achieve a digital transformation” (COM(2017) 572 fin at 11).

Even though the Commission has not been explicit, it may be worth trying to map which of the currently emerging digital technologies could be of (more direct) application to procurement governance and practice. Based on the taxonomy included in a recent OECD report (2019a, Annex C), it is possible to identify the following types and specific technologies with potential procurement application:

AI solutions

  • Virtual Assistants (Chat bots or Voice bots): conversational, computer-generated characters that simulate a conversation to deliver voice- or text-based information to a user via a Web, kiosk or mobile interface. A VA incorporates natural-language processing, dialogue control, domain knowledge and a visual appearance (such as photos or animation) that changes according to the content and context of the dialogue. The primary interaction methods are text-to-text, text-to-speech, speech-to-text and speech-to-speech;

  • Natural language processing: technology that involves the ability to turn text or audio speech into encoded, structured information, based on an appropriate ontology. The structured data may be used simply to classify a document, as in “this report describes a laparoscopic cholecystectomy,” or it may be used to identify findings, procedures, medications, allergies and participants;

  • Machine Learning: the goal is to devise learning algorithms that do the learning automatically without human intervention or assistance;

  • Deep Learning: allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction;

  • Robotics: deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing;

  • Recommender systems: a subclass of information filtering systems that seek to predict the "rating" or "preference" that a user would give to an item;

  • Expert systems: a computer system that emulates the decision-making ability of a human expert;

Digital platforms

  • Distributed ledger technology (DLT): a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralised data storage. A peer-to-peer network is required, as well as consensus algorithms to ensure replication across nodes is undertaken. Blockchain is one of the most common implementations of DLT;

  • Smart contracts: a computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a contract;

  • IoT Platform: a platform on which to create and manage applications, to run analytics, and to store and secure data in order to get value from the Internet of Things (IoT).

Not all technologies are equally relevant to procurement—and some of them are interrelated in a manner that requires concurrent development—but these seem to me to be those with a higher potential to support the procurement function in the future. Their development need not take place solely, or primarily, in the context of procurement. Therefore, their assessment should be carried out in the broader setting of the adoption of digital technologies in the public sector.

4. Digital technologies & the public sector, including procurement

The emergence of the above mentioned digital technologies is now seen as a potential solution to complex public policy problems, such as the promotion of more sustainable public procurement. Keeping track of all the potential use cases in the public sector is difficult and the hype around buzzwords such as AI, blockchain or the internet of things (IoT) generates inflated claims of potential solutions to even some of the most wicked public policy problems (eg corruption).

This is reflective of the same hype in private markets, and in particular in financial and consumer markets, where AI is supposed to revolutionise the way we live, almost beyond recognition. There also seems to be an emerging race to the top (or rather, a copy-cat effect) in policy-making circles, as more and more countries adopt AI strategies in the hope of harnessing the potential of these technologies to boost economic growth.

In my view, these are immature technologies whose likely development and usefulness are difficult to grasp beyond a relatively abstract level of potentiality. As such, I think digital technologies may be receiving excessive attention from policy-makers, and possibly also disproportionate levels of investment (diversion).

The implementation of digital technologies in the public sector faces a number of specific difficulties—not least, around data availability and data skills, as stressed in a recent OECD report (2019b). While it is probably beyond doubt that they will have an impact on public governance and the delivery of public services, it is more likely to be incremental rather than disruptive or revolutionary. Along these lines, another recent OECD report (2019c) stresses the need to take a critical look at the potential of artificial intelligence, in particular in relation to public sector use cases.

The OECD report (2019a) mentioned above shows how, despite these general strategies and the high levels of support at the top levels of policy-making, there is limited evidence of significant developments on the ground. This is the case, in particular, regarding the implementation of digital technologies in public procurement, where the OECD documents very limited developments (see table below).

[Table: OECD (2019a) overview of documented digital technology implementations in public procurement]

Of course, this does not mean that we will not see more and more widespread developments in the coming years, but a note of caution is necessary if we are to embrace realistic expectations about the potential for significant changes resulting from procurement digitalisation. The following sections concentrate on the speculative analysis of such potential use of digital technologies to support sustainable procurement.

5. Sustainable digital procurement

Bringing together the scope for more sustainable public procurement (2), the progressive digitalisation of procurement (3), and the emergence of digital technologies susceptible of implementation in the public sector (4); the combined strategic goal (or ideal) would be to harness the potential of digital technologies to promote (more) sustainable procurement. This is a difficult exercise, surrounded by uncertainty, so the rest of this post is all speculation.

In my view, there are different ways in which digital technologies can be used for sustainability purposes. The contribution that each digital technology (DT) can make depends on its core functionality. In simple functional terms, my understanding is that:

  • AI is particularly apt for the massive processing of (big) data, as well as for the implementation of data-based machine learning (ML) solutions and the automation of some tasks (through so-called robotic process automation, RPA);

  • Blockchain is apt for the implementation of tamper-resistant/evident decentralised data management;

  • The internet of things (IoT) is apt to automate the generation of some data and (could be?) apt to bridge the virtual/real frontier through oracle-enabled robotics.

The timeline that we could expect for the development of these solutions is also highly uncertain, although there are expectations for some technologies to mature within the next four years, whereas others may still take closer to ten years.

[Figure: Gartner hype cycle for emerging technologies. © Gartner, Aug 2018]

Each of the core functionalities or basic strengths of these digital technologies, as well as their rate of development, will determine a higher or lower likelihood of successful implementation in the area of procurement, which is a highly information/data-sensitive area of public policy and administration. Therefore, it seems unavoidable to first look at the need to create an enabling data architecture as a priority (and pre-condition) to the deployment of any digital technologies.

6. An enabling data architecture as a priority

The importance of the availability of good quality data in the context of digital technologies cannot be over-emphasised (see eg OECD, 2019b). This is also clear to the European Commission, as it has also included the need to improve the availability of good quality data as a strategic priority. Indeed, the Commission stressed that “Better and more accessible data on procurement should be made available as it opens a wide range of opportunities to assess better the performance of procurement policies, optimise the interaction between public procurement systems and shape future strategic decisions” (COM(2017) 572 fin at 10-11).

However, despite the launch of a set of initiatives that seek to improve the existing procurement data architecture, there are still significant difficulties in the generation of data [for discussion and further references, see A Sanchez-Graells, “Data-driven procurement governance: two well-known elephant tales” (2019) 24(4) Communications Law 157-170; idem, “Some public procurement challenges in supporting and delivering smart urban mobility: procurement data, discretion and expertise”, in M Finck, M Lamping, V Moscon & H Richter (eds), Smart Urban Mobility – Law, Regulation, and Policy, MPI Studies on Intellectual Property and Competition Law (Springer 2020) forthcoming; and idem, “EU Public Procurement Policy and the Fourth Industrial Revolution: Pushing and Pulling as One?”, Working Paper for the YEL Annual Conference 2019 ‘EU Law in the era of the Fourth Industrial Revolution’].

To be sure, there are impending advances in the availability of quality procurement data as a result of the increased uptake of the Open Contracting Data Standards (OCDS) developed by the Open Contracting Partnership (OCP); the new rules on eForms; the development of eGovernment Application Programming Interfaces (APIs); the 2019 Open Data Directive; the principles of business to government data sharing (B2G data sharing); etc. However, it seems to me that the European Commission needs to exercise clearer leadership in the development of an EU-wide procurement data architecture. There is, in particular, one measure that could be easily adopted and would make a big difference.

The 2019 Open Data Directive (Directive 2019/1024/EU, ODD) establishes a special regime for high-value datasets, which need to be available free of charge (subject to some exceptions); machine readable; provided via APIs; and provided as a bulk download, where relevant (Art 14(1) ODD). Those high-value datasets are yet to be identified by the European Commission through implementing acts aimed at specifying datasets within a list of thematic categories included in Annex I, which includes the following datasets: geospatial; Earth observation and environment; meteorological; statistics; companies and company ownership; and mobility. In my view, most relevant procurement data can clearly fit within the category of statistical information.

More importantly, the directive specifies that the ‘identification of specific high-value datasets … shall be based on the assessment of their potential to: (a) generate significant socioeconomic or environmental benefits and innovative services; (b) benefit a high number of users, in particular SMEs; (c) assist in generating revenues; and (d) be combined with other datasets’ (Art 14(2) ODD). Given the high potential of procurement data to unlock (a), (b) and (d), as well as, potentially, to generate savings analogous to (c), the inclusion of datasets of procurement information in the future list of high-value datasets for the purposes of the Open Data Directive seems like an obvious choice.

Of course, there will be issues to iron out, as not all procurement information is equally susceptible of generating those advantages and there is the unavoidable need to ensure an appropriate balance between the publication of the data and the protection of legitimate (commercial) interests, as recognised by the Directive itself (Art 2(d)(iii) ODD) [for extended discussion, see here]. However, this would be a good step in the direction of ensuring the creation of a forward-looking data architecture.

At any rate, this is not really a radical idea. At least half of the EU is already publishing some public procurement open data, and many Eastern Partnership countries publish procurement data in OCDS (eg Moldova, Ukraine, Georgia). The suggestion here would bring more order into this bottom-up development and would help Member States understand what is expected, where to get help from, etc, as well as ensure the desirable level of uniformity, interoperability and coordination in the publication of the relevant procurement data.
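
To make the interoperability point more concrete, the following is a minimal sketch of how uniformly published OCDS data could be consumed programmatically. The field names (releases, buyer, tender.value.amount) follow the OCDS release schema, but the endpoint URL is hypothetical and the aggregation is illustrative only.

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint serving an OCDS release package as JSON.
URL = "https://example.org/api/ocds/releases.json"

with urlopen(URL) as resp:
    package = json.load(resp)

# Aggregate estimated tender values by buyer, using standard OCDS fields.
totals = {}
for release in package.get("releases", []):
    buyer = release.get("buyer", {}).get("name", "unknown")
    amount = release.get("tender", {}).get("value", {}).get("amount") or 0
    totals[buyer] = totals.get(buyer, 0) + amount

for buyer, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{buyer}: {amount:,.0f}")
```

The point of standardisation is precisely that the same few lines would work against any publisher's data, which is what enables cross-border comparison and coordination.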

Beyond that, in my view, more needs to be done to also generate backward-looking databases that enable the public sector to design and implement adequate sustainability policies, eg in relation to the repair and re-use of existing assets.

Only when an adequate data architecture is in place will it be possible to deploy advanced digital technologies. Therefore, this should be given the highest priority by policy-makers.

7. Potential AI uses for sustainable public procurement

If/when sufficient data is available, there will be scope for the deployment of several specific implementations of artificial intelligence. It is possible to imagine the following potential uses:

  • Sustainability-oriented (big) data analytics: this should be relatively easy to achieve, and it would simply be the deployment of big data analytics to monitor the extent to which procurement expenditure is pursuing or achieving specified sustainability goals. This could support the design and implementation of sustainability-oriented procurement policies and, where appropriate, it could generate public disclosure of that information in order to foster civic engagement and to feed back into political processes (a minimal sketch of this type of analysis follows this list).

  • Development of sustainability screens/indexes: this would be a slight variation of the former and could facilitate the generation of synthetic data visualisations that reduced the burden of understanding the data analytics.

  • Machine Learning-supported data analysis with sustainability goals: this could aim to train algorithms to establish eg the effectiveness of sustainability-oriented procurement policies and interventions, with the aim of streamlining existing policies and updating them at a pace and level of precision that would be difficult to achieve by other means.

  • Sustainability-oriented procurement planning: this would entail the deployment of algorithms aimed at predictive analytics that could improve procurement planning, in particular to maximise the sustainability impact of future procurements.
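
As flagged above, a minimal sketch of sustainability-oriented data analytics could look as follows. It simply computes the share of expenditure awarded under green criteria, per contracting authority; all data, column names and figures are illustrative assumptions.

```python
import pandas as pd

# Hypothetical contract-level data: spend plus a flag recording whether the
# award applied green (GPP) criteria. Column names are illustrative only.
contracts = pd.DataFrame({
    "authority": ["Ministry X", "Ministry X", "City Y", "City Y"],
    "value_eur": [1_200_000, 300_000, 450_000, 950_000],
    "green_criteria": [True, False, False, True],
})

# Share of expenditure channelled through green criteria, per authority:
# the kind of indicator that could feed a sustainability dashboard.
green_share = (
    contracts.assign(green_spend=contracts["value_eur"] * contracts["green_criteria"])
             .groupby("authority")[["green_spend", "value_eur"]]
             .sum()
             .assign(share=lambda df: df["green_spend"] / df["value_eur"])
)
print(green_share["share"])
```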

Moreover, where clear rules/policies are specified, there will be scope for:

  • Compliance automation: it is possible to structure procurement processes and authorisations in such a way that compliance with pre-specified requirements is ensured (within the eProcurement system). This facilitates ex ante interventions that could minimise the risk of, and the need for, ex post contractual modifications or tender cancellations (see the toy rule-check sketch after this list).

  • Recommender/expert systems: it would be possible to use machine learning to assist in the design and implementation of procurement processes in a way that supported the public buyer, in an instance of cognitive computing that could accelerate the gains that would otherwise require more significant investments in professionalisation and specialisation of the workforce.

  • Chatbot-enabled guidance: similarly to the two applications above, the use of procurement intelligence could underpin chatbot-enabled systems that supported the public buyers.
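
The toy rule-check sketch announced above could look as follows. It encodes a handful of machine-checkable gates that a tender dossier must pass before advancing; the rules, field names and thresholds are purely illustrative assumptions, not a statement of any actual legal requirement.

```python
# A toy ex ante compliance gate: a tender dossier only advances if it passes
# all machine-checkable rules. Field names and thresholds are illustrative.
RULES = [
    ("estimated value set", lambda t: t.get("estimated_value", 0) > 0),
    ("green criteria above threshold",
     lambda t: t.get("estimated_value", 0) < 140_000 or t.get("green_criteria", False)),
    ("award criteria defined", lambda t: bool(t.get("award_criteria"))),
]

def check_tender(tender: dict) -> list:
    """Return the rules the tender fails; an empty list means it may proceed."""
    return [name for name, rule in RULES if not rule(tender)]

print(check_tender({"estimated_value": 200_000, "award_criteria": ["price", "CO2"]}))
# -> ['green criteria above threshold']
```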

A further open question is whether AI could ever autonomously generate new sustainability policies. I dare not engage in such an exercise in futurology…

8. Limited use of blockchain/DLTs for sustainable public procurement

[Figure: taxonomy of DLT configurations (Alessie et al, 2019)]

By contrast with the potential for big data and the AI it can enable, the potential for blockchain applications in the context of procurement seems to me much more limited (for further details, see here, here and here). To put it simply, the core advantages of distributed ledger technologies/blockchain derive from their decentralised structure.

Whereas there are several different potential configurations of DLTs (see eg Rauchs et al, 2019 and Alessie et al, 2019, from where the graph is taken), the configuration of the blockchain affects its functionalities—with the highest levels of functionality being created by open and permissionless blockchains.

However, such a structure is fundamentally uninteresting to the public sector, which is unlikely to give up control over the system. This has been repeatedly stressed and confirmed in an overview of recent implementations (OECD, 2019a:16; see also OECD, 2018).

Moreover, even beyond the issue of public sector control, it should be stressed that existing open and permissionless blockchains operate on the basis of a proof-of-work (PoW) consensus mechanism, which has a very high carbon footprint (in particular in the case of Bitcoin). This also makes such systems inapt for sustainable digital procurement implementations.

Therefore, sustainable blockchain solutions (ie private & permissioned, based on proof-of-stake (PoS) or similar consensus mechanisms) are likely to present very limited advantages for procurement implementation over advanced systems of database management—and, possibly, even more generally (see eg this interesting critical paper by Low & Mik, 2019).

Moreover, even if there were a way to work around those constraints and design a viable technical solution, that by itself would still not fix the underlying procurement policy complexity, which will necessarily impose constraints on technologies that require deterministic coding, eg:

  • Tenders on a blockchain - the proposals to use blockchain for the implementation of the tender procedure itself are, in my opinion, very limited by the difficulty of structuring all requirements on the basis of IF/THEN statements (see here, as well as the toy sketch after this list).

  • Smart (public) contracts - the same constraints apply to smart contracts (see here and here).

  • Blockchain as an information exchange platform (Mélon, 2019, on file) - the proposals to use blockchain mechanisms to exchange information on best practices and tender documentation of successful projects could serve to address some of the confidentiality issues that could arise with ‘standard’ databases. However, regardless of the technical support to the exchange of information, the complexity in identifying best practices and in ensuring their replicability remains. This is evidenced by the European Commission’s Initiative for the exchange of information on the procurement of Large Infrastructure Projects (discussed here when it was announced), which has not been used at all in its first two years (as of 6 November 2019, there were no publicly-available files in the database).
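
The toy sketch below tries to make the IF/THEN constraint tangible. Objective, machine-verifiable admissibility conditions encode cleanly as deterministic logic, whereas a qualitative award criterion has no such encoding; the example is purely illustrative and assumes invented field names.

```python
# Toy illustration of the IF/THEN constraint. Objective, machine-verifiable
# conditions translate naturally into deterministic (smart-contract-style)
# logic; qualitative award criteria do not.

def admissible(bid: dict) -> bool:
    # These checks encode cleanly as IF/THEN rules.
    return (bid["submitted_on_time"]
            and bid["deposit_paid"]
            and bid["price_eur"] <= 100_000)

def technical_merit(bid: dict) -> float:
    # 'Quality of the proposed methodology' has no IF/THEN encoding: any
    # score is a human (or contested ML) input fed in from off-chain, which
    # defeats the point of trustless on-chain execution.
    raise NotImplementedError("requires evaluative human judgement")
```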

9. Sustainable procurement of digital technologies

A final issue to take into consideration is that the procurement of digital technologies needs itself to incorporate sustainability considerations. However, this does not seem to be the case amid the hype and over-excitement about the experimentation with, and deployment of, those technologies.

Indeed, there are emerging guidelines on procurement of some digital technologies, such as AI (UK, 2019) (WEF, 2019) (see here for discussion). However, as could be expected, these guidelines are extremely technology-centric and their interaction with broader procurement policies is not necessarily straightforward.

I would argue that, in order for these technologies to enable more sustainable procurement, sustainability considerations need to be embedded not only in their application; they may well also require, eg, an earlier analysis of whether the life-cycle of existing solutions warrants their replacement, or of the long-term impacts of implementing digital technologies (eg in terms of life-cycle carbon footprint).

Pursuing technological development for its own sake can have significant environmental impacts that must be assessed.

10. Concluding thoughts

This (very long…) blog post has structured some of my thoughts on the interaction of sustainability and digitalisation in the context of public procurement. By way of conclusion, I would just try to translate this into priorities for policy-making (and research). Overall, I believe that the main area of effort for policy-makers should now be the creation of an enabling data architecture. Its design and regulation should thus be the focus of research in the short term. In the medium term, and as use cases become clearer in the policy-making sphere, research should move towards the design of digital technology-enabled solutions (for sustainable public procurement, but not only) and their regulation, governance and social impacts. The long term is too difficult for me to foresee, as there is too much uncertainty. I can only guess that we will cross that bridge when/if we get there…

AI & sustainable procurement: the public sector should first learn what it already owns

ⓒ Christophe Benoit (Flickr).

[This post was first published at the University of Bristol Law School Blog on 14 October 2019].

While carrying out research on the impact of digital technologies for public procurement governance, I have realised that the deployment of artificial intelligence to promote sustainability through public procurement holds some promise. There are many ways in which machine learning can contribute to enhance procurement sustainability.

For example, new analytics applied to open transport data can significantly improve procurement planning to support more sustainable urban mobility strategies, as well as the emergence of new models for the procurement of mobility as a service (MaaS). Machine learning can also be used to improve the logistics of public sector supply chains, as well as unlock new models of public ownership of eg cars. It can also support public buyers in identifying the green or sustainable public procurement criteria that will deliver the biggest improvements measured against any chosen key performance indicator, such as CO2 footprint, as well as support the development of robust methodologies for life-cycle costing.
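
On the life-cycle costing point, the underlying arithmetic is simple enough to sketch: a net present cost that adds discounted running and end-of-life costs to the acquisition price. This is a textbook formulation with assumed inputs, not a methodology endorsed by any of the sources discussed here; machine learning would come in when estimating the inputs (eg expected energy costs) at scale.

```python
def life_cycle_cost(acquisition, annual_costs, disposal, discount_rate):
    """Net present cost: acquisition + discounted running costs + discounted
    end-of-life cost. A textbook formulation with illustrative inputs."""
    npv_running = sum(cost / (1 + discount_rate) ** (year + 1)
                      for year, cost in enumerate(annual_costs))
    npv_disposal = disposal / (1 + discount_rate) ** len(annual_costs)
    return acquisition + npv_running + npv_disposal

# A cheap-to-buy but energy-hungry option vs a dearer but efficient one:
print(round(life_cycle_cost(10_000, [2_000] * 5, 500, 0.03)))  # ~19591
print(round(life_cycle_cost(13_000, [1_000] * 5, 300, 0.03)))  # ~17838
```

In this toy comparison, the option with the higher acquisition price turns out cheaper over its life cycle, which is exactly the kind of ranking reversal that robust LCC methodologies are meant to capture.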

However, it is also evident that artificial intelligence can only be effectively deployed where the public sector has an adequate data architecture. While advances in electronic procurement and digital contract registers are capable of generating that data architecture for the future, there is a significant problem concerning the digitalisation of information on the outcomes of past procurement exercises and the current stock of assets owned and used by the public sector. In this blog, I want to raise awareness about this gap in public sector information and to advocate for the public sector to invest in learning what it already owns as a potential major contribution to sustainability in procurement, in particular given the catalyst effect this could have for a more circular procurement economy.

Backward-looking data as a necessary evidence base

It is well known that the public sector’s management of procurement-related information is lacking. It is difficult enough to have access to information on ‘live’ tender procedures. Accessing information on contract execution and any contractual modifications has been nigh impossible until the very recent implementation of the increased transparency requirements imposed by the EU’s 2014 Public Procurement Package. Moreover, even where that information can be identified, there are significant constraints on the disclosure of competition-sensitive information or business secrets, which can also restrict access. This can be compounded in the case of procurement of assets subject to outsourced maintenance contracts, or in assets procured under mechanisms that do not transfer property to the public sector.

Accessing information on the outcomes of past procurement exercises is thus a major challenge. Where the information is recorded, it is siloed and compartmentalised. And, in any case, this is not public information and it is oftentimes only held by the private firms that supplied the goods or provided the services—with information on public works more likely to be, at least partially, under public sector control. This raises complex issues of business to government (B2G) data sharing, which is only a nascent area of practice and where the guidance provided by the European Commission in 2018 leaves many questions unanswered.

I will not argue here that all that information should be automatically and unrestrictedly publicly disclosed, as that would require some careful considerations of the implications of such disclosures. However, I submit that the public sector should invest in tracing back information on procurement outcomes for all its existing stock of assets (either owned, or used under other contractual forms)—or, at least, in the main categories of buildings and real estate, transport systems and IT and communications hardware. Such a database should then be made available to data scientists tasked with seeking all possible ways of optimising the value of that information for the design of sustainable procurement strategies.

In other words, in my opinion, if the public sector is to take procurement sustainability seriously, it should invest in creating a single, centralised database of the durable assets it owns as the necessary evidence base on which to seek to build more sustainable procurement policies. And it should then put that evidence base to good use.

More circular procurement economy based on existing stocks

In my view, some of the main advantages of creating such a database in the short-, medium- and long-term would be as follows.

In the short term, having comprehensive data on existing public sector assets would allow for the deployment of different machine learning solutions to seek, for example, to identify redundant or obsolete assets that could be reassigned or disposed of, or to reassess the efficiency of the existing investments eg in terms of levels of use and potential for increased sharing of assets, or in terms of the energy (in)efficiency derived from their use. It would also allow for a better understanding of potential additional improvements in eg maintenance strategies, as services could be designed taking the entirety of the relevant stock into consideration.
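
A crude first pass over such a database—before any machine learning proper—could already flag candidates for reassignment or disposal. The following sketch assumes a hypothetical asset register with utilisation and age fields; all identifiers, columns and thresholds are invented for illustration.

```python
import pandas as pd

# Hypothetical slice of the centralised asset register advocated above; in
# practice, the point is precisely that such a register rarely exists yet.
assets = pd.DataFrame({
    "asset_id": ["V-01", "V-02", "IT-07", "B-03"],
    "category": ["vehicle", "vehicle", "laptop", "building"],
    "utilisation": [0.05, 0.62, 0.10, 0.85],  # share of time in active use
    "age_years": [9, 3, 6, 40],
})

# Flag candidates for reassignment or disposal: low utilisation, non-trivial age.
flags = assets[(assets["utilisation"] < 0.15) & (assets["age_years"] > 5)]
print(flags[["asset_id", "category"]])
```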

In the medium term, this would also provide better insights on the whole life cycle of the assets used by the public sector, including the possibility of deploying machine learning to plan for timely maintenance and replacement, as well as to improve life cycle costing methodologies based on public-sector specific conditions. It would also facilitate the creation of a ‘public sector second-hand market’, where entities with lower levels of performance requirements could acquire assets no longer fit for their original purpose, eg computers previously used in more advanced tasks that still have sufficient capacity could be repurposed for routine administrative tasks. It would also allow for the planning and design of recycling facilities in ways that minimised the carbon footprint of the disposal.

In the long run, in particular post-disposal, the existence of the database of assets could unlock a more circular procurement economy, as the materials of disposed assets could be reused for the building of other assets. In that regard, there seem to be some quick wins to be had in the construction sector, but having access to more and better information would probably also serve as a catalyst for similar approaches in other sectors.

Conclusion

Building a database on existing public sector-used assets as the outcome of earlier procurement exercises is not an easy or cheap task. However, in my view, it would have transformative potential and could generate sustainability gains aimed not only at reducing the carbon footprint of future public expenditure but, more importantly, at correcting or somehow compensating for the current environmental impacts of the way the public sector operates. This could make a major difference in accelerating emissions reductions and should consequently be a matter of high priority for the public sector.

Reflecting on data-driven and digital procurement governance through two elephant tales

Elephants in a 13th century manuscript. THE BRITISH LIBRARY/ROYAL 12 F XIII

I have uploaded to SSRN the new paper ‘Data-driven and digital procurement governance: Revisiting two well-known elephant tales’ (21 Aug 2019), which I will present at the Annual Conference of the IALS Information Law & Policy Centre on 22 November 2019.

The paper condenses my current thoughts about the obstacles for the deployment of data-driven digital procurement governance due to a lack of reliable quality procurement data sources, as well as my skepticism about the potential for blockchain-based solutions, including smart contracts, to have a significant impact in a public procurement setting where the public buyer is extremely unlikely to give up centralised control of the procurement function. The abstract of the paper is as follows:

This paper takes the dearth of quality procurement data as an empirical point of departure to assess emerging regulatory trends in data-driven and digital public procurement governance and, in particular, the European Commission’s ambition for the single digital procurement market. It resorts to two well-known elephant tales to send a message of caution. It first appeals to the image of medieval bestiary elephants to stress the need to develop a better data architecture that reveals the real state of the procurement landscape, and for the European Commission to stop relying on bad data in the Single Market Scoreboard. The paper then assesses the promises of blockchain and smart contracts for procurement governance and raises the prospect that these may be new white elephants that do not offer significant advantages over existing sophisticated databases, or beyond narrow back-office applications—which leaves a number of unanswered questions regarding the desirability of their implementation. The paper concludes by advocating for EU policymakers to concentrate on developing an adequate data architecture to enable digital procurement governance.

If nothing else, I hope the two elephant tales are convincing.

Legal text analytics: some thoughts on where (I think) things stand

Researching the area of artificial intelligence and the law (AI & Law) has currently taken me to the complexities of natural language processing (NLP) applied to legal texts (aka legal text analytics). Trying to understand the extent to which AI can be used to perform automated legal analysis—or, more modestly, to support humans in performing legal analysis—requires (at least) a view of the current possibilities for AI tools to (i) extract information from legal sources (or ‘understand’ them and their relationships), (ii) assess their relevance to a given legal problem and (iii) apply the legal source to provide a legal solution to the problem (or to suggest one for human validation).

Of course, this leaves aside other issues such as the need for AI to be able to understand the factual situation to formulate the relevant legal problem, to assess or rank different legal solutions where available, or to take into account additional aspects such as the likelihood of obtaining a remedy, etc—all of which could be tackled by fields of AI & Law different from legal text analytics. The above also ignores other aspects of ‘understanding’ documents, such as the ability for an algorithm to distinguish factual and legal issues within a legal document (ie a judgment) or to extract basic descriptive information (eg being able to create a citation based on the information in the judgment, or to cluster different types of provisions within a contract or across contracts)—some of which seem to be at hand or soon to be developed on the basis of the recently released Google ‘Document Understanding AI’ tool.

The latest issue of Artificial Intelligence and the Law luckily concentrates on ‘Natural Language Processing for Legal Texts’ and offers some help in trying to understand where things currently stand regarding issues (i) and (ii) above. In this post, I offer some reflections based on my understanding of two of the papers included in the special issue: Nanda et al (2019) and Chalkidis & Kampas (2019). I may have gotten the specific technical details wrong (although I hope not), but I think I got the functional insights.

Establishing relationships between legal sources

One of the problems that legal text analytics is trying to solve concerns establishing relationships between different legal sources—which can be a partial aspect of the need to ‘understand’ them (issue (i) above). This is the main problem discussed in Nanda et al, 'Unsupervised and supervised text similarity systems for automated identification of national implementing measures of European directives' (2019) 27(2) Artificial Intelligence and Law 199-225. In this piece of research, AI is used to establish whether a provision of a national implementing measure (NIM) transposes a specific article of an EU Directive or not. In extremely simplified terms, the researchers train different algorithms to perform text comparison. The researchers work on a closed list of 43 EU Directives and the corresponding Luxembourgian, Irish and Italian NIMs. The following table plots their results.

[Figure: performance of the text-similarity systems; Nanda et al (2019: 208, Figure 6)]

The table shows that the best AI solution developed by the researchers (the TF-IDF cosine) achieves levels of precision of around 83% for Luxembourg, 77% for Italy and 68% for Ireland. These seem like rather impressive results but a qualitative analysis of their experiment indicates that the significantly better performance for Luxembourgian transposition over Italian or Irish transposition likely results from the fact that Luxembourg tends to largely ‘copy & paste’ EU Directives into national law, whereas the Italian and Irish legislators adopt a more complex approach to the integration of EU rules into their existing legal instruments.
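
For readers unfamiliar with the technique, the following is a minimal sketch of TF-IDF cosine similarity between two toy provisions, using scikit-learn. It illustrates the general approach only; it is not the researchers' actual pipeline, and the texts are invented.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Two invented 'provisions': an EU directive article and a candidate NIM provision.
directive_art = ("Member States shall ensure that contracting authorities "
                 "publish contract notices by electronic means.")
nim_provision = ("Contracting authorities must publish all procurement "
                 "notices electronically.")

# Represent each text as a TF-IDF vector and compare them by cosine similarity.
vectoriser = TfidfVectorizer().fit([directive_art, nim_provision])
tfidf = vectoriser.transform([directive_art, nim_provision])
score = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

# Above some tuned threshold, the pair would be labelled a transposition match.
print(f"cosine similarity: {score:.2f}")
```

This also makes the 'copy & paste' finding intuitive: near-verbatim transposition yields vectors that are almost identical, so cosine similarity flags the match easily, whereas substantive redrafting drives the score down.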

Moreover, it should be noted that the algorithms are working on a very specific issue, as they are only assessing the correspondence between provisions of EU and NIM instruments that were related—that is, they are operating in a closed or walled dataset that does not include NIMs that do not transpose any of the 43 chosen Directives. Once these aspects of the research design are taken into account, there are a number of unanswered questions, such as the precision that the algorithms would have if they had to compare entire NIMs against an open-ended list of EU Directives, or if they were used to screen for transposition rules. While the first issue could probably be answered simply by extending the experiment, the second issue would probably require a different type of AI design.

On the whole, my impression after reading this interesting piece of research is that AI is still relatively far from a situation where it can provide reliable answers to the issue of establishing relationships across legal sources, particularly if one thinks of relatively more complex relationships than transposition within the EU context, such as development, modification or repeal of a given set of rules by other (potentially dispersed) rules.

Establishing relationships between legal problems and legal sources

A separate but related issue requires AI to identify legal sources that could be relevant to solve a specific legal problem (issue (ii) above)—that is, the relevant relationship is not across legal sources (as above), but between a legal problem or question and relevant legal sources.

This is covered in part of the literature review included in Chalkidis & Kampas, ‘Deep learning in law: early adaptation and legal word embeddings trained on large corpora‘ (2019) 27(2) Artificial Intelligence and Law 171-198 (see esp 188-194), where they discuss some of the solutions given to the task of the Competition on Legal Information Extraction/Entailment (COLIEE) from 2014 to 2017, which focused ‘on two aspects related to a binary (yes/no) question answering as follows: Phase one of the legal question answering task involves reading a question Q and extract[ing] the legal articles of the Civil Code that are relevant to the question. In phase two the systems should return a yes or no answer if the retrieved articles from phase one entail or not the question Q’.

The paper covers four different attempts at solving the task. It reports that the AI solutions developed to address the two binary questions achieved the following levels of precision: 66.67% (Morimoto et al. (2017)); 63.87% (Kim et al. (2015)); 57.6% (Do et al. (2017)); 53.8% (Nanda et al. (2017)). Once again, these results are rather impressive but some contextualisation may help to assess the extent to which this can be useful in legal practice.

The best AI solution was able to identify relevant provisions that entailed the relevant question 2 out of 3 times. However, the algorithms were once again working on a closed or walled field because they solely had to search for relevant provisions in the Civil Code. One can thus wonder whether algorithms confronted with the entirety of a legal order would be able to reach even close degrees of accuracy.

Some thoughts

Based on the current state of legal text analytics (as far as I can see it), it seems clear that AI is far from being able to perform independent/unsupervised legal analysis and provide automated solutions to legal problems (issue (iii) above) because there are still very significant shortcomings concerning issues of ‘understanding’ natural language legal texts (issue (i)) and adequately relating them to specific legal problems (issue (ii)). That should not be surprising.

However, what also seems clear is that AI is very far from being able to confront the vastness of a legal order and that, much like lawyers themselves, AI tools need to specialise and operate within the narrower boundaries of sub-domains or quite contained legal fields. When that is the case, AI can achieve much higher degrees of precision—see examples of information extraction precision above 90% in Chalkidis & Kampas (2019: 194-196) in projects concerning Chinese credit fraud judgments and Canadian immigration rules.

Therefore, the current state of legal text analytics seems to indicate that AI is (quickly?) reaching a point where algorithms can be used to extract legal information from natural language text sources within a specified legal field (which needs to be established through adequate supervision) in a way that allows it to provide fallible or incomplete lists of potentially relevant rules or materials for a given legal issue. However, this still requires legal experts to complement the relevant searches (to bridge any gaps) and to screen the proposed materials for actual relevance. In that regard, AI does hold the promise of much better results than previous expert systems and information retrieval systems and, where adequately trained, it can support and potentially improve legal research (ie cognitive computing, along the lines developed by Ashley (2017)). However, in my view, there are extremely limited prospects for ‘independent functionality’ of legaltech solutions. I would happily hear arguments to the contrary, though!

Procurement governance and complex technologies: a promising future?

Thanks to the UK’s Procurement Lawyers’ Association (PLA) and in particular Totis Kotsonis, on Wednesday 6 March 2019, I will have the opportunity to present some of my initial thoughts on the potential impact of complex technologies on procurement governance.

In the presentation, I will aim to critically assess the impacts that complex technologies such as blockchain (or smart contracts), artificial intelligence (including big data) and the internet of things could have for public procurement governance and oversight. Taking the main risks of maladministration of the procurement function (corruption, discrimination and inefficiency) on which procurement law is based as the analytical point of departure, the talk will explore the potential improvements of governance that different complex technologies could bring, as well as any new governance risks that they could also generate.

The slides I will use are at the end of this post. Unfortunately, the hyperlinks do not work, so please email me if you are interested in a fully-accessible presentation format (a.sanchez-graells@bristol.ac.uk).

The event is open to non-PLA members. So if you are in London and fancy joining the conversation, please register following the instructions in the PLA’s event page.