3 priorities for policy-makers thinking of AI and machine learning for procurement governance


I find that carrying out research in the digital technologies and governance field can be overwhelming. And that is for an academic currently enjoying the luxury of full-time research leave… so I can only imagine how much more overwhelming it must be for policy-makers considering the adoption of artificial intelligence (AI) and machine learning for procurement governance, who need to identify potential use cases and establish viable deployment strategies.

Prioritisation seems particularly complicated, as managing such a significant change requires careful planning and paying attention to a wide variety of potential issues. However, getting prioritisation right is probably the best way of increasing the chances of success for the deployment of digital technologies for procurement governance — as well as in other areas of Regtech, such as financial supervision.

This interesting speech by James Proudman (Executive Director of UK Deposit Takers Supervision, Bank of England) on 'Managing Machines: the governance of artificial intelligence', precisely focuses on such issues. And I find the conclusions particularly enlightening:

First, the observation that the introduction of AI/ML poses significant challenges around the proper use of data, suggests that boards should attach priority to the governance of data – what data should be used; how should it be modelled and tested; and whether the outcomes derived from the data are correct.

Second, the observation that the introduction of AI/ML does not eliminate the role of human incentives in delivering good or bad outcomes, but transforms them, implies that boards should continue to focus on the oversight of human incentives and accountabilities within AI/ML-centric systems.

And third, the acceleration in the rate of introduction of AI/ML will create increased execution risks during the transition that need to be overseen. Boards should reflect on the range of skill sets and controls that are required to mitigate these risks both at senior level and throughout the organisation.

These seem to me directly transferable to the context of procurement governance and the design of strategies for the deployment of AI and machine learning, as well as other digital technologies.

First, it is necessary to create an enabling data architecture and to put significant thought into how to extract value from the increasingly available data. In that regard, there are two opportunities that should not be missed. One concerns the treatment of procurement datasets as high-value datasets for the purposes of the special regime of the Open Data Directive (for more details, see section 6 here), which will require careful consideration of the content and level of openness of procurement data in the context of the domestic transpositions that need to be in place by 17 July 2021. The other, related opportunity concerns the implementation of the new rules on eForms for procurement data publications, which Member States need to adopt by 14 November 2022. Building on the data architecture that will result from both sets of changes—which should be coordinated—will allow for the deployment of data analytics and machine learning techniques. The purposes and goals of such deployments also need to be considered carefully, as well as their potential implications.

Second, it seems clear that the changes in the management of procurement data and the quick development of analytics that can support procurement decision-making pile additional training and upskilling needs on the existing (and partially unaddressed?) challenges of full consolidation of eProcurement across the EU. Moreover, it should be clear that there is no such thing as an objective and value-neutral implementation of technological governance solutions, and that all levels of accountability need to be provided with adequate data skills and digital literacy upgrades in order to check what is being done at the technical level (for a crystal-clear discussion, see van der Voort et al, 'Rationality and politics of algorithms. Will the promise of big data survive the dynamics of public decision making?' (2019) 36(1) Government Information Quarterly 27-38). Otherwise, governance mechanisms would be at risk of failure due to techno-capture and/or techno-blindness, whether intended or accidental.

Third, there is an increasing need to manage change and the risks that come with it. In a notoriously risk-averse policy field such as procurement, this is no minor challenge. This should also prompt some rethinking of the way the procurement function is organised and of its risk-management mechanisms.

Addressing these priorities will not be easy or cheap, but they are the fundamental building blocks required to enable the public procurement sector to reap the benefits of digital technologies as they mature. In consultancy jargon, these are the priorities to ‘future-proof’ procurement strategies. Will they be adopted?

Postscript

It is worth adding that the first and second issues, in particular, lend themselves to strong collaborations between policy-makers and academics. As rightly pointed out by Pencheva et al, 'Big Data and AI – A transformational shift for government: So, what next for research?' (2018) Public Policy and Administration, advance access at 16:

... governments should also support the efforts for knowledge creation and analysis by opening up their data further, collaborating with – and actively seeking inputs from – researchers to understand how Big Data can be utilised in the public sector. Ultimately, the supporting field of academic thought will only be as strong as the public administration practice allows it to be.