Digital procurement governance: drawing a feasibility boundary

In the current context of rapid, generalised adoption of digital technologies across the public sector and strategic steers to accelerate the digitalisation of public procurement, decision-makers can be captured by techno-hype and the ‘policy irresistibility’ that can ensue from it (as discussed in detail here, as well as here).

To moderate those pressures and guide experimentation towards the successful deployment of digital solutions, decision-makers must reassess the realistic potential of those technologies in the specific context of procurement governance. They must also consider which enabling factors must be put in place to harness the potential of the digital technologies—which primarily relate to an enabling big data architecture (see here). Combined, the data requirements and the contextualised potential of the technologies will help decision-makers draw a feasibility boundary for digital procurement governance, which should inform their decisions.

In a new draft chapter (number 7) for my book project, I draw such a technology-informed feasibility boundary for digital procurement governance. This post provides a summary of my main findings, on which I will welcome any comments: a.sanchez-graells@bristol.ac.uk. The full draft chapter is free to download: A Sanchez-Graells, ‘Revisiting the promise: A feasibility boundary for digital procurement governance’, to be included in A Sanchez-Graells, Digital Technologies and Public Procurement. Gatekeeping and experimentation in digital public governance (OUP, forthcoming). Available at SSRN: https://ssrn.com/abstract=4232973.

Data as the main constraint

It will hardly be surprising to stress again that high-quality big data is a prerequisite for the development and deployment of digital technologies. All digital technologies of potential adoption in procurement governance are data-dependent. Therefore, without adequate data, there is no prospect of successful adoption of the technologies. The difficulties in generating an enabling procurement data architecture are detailed here.

Moreover, new data rules only regulate the capture of data for the future. This means that it will take time for big data to accumulate. Accessing historical data would be a way of building up (big) data and speeding up the development of digital solutions. Further, in some contexts, such as in relation to very infrequent types of procurement, or in relation to decisions concerning previous investments and acquisitions, historical data will be particularly relevant (eg to deploy green policies seeking to extend the use life of current assets through programmes of enhanced maintenance or refurbishment; see here). However, there are significant challenges linked to the creation of backward-looking digital databases, not only relating to the cost of digitisation of the information, but also to technical difficulties in ensuring the representativeness and adequate labelling of pre-existing information.

An additional issue to consider is that a number of governance-relevant insights can only be extracted from a combination of procurement and other types of data. This can include sources of data on potential conflicts of interest (eg family relations, or financial circumstances of individuals involved in decision-making), information on corporate activities and offerings, including detailed information on products, services and means of production (eg in relation to licensing or testing schemes), or information on levels of utilisation of public contracts and satisfaction with the outcomes by those meant to benefit from their implementation (eg users of a public service, or ‘internal’ users within the public administration).

To the extent that the outside sources of information are not digitised, or not in a way that is (easily) compatible or linkable with procurement information, some data-based procurement governance solutions will remain undeliverable. Some developments in digital procurement governance will thus be determined by progress in other policy areas. While there are initiatives to promote the availability of data in those settings (eg the EU’s Data Governance Act, the Guidelines on private sector data sharing, or the Open Data Directive), the voluntariness of many of those mechanisms raises important questions on the likely availability of data required to develop digital solutions.

Overall, there is no guarantee that the data required for the development of some (advanced) digital solutions will be available. A careful analysis of data requirements must thus be a point of concentration for any decision-maker from the very early stages of considering digitalisation projects.

Revised potential of selected digital technologies

Once (or rather, if) that major data hurdle is cleared, the possibilities realistically brought by the functionality of digital technologies need to be embedded in the procurement governance context, which results in the following feasibility boundary for the adoption of those technologies.

Robotic Process Automation (RPA)

RPA can reduce the administrative costs of managing pre-existing digitised and highly structured information in the context of entirely standardised and repetitive phases of the procurement process. RPA can reduce the time invested in gathering and cross-checking information and can thus serve as a basic element of decision-making support. However, RPA cannot increase the volume and type of information being considered (other than in cases where some available information was not being taken into consideration due to eg administrative capacity constraints), and it can hardly be successfully deployed in relation to open-ended or potentially contradictory information points. RPA will also not change or improve the processes themselves (unless they are redesigned with a view to deploying RPA).

This generates a clear feasibility boundary for RPA deployment, which will generally have as its purpose the optimisation of the time available to the procurement workforce to engage in information analysis rather than information sourcing and basic checks. While this can clearly bring operational advantages, it will hardly transform procurement governance.

Machine Learning (ML)

Developing ML solutions will pose major challenges, not only in relation to the underlying data architecture (as above), but also in relation to regulatory and governance requirements specific to public procurement. Where the operational management of procurement does not diverge from the equivalent function in the (less regulated) private sector, it will be possible to see the adoption or adaptation of similar ML solutions (eg in relation to category spend management). However, where there are regulatory constraints on the conduct of procurement, the development of ML solutions will be challenging.

For example, the need to ensure the openness and technical neutrality of procurement procedures will limit the possibilities of developing recommender systems other than in pre-procured closed lists or environments based on framework agreements or dynamic purchasing systems underpinned by electronic catalogues. Similarly, the intended use of the recommender system may raise significant legal issues concerning eg the exercise of discretion, which can limit their deployment to areas of information exchange or to merely suggestion-based tasks that could hardly replace current processes and procedures. Given the limited utility (or acceptability) of collaborative filtering recommender solutions (which is the predominant type in consumer-facing private sector uses, such as Netflix or Amazon), there are also constraints on the generality of content-based recommender systems for procurement applications, both at tenderer and at product/service level. This raises a further feasibility issue, as the functional need to develop a multiplicity of different recommenders not only reopens the issue of data sufficiency and adequacy, but also raises questions of (economic and technical) viability. Recommender systems would mostly only be feasible to adopt in highly centralised procurement settings. This could create a push for further procurement centralisation that is not neutral from a governance perspective, and that can certainly generate significant competition issues of a similar nature, but perhaps a different order of magnitude, than procurement centralisation in a less digitally advanced setting. This should be carefully considered, as the knock-on effects of the implementation of some ML solutions may only emerge down the line.

Similarly, the development and deployment of chatbots is constrained by specific regulatory issues, such as the need to deploy closed-domain chatbots (as opposed to open-domain chatbots, ie chatbots connected to the Internet, such as virtual assistants built into smartphones), so that the information they draw from can be controlled and quality assured in line with duties of good administration and other legal requirements concerning the provision of information within tender procedures. Chatbots are only suited to high-volume, information-based types of queries. They would have limited applicability in relation to the specific characteristics of any given procurement procedure, as preparing the specific information to be used by the chatbot would be a challenge—with the added functionality of the chatbot being marginal. Chatbots could facilitate access to pre-existing and curated simple information, but their functionality would quickly hit a ceiling as the complexity of the information progressed. Chatbots would only be able to perform at a higher level if they were plugged into a knowledge base created as an expert system. But then, again, in that case their added functionality would be marginal. Ultimately, the practical space for the development of chatbots is limited to low added-value information access tasks. Again, while this can clearly bring operational advantages, it will hardly transform procurement governance.

ML could facilitate the development and deployment of ‘advanced’ automated screens, or red flags, which could identify patterns of suspicious behaviour to then be assessed against the applicable rules (eg administrative and criminal law in the case of corruption, or competition law, potentially including criminal law, in the case of bid rigging) or policies (eg in relation to policy requirements to comply with specific targets in relation to a broad variety of goals). The trade-off in this type of implementation is between the potential (accuracy) of the algorithmic screening and legal requirements on the explainability of decision-making (as discussed in detail here). Where the screens were not used solely for policy analysis, but acting on the red flag carried legal consequences (eg fines, or even criminal sanctions), the suitability of specific types of ML solutions (eg unsupervised learning solutions tantamount to a ‘black box’) would be doubtful, challenging, or altogether excluded. In any case, the development of ML screens capable of significantly improving over RPA-based automation of current screens is particularly dependent on the existence of adequate data, which is still proving an insurmountable hurdle in many an intended implementation (as above).

Distributed ledger technology (DLT) systems and smart contracts

Other procurement governance constraints limit the prospects of wholesale adoption of DLT (or blockchain) technologies, other than for relatively limited information management purposes. The public sector can hardly be expected to adopt DLT solutions that are not heavily permissioned, and that do not include significant safeguards to protect sensitive, commercially valuable, and other types of information that cannot be simply put in the public domain. This means that the public sector is only likely to implement highly centralised DLT solutions, with the public sector granting permissions to access and amend the relevant information. While this can still generate some (degrees of) tamper-evidence and permanence of the information management system, the net advantage is likely to be modest when compared to other types of secure information management systems. This can have an important bearing on decisions whether DLT solutions meet cost effectiveness or similar criteria of value for money controlling their piloting and deployment.

The value proposition of DLT solutions could increase if they enabled significant procurement automation through smart contracts. However, there are massive challenges in translating procurement procedures to a strict ‘if/when ... then’ programmable logic, smart contracts have limited capability that is not commensurate with the volumes and complexity of procurement information, and their development would only be justified in contexts where a given smart contract (ie specific programme) could be used in a high number of procurement procedures. This limits their scope of applicability to standardised and simple procurement exercises, which creates a functional overlap with some RPA solutions. Even in those settings, smart contracts would pose structural problems in terms of their irrevocability or automaticity. Moreover, they would be unable to generate off-chain effects, and this would not be easily sorted out even with the inclusion of internet of things (IoT) solutions or software oracles. This largely restricts smart contracts to an information exchange mechanism, which does not significantly increase the value added by DLT plus smart contract solutions for procurement governance.
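To see why the translation to a strict ‘if/when ... then’ logic is so constraining, consider a minimal sketch of a payment-release rule (in Python rather than an actual smart contract language, and with names and conditions invented for illustration): every input must be a verified fact fed to the contract, since the contract itself cannot observe off-chain reality without an oracle.

```python
def release_payment(delivery_confirmed: bool, invoice_amount: float,
                    contract_ceiling: float) -> bool:
    """Strictly deterministic 'if/when ... then' rule: pay if, and only if,
    delivery is confirmed and the invoice stays within the contract ceiling."""
    # 'delivery_confirmed' would have to come from an oracle or IoT device;
    # the contract can neither check the off-chain world itself nor effect
    # an off-chain payment.
    return delivery_confirmed and invoice_amount <= contract_ceiling
```

Anything requiring discretion (eg assessing whether delivery was satisfactory, rather than merely confirmed) cannot be reduced to such a rule, which is part of what confines smart contracts to standardised and simple exercises.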

Conclusion

To conclude, there are significant and difficult-to-solve hurdles in generating an enabling data architecture, especially for digital technologies that require multiple sources of information or data points regarding several phases of the procurement process. Moreover, the realistic potential of most technologies primarily concerns the automation of tasks not involving data analysis or the exercise of procurement discretion, but rather relatively simple information cross-checks or exchanges. Linking back to the discussion in the earlier broader chapter (see here), the analysis above shows that a feasibility boundary emerges whereby the adoption of digital technologies for procurement governance can make contributions in relation to its information intensity, but not easily in relation to its information complexity, at least not in the short to medium term and not in the absence of a significant improvement of the required enabling data architecture. Perhaps in more direct terms, in the absence of a significant expansion in the collection and curation of data, digital technologies can allow procurement governance to do more of the same or to do it quicker, but they cannot enable better procurement driven by data insights, except in relatively narrow settings. Such settings are characterised by centralisation. Therefore, the deployment of digital technologies can be a further source of pressure towards procurement centralisation, which is not a neutral development in governance terms.

This feasibility boundary should be taken into account in considering potential use cases, as well as serve to moderate the expectations that come with the technologies and that can fuel ‘policy irresistibility’. Further, it should be stressed that those potential advantages do not come without their own additional complexities in terms of new governance risks (eg data and data systems integrity, cybersecurity, skills gaps) and requirements for their mitigation. These will be explored in the next stage of my research project.

Procurement recommenders: a response by the author (García Rodríguez)

It has been refreshing to receive a detailed response from the lead author of one of the papers I recently discussed on the blog (see here). Big thanks to Manuel García Rodríguez for following up and for his frank and constructive engagement. His full comments are below. I think they will help round out the discussion on the potential, constraints and data-dependency of procurement recommender systems.

Thank you Prof. Sánchez Graells for your comments, it has been rewarding reading. Below I present my point of view to continue taking an in-depth look at the topic.

Regarding the percentage of success of the recommender, a major initial consideration is that the recommender is generic. That is, it is not restricted to a type of contract, CPV codes, geographical area, etc. It is a recommender that uses all types of Spanish tenders, from any CPV and over 6 years (see table 3). This greatly influences the percentage of success because it is the most difficult scenario. An easier scenario would have restricted the browser to certain geographic areas or CPVs, for example. In addition, 102,000 tenders were used in this study and, presumably, they are not enough for a search engine which learns business behaviour patterns from historical tenders (more tenders could not be used due to poor data quality).

Regarding the comment that ‘the recommender is an effective tool for society because it enables and increases the bidders participation in tenders with less effort and resources’. With this phrase we mean that the Administration can have an assistant to encourage participation (in the tenders which are negotiations with or without prior notice) or, even, in which the civil servants actively search for companies and inform those companies directly. I do not know if the public contracting laws of the European countries allow actively searching for and directly informing companies, but it would be the most efficient and reasonable approach. On the other hand, a good recommender (one that has a high percentage of accuracy) can be an analytical tool to evaluate the level of competition achieved by contracting authorities. That is, if the tenders of a contracting authority attract very little competition but the recommender finds many potential participating companies, it means that the contracting authority can make its tenders more attractive for the market.

Regarding the comment that “It is also notable that the information of the Companies Register is itself not (and probably cannot be, period) checked or validated, despite the fact that most of it is simply based on self-declarations.” The information in the Spanish Business Register is the annual accounts of the companies, audited by an external entity. I do not know the auditing laws of the different countries. Therefore, I think that the reliability of the data in our article is quite high.

Regarding the first problematic aspect that you indicate: “The first one is that the recommender seems by design incapable of comparing the functional capabilities of companies with very different structural characteristics, unless the parameters for the filtering are given such range that the basket of recommendations approaches four digits”. There will always be the difficulty of comparing companies and defining when they are similar. That analysis should be done by economists; engineers can contribute little. There is also the limitation of business data: the information in the Business Register is usually paywalled and limited to certain fields, as is the case with the Spanish Business Registry. For these reasons, we recognise in the article that it is a basic approach and that the filters/rules should be modified in the future: “Creating this profile to search similar companies is a very complex issue, which has been simplified. For this reason, the searching phase (3) has basic filters or rules. Moreover, it is possible to modify or add other filters according to the available company dataset used in the aggregation phase”.

Regarding the second problematic aspect that you indicate: “The second issue is that a recommender such as this one seems quite vulnerable to the risk of perpetuating and exacerbating incumbency advantages, and/or of consolidating geographical market fragmentation (given the importance of eg distance, which cannot generate the expected impact on eg costs in all industries, and can increasingly be entirely irrelevant in the context of digital/remote delivery).” This will not happen in the medium and long term because the recommender will adapt to market conditions. If there are companies that win bids far away, the algorithm will include that new distance range in its search. It will always be based on the historical winner companies (and the rest of the bidders if we have that information). You cannot ask a machine learning algorithm (the one used in this article) to make predictions not based on the previous winners and historical market patterns.

I totally agree with your final comment: “It would in my view be preferable to start by designing the recommender system in a way that makes theoretical sense and then make sure that the required data architecture exists or is created.” Unfortunately, I did not find any articles that discuss this topic. Lawyers, economists and engineers must work together to propose solid architectures. In this article we want to convince stakeholders that it is possible to create software tools such as a bidder recommender and the importance of public procurement data and the company’s data in the Business Registers for its development.

Thank you for your critical review. Different approaches are needed to improve on the important topic of public procurement.

Procurement recommender systems: how much better before we trust them? -- re García Rodríguez et al (2020)


How great would it be for a public buyer if an algorithm could identify the likely best bidder/s for a contract it sought to award? Pretty great, agreed.

For example, it would allow targeted advertising or engagement of public procurement opportunities to make sure those ‘best suited’ bidders came forward, or to start negotiations where this is allowed. It could also enable oversight bodies, such as competition authorities, to screen for odd (anti)competitive situations where well-placed providers did not bid for the contract, or only did in worse than expected conditions. If the algorithm was flipped, it would also allow potential bidders to assess for which tenders they are particularly well suited (or not).

It is thus not surprising that there are commercial attempts being developed (eg here) and interesting research going on trying to develop such recommender systems—which, at root, work similarly to recommender systems used in e-commerce (Amazon) or digital content platforms (Netflix, Spotify), in the sense that they try to establish which of the potential providers are most likely to satisfy user needs.

An interesting paper

On this issue, on which there has been some research for at least a decade (see here), I found this paper interesting: García Rodríguez et al, ‘Bidders Recommender for Public Procurement Auctions Using Machine Learning: Data Analysis, Algorithm, and Case Study with Tenders from Spain’ (2020) Complexity Art 8858258.

The paper is interesting in the way it builds the recommender system. It follows three steps. First, an algorithm trained on past tenders is used to predict the winning bidder for a new tender, given some specific attributes of the contract to be awarded. Second, the predicted winning bidder is matched with its data in the Companies Register, so that a number of financial, workforce, technical and location attributes are linked to the prediction. Third and final, the recommender system is used to identify companies similar to the predicted winner. Such identification is based on similarities with the added attributes of the predicted winner, which are subject to some basic filters or rules. In other words, the comparison is carried out at supplier level, not directly in relation to the object of the contract.
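As a rough sketch of that three-step logic (emphatically not the paper's actual implementation: the classifier is stubbed out, and all company data and field names are invented for the example), the pipeline could be wired together as follows:

```python
from math import dist

# Hypothetical toy register; fields are illustrative, not the paper's schema.
company_register = {
    "A": {"operating_income": 5.0, "employees": 40, "nace": "4120"},
    "B": {"operating_income": 4.8, "employees": 35, "nace": "4120"},
    "C": {"operating_income": 0.3, "employees": 4,  "nace": "6201"},
}

def predict_winner(tender):
    """Step 1: stand-in for the classifier trained on past tenders
    (here, a fixed guess for illustration)."""
    return "A"

def similar_companies(winner_id, k=2):
    """Steps 2-3: look up the predicted winner's attributes in the register,
    then rank the other companies by closeness of those attributes."""
    w = company_register[winner_id]
    others = [c for c in company_register if c != winner_id]
    def distance(cid):
        c = company_register[cid]
        gap = dist((w["operating_income"], w["employees"]),
                   (c["operating_income"], c["employees"]))
        return gap + (0 if c["nace"] == w["nace"] else 100)  # crude code-match penalty
    return sorted(others, key=distance)[:k]

winner = predict_winner({"cpv": "45210000"})
basket = [winner] + similar_companies(winner)
print(basket)  # "B" ranks above "C": its supplier profile is closer to "A"
```

Note how the comparison happens entirely at supplier level (closeness to the predicted winner's registered attributes), never against the object of the contract itself.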

Importantly, such filters to sieve through the comparison need to be given numerical values and that is done manually (i.e. set at rather random thresholds, which in relation to some categories, such as technical specialism, make little intuitive sense). This would in principle allow the user of the recommender system to tailor the parameters of the search for recommended bidders.

In the specific case study developed in the paper, the filters are:

  • Economic resources to finance the project (i.e. operating income, EBIT and EBITDA);

  • Human resources to do the work (i.e. number of employees);

  • Specialised work which the company can do (based on code classification: NACE2, IAE, SIC, and NAICS); and

  • Geographical distance between the company’s location and the tender’s location.

Notably, in the case study, distance ‘is a fundamental parameter. Intuitively, the proximity has business benefits such as lower costs’ (at 8).
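A minimal sketch of how such manually set filters might operate; the threshold values are invented for the example and do not come from the paper:

```python
# Hand-set filter thresholds; all values are illustrative assumptions.
FILTERS = {
    "min_operating_income": 1.0,   # EUR millions, proxy for economic capacity
    "min_employees": 10,           # human resources to do the work
    "required_nace": "4120",       # specialised work, by activity code
    "max_distance_km": 150,        # proximity to the tender's location
}

def passes_filters(company, tender_distance_km, f=FILTERS):
    """A company enters the basket only if it clears every threshold."""
    return (
        company["operating_income"] >= f["min_operating_income"]
        and company["employees"] >= f["min_employees"]
        and company["nace"] == f["required_nace"]
        and tender_distance_km <= f["max_distance_km"]
    )

candidates = [
    ({"operating_income": 5.0, "employees": 40, "nace": "4120"}, 30),
    ({"operating_income": 0.3, "employees": 4,  "nace": "4120"}, 30),   # too small
    ({"operating_income": 8.0, "employees": 90, "nace": "4120"}, 400),  # too far
]
print([passes_filters(c, d) for c, d in candidates])  # [True, False, False]
```

Because the thresholds are set by hand, the size and composition of the recommended basket is highly sensitive to them.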

The key accuracy metric for the recommender system is whether it is capable of identifying the actual winner of a contract as the likely winning bidder or, failing that, whether it is capable of including the actual winner within a basket of recommended bidders. Based on the available Spanish data, the performance of the recommender system is rather meagre.

The poor results can be seen in the two scenarios developed in the paper. In scenario 1, the training and test data are split 80:20 and the 20% is selected randomly. In scenario 2, the data is also split 80:20, but the 20% test data is the most recent one. As the paper stresses, ‘the second scenario is more appropriate to test a real engine search’ (at 13), in particular because the use of the recommender will always be ‘for the next tender’ after the last one included in the relevant dataset.
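The difference between the two evaluation scenarios can be sketched with toy data; only the 80:20 proportions mirror the paper, everything else is invented:

```python
import random

# Invented tender records with award dates (zero-padded so string order
# matches chronological order).
tenders = [{"id": i, "date": f"2020-{(i % 12) + 1:02d}"} for i in range(100)]

# Scenario 1: a random 20% held out for testing.
random.seed(0)
shuffled = random.sample(tenders, len(tenders))
train_1, test_1 = shuffled[:80], shuffled[80:]

# Scenario 2: the most recent 20% held out, which mimics real use, since the
# recommender is always applied to tenders after its training window.
by_date = sorted(tenders, key=lambda t: t["date"])
train_2, test_2 = by_date[:80], by_date[80:]

# In scenario 2, no test tender predates any training tender.
assert max(t["date"] for t in train_2) <= min(t["date"] for t in test_2)
```

A random split lets the model "see the future" through tenders contemporaneous with the test set, which is why scenario 2 is the harder and more honest benchmark.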

For that more realistic scenario 2, the recommender has an accuracy of 10.25% in correctly identifying the actual winner, and this only rises to 23.12% if the recommendation includes a basket of five companies. Even for the more detached-from-reality scenario 1, the accuracy of a single prediction is only 17.07%, and this goes up to 31.58% for 5-company recommendations. The most accurate performance with larger baskets of recommended companies only reaches 38.52% in scenario 1, and 30.52% in scenario 2, although the much larger number of recommended companies (approximating 1,000) also massively dilutes the value of the information.
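The reported figures are, in effect, top-k accuracy measurements: the share of tenders whose actual winner appears in the basket of k recommended companies. A minimal sketch with invented toy predictions:

```python
def top_k_accuracy(recommendations, actual_winners, k):
    """Share of tenders whose actual winner appears in the top-k basket."""
    hits = sum(
        actual in recs[:k]
        for recs, actual in zip(recommendations, actual_winners)
    )
    return hits / len(actual_winners)

# Invented ranked baskets and actual winners, for illustration only.
recs = [["A", "B", "C"], ["B", "A", "C"], ["C", "A", "B"], ["A", "C", "B"]]
actual = ["A", "C", "A", "B"]
print(top_k_accuracy(recs, actual, 1))  # 0.25
print(top_k_accuracy(recs, actual, 3))  # 1.0
```

As the sketch shows, accuracy necessarily improves as k grows, but so does the vagueness of the recommendation, which is exactly the dilution problem with near-1,000-company baskets.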

Comments

So, with the available information, the best performance of the recommender system creates about 1 in 10 chances of correctly identifying the most suitable provider, or 1 in 5 chances of having it included in a basket of 5 recommendations. Put the other way, the best performance of the realistic recommender is that it fails to identify the actual winner for a tender 9 out of 10 times, and it still fails 4 out of 5 times when it is given five chances.

I cannot say how this compares with non-automated searches based on looking at relevant company directories, other sources of industry intelligence or even the anecdotal experience of the public buyer, but these levels of accuracy could hardly justify the adoption of the recommender.

In that regard, the optimistic conclusion of the paper (‘the recommender is an effective tool for society because it enables and increases the bidders participation in tenders with less effort and resources’ at 17) is a little surprising.

The discussion of the limitations of the recommender system sheds some more light:

The main limitation of this research is inherent to the design of the recommender’s algorithm because it necessarily assumes that winning companies will behave as they behaved in the past. Companies and the market are living entities which are continuously changing. On the other hand, only the identity of the winning company is known in the Spanish tender dataset, not the rest of the bidders. Moreover, the fields of the company’s dataset are very limited. Therefore, there is little knowledge about the profile of other companies which applied for the tender. Maybe in other countries the rest of the bidders are known. It would be easy to adapt the bidder recommender to this more favourable situation (at 17).

The issue of the difficulty of capturing dynamic behaviour is well put. However, there are more problems (below) and the issue of disclosure of other participants in the tender is not straightforwardly to the benefit of a more accurate recommender system, unless there was not only disclosure of other bidders but also of the full evaluations of their tenders, which is an unlikely scenario in practice.

There is also the unaddressed issue of whether it makes sense to compare the specific attributes selected in the study, which it mostly does not, as the selection is driven by the available data rather than by theoretical considerations.

What is ultimately clear from the paper is that the data required for the development of a useful recommender is simply not there, either at all or with sufficient quality.

For example, it is notable that due to data quality issues, the database of past tenders shrinks from 612,090 recorded tenders to 110,987 usable ones, which further shrink to 102,087 due to further quality issues in matching the tender information with the Companies Register.

It is also notable that the information of the Companies Register is itself not (and probably cannot be, period) checked or validated, despite the fact that most of it is simply based on self-declarations. There is also an issue with the lag with which information is included and updated in the Companies Register—e.g. under Spanish law, company accounts for 2021 will only have to be registered over the summer of 2022, which means that a use of the recommender in late 2022 would be relying on information that is already a year old (as the paper itself hints, at 14).

And I also have the inkling that recommender systems such as this one would be problematic in at least two aspects, even if all the necessary data was available.

The first one is that the recommender seems by design incapable of comparing the functional capabilities of companies with very different structural characteristics, unless the parameters for the filtering are given such range that the basket of recommendations approaches four digits. For example, even if two companies were the closest ones in terms of their specialist technical competence (even if captured only by the very coarse and in themselves problematic codes used in the model)—which seems to be the best proxy for identifying suitability to satisfy the functional needs of the public buyer—they could significantly differ in everything else, especially if one of them is a start-up. Whether the recommender would put both in the same basket (of a useful size) is an empirical question, but it seems extremely unlikely.

The second issue is that a recommender such as this one seems quite vulnerable to the risk of perpetuating and exacerbating incumbency advantages, and/or of consolidating geographical market fragmentation (given the importance of eg distance, which cannot generate the expected impact on eg costs in all industries, and can increasingly be entirely irrelevant in the context of digital/remote delivery).

So, all in all, it seems like the development of recommender systems needs to be flipped on its head if data availability is driving design. It would in my view be preferable to start by designing the recommender system in a way that makes theoretical sense and then make sure that the required data architecture exists or is created. Otherwise, the adoption of suboptimal recommender systems would not only likely generate significant issues of technical debt (for a thorough warning, see Sculley et al, ‘Hidden Technical Debt in Machine Learning Systems’ (2015)), but also risk significantly worsening the quality (and effectiveness) of procurement decision-making. And any poor implementation in ‘real life’ would deal a severe blow to the prospects of sustainable adoption of digital technologies to support procurement decision-making.

The 'NHS Food Scanner' app as a springboard to explore the regulation of public sector recommender systems

In England, the Department of Health and Social Care (DHSC) offers an increasingly wide range of public health-related apps. One of the most recently launched is the ‘Food Scanner’, which aims to provide ‘swap suggestions, which means finding healthier choices for your family is easier than ever!’.

This is part of a broader public health effort to tackle, among other issues, child obesity, and is currently supported by a strong media push aimed primarily at parents. As the parent of two young children, this clearly caught my attention.

The background for this public health intervention is clear:

Without realising it, we are all eating too much sugar, saturated fat and salt. Over time this can lead to harmful changes on the inside and increases the risk of serious diseases in the future. Childhood obesity is a growing issue with figures showing that in England, more than 1 in 4 children aged 4-to 5-years-old and more than 1 in 3 children aged 10 and 11-years-old are overweight or obese.

The Be Food Smart campaign empowers families to take control of their diet by making healthier food and drink choices. The free app works by scanning the barcode of products, revealing the total sugar, saturated fat and salt inside and providing hints and tips for adults plus fun food detectives activities for kids.

No issues with that. My family and I could do with a few healthier choices. So I downloaded the app and started playing around.

As I scanned a couple of (unavoidably) branded products from the cupboard, I realised that the swaps were not for generic, alternative, healthier products, but for other branded products (often of a different brand). While this has the practical advantage of specifying the recommended healthier alternative in an ‘actionable’ manner for the consumer, it made the competition-lawyer part of my brain uneasy.

The proposed swaps were (necessarily) ranked and limited, with a ‘top 3’ immediately on display, and further swaps not too easy to spot (unless you scrolled down to the bottom). The swaps on offer also had a ‘like’ button with a counter (still showing very low numbers, probably because the app is very new), but those ‘likes’ did not seem to establish (or alter?) the ranking, as lower-ranked items could have higher like counts (in my limited experiment).

I struggled to make sense of how products are chosen and presented. This piqued my interest, so I looked at how the swaps ‘work’.

The in-app information explained that:

How do we do this?

We look into 3 aspects of the product that you have scanned:
1) Product name; so we can try and find similar products based on the words used within the name.
2) Ingredients list; so we can try and find similar products based on the ingredients of the product you have scanned.
3) Pack size; finally we look into the size of the product you have scanned so that, if you have scanned a 330ml can, we can try and show you another can-sized product rather than a 1 litre bottle.

How are they ordered?

We have a few rules as to what we show within the top 3. We reserve spaces for:
1) The same manufacturer; if you have scanned a particular brand we will do our best to try and find a healthier version of that same brand which qualifies for a good choice badge.
2) The same supermarket; if you have scanned a supermarket product we will again do our best to show you an alternative from the same store.
3) Partner products; there are certain products which team up with Change4life that we will try and show if they match the requirements of the products you have scanned.
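Taken at face value, the in-app explanation describes a slotted ranking. A purely illustrative reconstruction is sketched below; the app's actual algorithm is not public, and every data structure and product here is invented:

```python
# Hypothetical reconstruction of the slotted 'top 3' the app describes:
# one reserved slot each for same-manufacturer, same-supermarket and
# partner products, in that order, from an already-filtered swap list.

def top_three(scanned, healthier_swaps):
    """Fill the three reserved slots; a swap can occupy at most one slot."""
    slots, used = [], set()
    rules = [
        lambda s: s["manufacturer"] == scanned["manufacturer"],  # slot 1
        lambda s: s["supermarket"] == scanned["supermarket"],    # slot 2
        lambda s: s["partner"],                                  # slot 3
    ]
    for rule in rules:
        for swap in healthier_swaps:
            if swap["name"] not in used and rule(swap):
                slots.append(swap["name"])
                used.add(swap["name"])
                break
    return slots

scanned = {"manufacturer": "BrandA", "supermarket": "StoreX"}
swaps = [
    {"name": "BrandA Lite", "manufacturer": "BrandA", "supermarket": None, "partner": False},
    {"name": "StoreX Own", "manufacturer": "StoreX", "supermarket": "StoreX", "partner": False},
    {"name": "PartnerPop", "manufacturer": "BrandB", "supermarket": None, "partner": True},
]
print(top_three(scanned, swaps))  # ['BrandA Lite', 'StoreX Own', 'PartnerPop']
```

Even in this simplified form, the design question is visible: the third slot is reserved for ‘partner’ products regardless of how they compare on health grounds with other eligible swaps, which is precisely the preferencing concern discussed below.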

I could see that convenience and a certain element of ‘competition neutrality’ were clearly at play, but a few issues bothered me, especially as the interaction between the manufacturer and supermarket slots is not too clear and there is a primary but nebulous element of preferencing that I was not expecting in an app meant to provide product-based information. I could see myself spending the night awake, trying to find out how that ‘partnership’ is structured, what the conditions for participating are, whether there are any financial flows to the Department and/or to partner organisations, etc.

I also noticed some quirks or errors in the way information is processed and presented by the Food Scanner app, such as the exact same product (in a different format) being assigned different ‘red light’ classifications (see the Kellogg’s Corn Flakes example in the side bar). At a guess, these divergences may come from the fact that there is no single source for the relevant information (it would seem that ‘The nutrient data provided in the app is supplied by Brandbank and FoodSwitch’) and that no entity oversees the process and curates the data as necessary. In fact, DHSC’s terms and conditions for the Food Scanner app (at 6.10) explicitly state that ‘We do not warrant that any such information is true or accurate and we exclude all liability in respect of the accuracy, completeness, fitness for purpose or legality of that information’. Interesting…

It is also difficult to see how the different elements of the red light system (ie sugar vs saturated fat vs salt) are traded off against each other, as, sometimes, swapping a red/green/yellow product for a yellow/yellow/yellow product is recommended. Working out the scoring system behind such recommendations seems difficult, as there will necessarily be a trade-off between limiting (very) high levels of one of the elements and recommending products that are ‘not very healthy’ on all counts. There has to be a system behind this — in the end, there has to be an algorithm underpinning the app. But how does it work and what science informs it?
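One way to see why such trade-offs are hard to reverse-engineer is to posit a naive scoring rule. The weights below are entirely hypothetical (the app's actual weighting, if any, is unknown); they merely show how an all-yellow product can outscore a product with one red light:

```python
# Hypothetical traffic-light scoring: red penalised most, green not at all.
# Lower total = 'healthier' under this invented weighting.
WEIGHTS = {"green": 0, "yellow": 1, "red": 3}

def score(sugar, sat_fat, salt):
    return WEIGHTS[sugar] + WEIGHTS[sat_fat] + WEIGHTS[salt]

scanned = score("red", "green", "yellow")   # 3 + 0 + 1 = 4
swap = score("yellow", "yellow", "yellow")  # 1 + 1 + 1 = 3

# The all-yellow product 'wins' despite being mediocre on every count.
print(swap < scanned)  # True
```

Any such weighting embeds a nutritional judgement (how many yellows is one red worth?), which is exactly the kind of choice one would expect to be documented and scientifically justified.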

These are all questions I am definitely interested in exploring. However, I called it a night and planned to look for some help to investigate this properly (a small research project is in the making and I have recruited a fantastic research associate — keep an eye on the blog for more details). For now, I can only jot down a few thoughts on things that will be interesting to explore, to which I really have no direct answers.

The Food Scanner is clearly a publicly endorsed (and owned? developed?) recommender system. However, using a moderate research effort, it is very difficult to access useful details on how it works. There is no published algorithmic transparency template (that I could find). The in-app explanations of how the recommender system works raise more questions than they answer.

There is also no commitment by the DHSC to the information provided being ‘true or accurate’, not to mention complete. This displaces the potential liability and all the accountability for the information on display to (a) Brandbank, a commercial entity within the multinational Nielsen conglomerate, and (b) FoodSwitch, a data-technology platform developed by The George Institute for Global Health. The role of these two institutions, in particular concerning the ‘partnership’ between manufacturers and Change4life (now ‘Better Health’ and, effectively, the Office for Health Improvement & Disparities in the DHSC?), is unclear. It is also unclear whether the combination of the datasets operated by both entities can provide a sufficiently comprehensive representation of the products effectively available in England and, in any case, it seems clear to me that there is a high risk (or certainty) that ‘healthy products’ outside mass production/consumption are out of the equation. How this relates to broader aspects of competition, but also of public health policy, can only raise questions.

Additionally, all of this raises quite a few issues from the perspective of the trustworthiness that this type of app can command, as well as the broader competition law implications resulting from the operation of the Food Scanner.

And I am sure that more and more questions will come to mind as I spend more time obsessing about it.

Beyond the specificities of the case, it seems to me that the NHS Food Scanner app is a good springboard to explore the regulation of public sector recommender systems more generally — or, rather, some of the risks implicit in the absence of specific regulation and the difficulties in applying standard regulatory mechanisms (and, perhaps, especially competition law) in this context. Hopefully, there will be some interesting research findings to report by the summer. Stay tuned, and keep healthy!

Digital technologies, public procurement and sustainability: some exploratory thoughts


** This post is based on the seminar given at the Law Department of Pompeu Fabra University in Barcelona, on 7 November 2019. The slides for the seminar are available here. Please note that some of the issues have been rearranged. I am thankful to participants for the interesting discussion, and to Dr Lela Mélon and Prof Carlos Gómez Ligüerre for the kind invitation to participate in this activity of their research group on patrimonial law. I am also grateful to Karolis Granickas for comments on an earlier draft. The standard disclaimer applies.**


1. Introductory detour

The use of public procurement as a tool to further sustainability goals is not a new topic, but rather the object of a long-running discussion embedded in the broader setting of the use of procurement for the pursuit of horizontal or secondary goals—currently labelled smart or strategic procurement. The instrumentalisation of procurement for (quasi)regulatory purposes gives rise to a number of issues, such as: regulatory transfer; the distortion of the very market mechanisms on which procurement rules rely as a result of added regulatory layers and constraints; legitimacy and accountability issues; complex regulatory impact assessments; professionalisation issues; etc.

Discussions in this field are heavily influenced by normative and policy positions, which are not always clearly spelled out but still drive most of the existing disagreement. My own view is that the use of procurement for horizontal policies is not per se desirable. The simple fact that public expenditure can act as a lever/incentive to affect private (market) behaviour does not mean that it should be used for that purpose at every opportunity and/or in an unconstrained manner. Procurement should not be used in lieu of legislation or administrative regulation where it is a second-best regulatory tool. Embedding regulatory elements that can also achieve horizontal goals in the procurement process should only take place where it has clear synergies with the main goal of procurement: the efficient satisfaction of public sector needs and/or needs in the public interest. This generates a spectrum of potential uses of procurement of a different degree of desirability.

At one end, and at its least desirable, procurement can and is used as a trade barrier for economic protectionism. In my view, this should not happen. At the other end of the spectrum, at its most desirable, procurement can and is (sometimes) used in a manner that supports environmental sustainability and technical innovation. In my view, this should happen, and more than it currently does. In between these two ends, there are uses of procurement for the promotion of labour and social standards, as well as for the promotion of human rights. Controversial as this position is, in my view, the use of procurement for the pursuit of those goals should be subjected to strict proportionality analysis in order to make sure that the secondary goal does not prevent the main purpose of the efficient satisfaction of public sector needs and/or needs in the public interest.

From a normative perspective, thus, I think that there is a wide space of synergy between procurement and environmental sustainability—which goes beyond green procurement and extends to the use of procurement to support a more circular economy—and that this can be used more effectively than is currently the case, due to emerging innovative uses of digital technologies for procurement governance.

This is the topic on which I would like to concentrate, to formulate some exploratory thoughts. The following reflections are focused on the EU context, but hopefully they are of broader relevance. I first zoom in on the strategic priorities of fostering sustainability through procurement (2) and the digitalisation of procurement (3), as well as critically assess the current state of development of digital technologies for procurement governance (4). I then look at the interaction between both strategic goals, in terms of the potential for sustainable digital procurement (5), which leads to specific discussion of the need for an enabling data architecture (6), the potential for AI and sustainable procurement (7), the potential for the implementation of blockchains for sustainable procurement (8) and the need to refocus the emerging guidelines on the procurement of digital technologies to stress their sustainability dimension (9). Some final thoughts conclude (10).

2. Public procurement and sustainability

As mentioned above, the use of public procurement to promote sustainability is not a new topic. However, it has been receiving increasing attention in recent policy-making and legislative efforts (see eg this recent update)—though they are yet to translate into the level of practical change required to make a relevant contribution to pressing challenges, such as the climate emergency (for a good critique, see this recent post by Lela Mélon).

Facilitating the inclusion of sustainability-related criteria in procurement was one of the drivers for the new rules in the 2014 EU Public Procurement Package, which create a fairly flexible regulatory framework. Most remaining problems are linked to the implementation of such a framework, not its regulatory design. Cost, complexity and institutional inertia are the main obstacles to a broader uptake of sustainable procurement.

The European Commission is alive to these challenges. In its procurement strategy ‘Making Procurement work in and for Europe’ [COM(2017) 572 final; for a critical assessment, see here], the Commission stressed the need to facilitate and to promote the further uptake of strategic procurement, including sustainable procurement.

However, most of its proposals are geared towards the publication of guidance (such as the Buying Green! Handbook), standardised solutions (such as the library of EU green public procurement criteria) and the sharing of good practices (such as in this library of use cases) and training materials (eg this training toolkit). While these are potentially useful interventions, the main difficulty remains in their adoption and implementation at Member State level.

[Figure: Eco-Innovation Scoreboard]

While it is difficult to have a good view of the current situation (see eg the older studies available here, and the terrible methodology used for this 2015 PWC study for the Commission), it seems indisputable that there are massive differences across EU Member States in terms of sustainability-oriented innovation in procurement.

Taking as a proxy the differences that emerge from the Eco-Innovation Scoreboard, it seems clear that this very different level of adoption of sustainability-related eco-innovation is likely reflective of the different approaches followed by the contracting authorities of the different Member States.

Such disparities create difficulties for policy design and coordination, as the Commission acknowledges and as the limitations of its procurement strategy reflect. The main interventions are thus dependent on Member States (and their sub-units).

3. Public procurement digitalisation beyond e-Procurement

Similarly to the discussion above, the bidirectional relationship between the use of procurement as a tool to foster innovation, and the adaptation of procurement processes in light of technological innovations is not a new issue. In fact, the transition to electronic procurement (eProcurement) was also one of the main drivers for the revision of the EU rules that resulted in the 2014 Public Procurement Package, as well as the flanking regulation of eInvoicing and the new rules on eForms. eProcurement (broadly understood) is thus an area where further changes will come to fruition within the next 5 years (see timeline below).

[Figure: timeline of EU eProcurement, eInvoicing and eForms implementation]

However, even a maximum implementation of the EU-level eProcurement rules would still fall short of creating a fully digitalised procurement system. There are, indeed, several aspects where current technological solutions can enable a more advanced and comprehensive eProcurement system. For example, it is possible to automate larger parts of the procurement process and to embed compliance checks (eg in solutions such as the Prozorro system developed in Ukraine). It is also possible to use the data automatically generated by the eProcurement system (or otherwise consolidated in a procurement register) to develop advanced data analytics to support procurement decision-making, monitoring, audit and the deployment of additional screens, such as on conflicts of interest or competition checks.

Progressing the national eProcurement systems to those higher levels of functionality would already represent progress beyond the mandatory eProcurement baseline in the 2014 EU Public Procurement Package and the flanking initiatives listed above; and, crucially, enabling more advanced data analytics is one of the effects sought with the new rules on eForms, which aim to significantly increase the availability of (better) procurement data for transparency purposes.

Although it is an avenue mainly explored in other jurisdictions, and currently in the US context, it is also possible to create public marketplaces akin to Amazon/eBay/etc to generate a more user-friendly interface for different types of catalogue-based eProcurement systems (see eg this recent piece by Chris Yukins).

Beyond that, the (further) digitalisation of procurement is another strategic priority for the European Commission; not only for procurement’s sake, but also in the context of the wider strategy to create an AI-friendly regulatory environment and to use procurement as a catalyst for innovations of broader application – along lines of the entrepreneurial State (Mazzucato, 2013; see here for an adapted shorter version).

Indeed, the Commission has formulated a bold(er) vision for future procurement systems based on emerging digital technologies, in which it sees a transformative potential: “New technologies provide the possibility to rethink fundamentally the way public procurement, and relevant parts of public administrations, are organised. There is a unique chance to reshape the relevant systems and achieve a digital transformation” (COM(2017) 572 fin at 11).

Even though the Commission has not been explicit, it may be worth trying to map which of the currently emerging digital technologies could be of (more direct) application to procurement governance and practice. Based on the taxonomy included in a recent OECD report (2019a, Annex C), it is possible to identify the following types and specific technologies with potential procurement application:

AI solutions

  • Virtual Assistants (Chat bots or Voice bots): conversational, computer-generated characters that simulate a conversation to deliver voice- or text-based information to a user via a Web, kiosk or mobile interface. A VA incorporates natural-language processing, dialogue control, domain knowledge and a visual appearance (such as photos or animation) that changes according to the content and context of the dialogue. The primary interaction methods are text-to-text, text-to-speech, speech-to-text and speech-to-speech;

  • Natural language processing: this technology involves the ability to turn text or audio speech into encoded, structured information, based on an appropriate ontology. The structured data may be used simply to classify a document, as in “this report describes a laparoscopic cholecystectomy,” or it may be used to identify findings, procedures, medications, allergies and participants;

  • Machine Learning: the goal is to devise learning algorithms that do the learning automatically without human intervention or assistance;

  • Deep Learning: allows computational models that are composed of multiple processing layers to learn representations of data with multiple levels of abstraction;

  • Robotics: deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing;

  • Recommender systems: subclass of information filtering system that seeks to predict the "rating" or "preference" that a user would give to an item;

  • Expert systems: a computer system that emulates the decision-making ability of a human expert;

Digital platforms

  • Distributed ledger technology (DLT): a consensus of replicated, shared, and synchronized digital data geographically spread across multiple sites, countries, or institutions. There is no central administrator or centralised data storage. A peer-to-peer network is required, as well as consensus algorithms to ensure replication across nodes is undertaken. Blockchain is one of the most common implementations of DLT;

  • Smart contracts: a computer protocol intended to digitally facilitate, verify, or enforce the negotiation or performance of a contract;

  • IoT Platform: platform on which to create and manage applications, to run analytics, and to store and secure your data in order to get value from the Internet of Things (IoT);

Not all technologies are equally relevant to procurement—and some of them are interrelated in a manner that requires concurrent development—but these seem to me to be those with the highest potential to support the procurement function in the future. Their development need not take place solely, or primarily, in the context of procurement. Therefore, their assessment should be carried out in the broader setting of the adoption of digital technologies in the public sector.

4. Digital technologies & the public sector, including procurement

The emergence of the above-mentioned digital technologies is now seen as a potential solution to complex public policy problems, such as the promotion of more sustainable public procurement. Keeping track of all the potential use cases in the public sector is difficult, and the hype around buzzwords such as AI, blockchain or the internet of things (IoT) generates inflated claims of potential solutions to even some of the most wicked public policy problems (eg corruption).

This is reflective of the same hype in private markets, and in particular in financial and consumer markets, where AI is supposed to revolutionise the way we live, almost beyond recognition. There also seems to be an emerging race to the top (or rather, a copy-cat effect) in policy-making circles, as more and more countries adopt AI strategies in the hope of harnessing the potential of these technologies to boost economic growth.

In my view, these are immature technologies, and their likely development and usefulness is difficult to grasp beyond a relatively abstract level of potentiality. As such, I think they may be receiving excessive attention from policy-makers, and possibly also disproportionate levels of investment (diversion).

The implementation of digital technologies in the public sector faces a number of specific difficulties—not least, around data availability and data skills, as stressed in a recent OECD report (2019b). While it is probably beyond doubt that they will have an impact on public governance and the delivery of public services, it is more likely to be incremental rather than disruptive or revolutionary. Along these lines, another recent OECD report (2019c) stresses the need to take a critical look at the potential of artificial intelligence, in particular in relation to public sector use cases.

The OECD report (2019a) mentioned above shows how, despite these general strategies and the high levels of support at the top levels of policy-making, there is limited evidence of significant developments on the ground. This is the case, in particular, regarding the implementation of digital technologies in public procurement, where the OECD documents very limited developments (see table below).

[Table: OECD (2019a) evidence of digital technology use cases in public procurement]

Of course, this does not mean that we will not see more and more widespread developments in the coming years, but a note of caution is necessary if we are to embrace realistic expectations about the potential for significant changes resulting from procurement digitalisation. The following sections concentrate on the speculative analysis of such potential use of digital technologies to support sustainable procurement.

5. Sustainable digital procurement

Bringing together the scope for more sustainable public procurement (2), the progressive digitalisation of procurement (3), and the emergence of digital technologies susceptible of implementation in the public sector (4), the combined strategic goal (or ideal) would be to harness the potential of digital technologies to promote (more) sustainable procurement. This is a difficult exercise, surrounded by uncertainty, so the rest of this post is all speculation.

In my view, there are different ways in which digital technologies can be used for sustainability purposes. The contribution that each digital technology (DT) can make depends on its core functionality. In simple functional terms, my understanding is that:

  • AI is particularly apt for the massive processing of (big) data, as well as for the implementation of data-based machine learning (ML) solutions and the automation of some tasks (through so-called robotic process automation, RPA);

  • Blockchain is apt for the implementation of tamper-resistant/evident decentralised data management;

  • The internet of things (IoT) is apt to automate the generation of some data and (could be?) apt to breach the virtual/real frontier through oracle-enabled robotics.

The timeline that we could expect for the development of these solutions is also highly uncertain, although there are expectations for some technologies to mature within the next four years, whereas others may still take closer to ten years.

© Gartner, Aug 2018.


Each of the core functionalities or basic strengths of these digital technologies, as well as their rate of development, will determine a higher or lower likelihood of successful implementation in the area of procurement, which is a highly information/data-sensitive area of public policy and administration. Therefore, it seems unavoidable to first look at the need to create an enabling data architecture as a priority (and pre-condition) to the deployment of any digital technologies.

6. An enabling data architecture as a priority

The importance of the availability of good quality data in the context of digital technologies cannot be over-emphasised (see eg OECD, 2019b). This is also clear to the European Commission, which has included the need to improve the availability of good quality data among its strategic priorities. Indeed, the Commission stressed that “Better and more accessible data on procurement should be made available as it opens a wide range of opportunities to assess better the performance of procurement policies, optimise the interaction between public procurement systems and shape future strategic decisions” (COM(2017) 572 fin at 10-11).

However, despite the launch of a set of initiatives that seek to improve the existing procurement data architecture, there are still significant difficulties in the generation of data [for discussion and further references, see A Sanchez-Graells, “Data-driven procurement governance: two well-known elephant tales” (2019) 24(4) Communications Law 157-170; idem, “Some public procurement challenges in supporting and delivering smart urban mobility: procurement data, discretion and expertise”, in M Finck, M Lamping, V Moscon & H Richter (eds), Smart Urban Mobility – Law, Regulation, and Policy, MPI Studies on Intellectual Property and Competition Law (Springer 2020) forthcoming; and idem, “EU Public Procurement Policy and the Fourth Industrial Revolution: Pushing and Pulling as One?”, Working Paper for the YEL Annual Conference 2019 ‘EU Law in the era of the Fourth Industrial Revolution’].

To be sure, there are impending advances in the availability of quality procurement data as a result of the increased uptake of the Open Contracting Data Standards (OCDS) developed by the Open Contracting Partnership (OCP); the new rules on eForms; the development of eGovernment Application Programming Interfaces (APIs); the 2019 Open Data Directive; the principles of business to government data sharing (B2G data sharing); etc. However, it seems to me that the European Commission needs to exercise clearer leadership in the development of an EU-wide procurement data architecture. There is, in particular, one measure that could be easily adopted and would make a big difference.

The 2019 Open Data Directive (Directive 2019/1024/EU, ODD) establishes a special regime for high-value datasets, which need to be available free of charge (subject to some exceptions); machine readable; provided via APIs; and provided as a bulk download, where relevant (Art 14(1) ODD). Those high-value datasets are yet to be identified by the European Commission through implementing acts aimed at specifying datasets within a list of thematic categories included in Annex I, which includes the following datasets: geospatial; Earth observation and environment; meteorological; statistics; companies and company ownership; and mobility. In my view, most relevant procurement data can clearly fit within the category of statistical information.

More importantly, the directive specifies that the ‘identification of specific high-value datasets … shall be based on the assessment of their potential to: (a) generate significant socioeconomic or environmental benefits and innovative services; (b) benefit a high number of users, in particular SMEs; (c) assist in generating revenues; and (d) be combined with other datasets’ (Art 14(2) ODD). Given the high potential of procurement data to unlock (a), (b) and (d), as well as, potentially, to generate savings analogous to (c), the inclusion of datasets of procurement information in the future list of high-value datasets for the purposes of the Open Data Directive seems like an obvious choice.

Of course, there will be issues to iron out, as not all procurement information is equally susceptible of generating those advantages and there is the unavoidable need to ensure an appropriate balance between the publication of the data and the protection of legitimate (commercial) interests, as recognised by the Directive itself (Art 2(d)(iii) ODD) [for extended discussion, see here]. However, this would be a good step in the direction of ensuring the creation of a forward-looking data architecture.

At any rate, this is not really a radical idea. At least half of the EU is already publishing some public procurement open data, and many Eastern Partnership countries publish procurement data in OCDS (eg Moldova, Ukraine, Georgia). The suggestion here would bring more order into this bottom-up development and would help Member States understand what is expected, where to get help from, etc, as well as ensure the desirable level of uniformity, interoperability and coordination in the publication of the relevant procurement data.

Beyond that, in my view, more needs to be done to also generate backward-looking databases that enable the public sector to design and implement adequate sustainability policies, eg in relation to the repair and re-use of existing assets.

Only when an adequate data architecture is in place will it be possible to deploy advanced digital technologies. Policy-makers should therefore give this the highest priority.

7. Potential AI uses for sustainable public procurement

If/when sufficient data is available, there will be scope for the deployment of several specific implementations of artificial intelligence. It is possible to imagine the following potential uses:

  • Sustainability-oriented (big) data analytics: this should be relatively easy to achieve, as it would simply deploy big data analytics to monitor the extent to which procurement expenditure pursues or achieves specified sustainability goals. This could support the design and implementation of sustainability-oriented procurement policies and, where appropriate, the public disclosure of that information to foster civic engagement and feed back into political processes.

  • Development of sustainability screens/indexes: a slight variation on the former, this could facilitate the generation of synthetic data visualisations that reduce the burden of understanding the underlying analytics.

  • Machine learning-supported data analysis with sustainability goals: algorithms could be trained to establish eg the effectiveness of sustainability-oriented procurement policies and interventions, with the aim of streamlining existing policies and updating them at a pace and level of precision that would be difficult to achieve by other means.

  • Sustainability-oriented procurement planning: this would entail the deployment of algorithms aimed at predictive analytics that could improve procurement planning, in particular to maximise the sustainability impact of future procurements.
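The first two uses above are the most immediately tractable. As a minimal sketch, assuming hypothetical contract-award records (in practice these would come from the open procurement data architecture discussed earlier), a basic sustainability indicator could be computed as follows:

```python
from collections import defaultdict

# Hypothetical contract-award records; the fields are illustrative only.
awards = [
    {"year": 2021, "value": 120000, "green_criteria": True},
    {"year": 2021, "value": 300000, "green_criteria": False},
    {"year": 2022, "value": 250000, "green_criteria": True},
    {"year": 2022, "value": 150000, "green_criteria": True},
]

# Share of expenditure awarded under sustainability (green) criteria,
# per year -- the kind of indicator that could feed a sustainability
# screen or index, or be disclosed publicly.
totals, green = defaultdict(float), defaultdict(float)
for a in awards:
    totals[a["year"]] += a["value"]
    if a["green_criteria"]:
        green[a["year"]] += a["value"]

shares = {year: green[year] / totals[year] for year in totals}
print(shares)  # 2021: ~0.29 of spend under green criteria; 2022: 1.0
```

The analytics themselves are trivial once the data exists, which is precisely the point: the binding constraint is the data architecture, not the computation.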

Moreover, where clear rules/policies are specified, there will be scope for:

  • Compliance automation: it is possible to structure procurement processes and authorisations in such a way that compliance with pre-specified requirements is ensured (within the eProcurement system). This facilitates ex ante interventions that could minimise the risk of and the need for ex post contractual modifications or tender cancellations.

  • Recommender/expert systems: machine learning could assist in the design and implementation of procurement processes in a way that supports the public buyer, an instance of cognitive computing that could accelerate gains that would otherwise require more significant investments in the professionalisation and specialisation of the workforce.

  • Chatbot-enabled guidance: similarly to the two applications above, procurement intelligence could underpin chatbot-enabled systems that support public buyers.
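Compliance automation, in particular, is conceptually simple. A minimal sketch (the rules and tender fields below are hypothetical, not drawn from any specific eProcurement system) of the ex ante logic would be:

```python
# Each rule encodes one pre-specified requirement. An eProcurement system
# could block publication of the tender (an ex ante intervention) until
# every rule passes, minimising the need for ex post corrections.
# All rules and field names here are hypothetical illustrations.
RULES = [
    ("minimum tender period",
     lambda t: t["submission_days"] >= 30),
    ("estimated value stated",
     lambda t: t.get("estimated_value", 0) > 0),
    ("sustainability award criterion present",
     lambda t: any(c["type"] == "environmental" for c in t["criteria"])),
]

def check(tender):
    """Return the names of the requirements the tender fails."""
    return [name for name, rule in RULES if not rule(tender)]

tender = {
    "submission_days": 25,
    "estimated_value": 500000,
    "criteria": [{"type": "price"}],
}

failures = check(tender)
print(failures)
# prints: ['minimum tender period', 'sustainability award criterion present']
```

Note that this only works for requirements that can be fully pre-specified, a limitation that becomes central in the discussion of blockchain below.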

A further open question is whether AI could ever autonomously generate new sustainability policies. I dare not engage in such an exercise in futurology…

8. Limited use of blockchain/DLTs for sustainable public procurement

[Figure: overview of DLT/blockchain configurations]

By contrast with the potential for big data and the AI it can enable, the potential for blockchain applications in the context of procurement seems to me much more limited (for further details, see here, here and here). To put it simply, the core advantages of distributed ledger technologies/blockchain derive from their decentralised structure.

Whereas there are several different potential configurations of DLTs (see eg Rauchs et al, 2019 and Alessie et al, 2019, from where the graph is taken), the configuration of the blockchain affects its functionalities—with the highest levels of functionality being created by open and permissionless blockchains.

However, such a structure is fundamentally uninteresting to the public sector, which is unlikely to give up control over the system. This has been repeatedly stressed and confirmed in an overview of recent implementations (OECD, 2019a:16; see also OECD, 2018).

Moreover, even beyond the issue of public sector control, it should be stressed that existing open and permissionless blockchains operate on the basis of a proof-of-work (PoW) consensus mechanism, which has a very high carbon footprint (in particular in the case of Bitcoin). This also makes such systems inapt for sustainable digital procurement implementations.

Therefore, sustainable blockchain solutions (ie private and permissioned, based on proof-of-stake (PoS) or a similar consensus mechanism) are likely to present very limited advantages for procurement implementation over advanced systems of database management, and possibly even more generally (see eg this interesting critical paper by Low & Mik, 2019).

Moreover, even if there were a way to work around those constraints and design a viable technical solution, that by itself would still not fix the underlying complexity of procurement policy, which will necessarily constrain technologies that require deterministic coding, eg

  • Tenders on a blockchain - the proposals to use blockchain for the implementation of the tender procedure itself are very limited, in my opinion, by the difficulty in structuring all requirements on the basis of IF/THEN statements (see here).

  • Smart (public) contracts - the same constraints apply to smart contracts (see here and here).

  • Blockchain as an information exchange platform (Mélon, 2019, on file) - the proposals to use blockchain mechanisms to exchange information on best practices and tender documentation of successful projects could serve to address some of the confidentiality issues that could arise with ‘standard’ databases. However, regardless of the technical support to the exchange of information, the complexity in identifying best practices and in ensuring their replicability remains. This is evidenced by the European Commission’s Initiative for the exchange of information on the procurement of Large Infrastructure Projects (discussed here when it was announced), which has not been used at all in its first two years (as of 6 November 2019, there were no publicly-available files in the database).
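The deterministic-coding constraint running through the bullets above can be illustrated with a toy sketch (all thresholds and field names below are hypothetical, chosen only to show the gradient from encodable to non-encodable rules):

```python
# Some procurement rules are deterministic and encode naturally as
# IF/THEN logic; others require discretionary judgement that resists
# such encoding. All values below are illustrative only.

def late_submission(deadline, submitted_at):
    # Fully deterministic: a late tender is excluded, full stop.
    # This is the kind of rule a smart contract could enforce.
    return submitted_at > deadline

def abnormally_low(offer, all_offers):
    # Already harder: 'abnormally low' has no fixed legal threshold.
    # This average-based proxy is one heuristic among many, not the
    # legal test, which requires hearing the tenderer's explanations.
    average = sum(all_offers) / len(all_offers)
    return offer < 0.6 * average

print(late_submission(100, 101))           # prints: True
print(abnormally_low(40, [40, 100, 110]))  # prints: True

# And some rules cannot be reduced to IF/THEN at all: eg whether a
# tenderer's self-cleaning measures are sufficient to demonstrate its
# reliability (Art 57(6) Directive 2014/24/EU) is a discretionary
# assessment -- precisely the constraint on tenders-on-a-blockchain
# and smart (public) contracts noted above.
```

The gradient matters: the further a requirement sits from the first function and the closer to the last comment, the less a blockchain or smart-contract implementation can add.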

9. Sustainable procurement of digital technologies

A final issue to take into consideration is that the procurement of digital technologies needs itself to incorporate sustainability considerations. However, this does not seem to be happening amid the hype and over-excitement surrounding the experimentation with, and deployment of, those technologies.

Indeed, there are emerging guidelines on procurement of some digital technologies, such as AI (UK, 2019) (WEF, 2019) (see here for discussion). However, as could be expected, these guidelines are extremely technology-centric and their interaction with broader procurement policies is not necessarily straightforward.

I would argue that, in order for these technologies to enable a more sustainable procurement, sustainability considerations need to be embedded not only in their application, but also at earlier stages: for example, in an analysis of whether the life-cycle of existing solutions warrants their replacement, or in an assessment of the long-term impacts of implementing digital technologies (eg their life-cycle carbon footprint).

Pursuing technological development for its own sake can have significant environmental impacts that must be assessed.

10. Concluding thoughts

This (very long…) blog post has structured some of my thoughts on the interaction of sustainability and digitalisation in the context of public procurement. By way of conclusion, I would just try to translate this into priorities for policy-making (and research). Overall, I believe that the main effort for policy-makers should now go into creating an enabling data architecture. Research in the short term can thus focus on its regulation. In the medium term, as use cases become clearer in the policy-making sphere, research should move towards the design of digital technology-enabled solutions (for sustainable public procurement, but not only) and their regulation, governance and social impacts. The long term is too uncertain for me to foresee. I can only guess that we will cross that bridge when/if we get there…