Artificial intelligence, reduced competition and consumer harm: an in-depth paper from the UK authority

The UK competition authority has published an introductory paper on the topic and opened it to consultation: Algorithms: How they can reduce competition and harm consumers, 19 January 2021, by the Competition and Markets Authority (CMA).

The paper is thorough and interesting: it tackles a topic that is not new but has so far been little studied (as the CMA itself acknowledges).

It examines the harms that may arise from personalisation, both through prices (personalised pricing) and through other means.

Personalised prices can sometimes be beneficial. At other times, however, “personalised pricing could lead to consumer harm. The conditions under which competition authorities might be concerned about personalised pricing are outlined in an OFT economics paper in 2013, and include where there is insufficient competition (i.e. monopolist price discrimination), where personalised pricing is particularly complex or lacking transparency to consumers and/or where it is very costly for firms to implement. In addition, personalised pricing could harm overall economic efficiency if it causes consumers to lose trust in online markets. It could also be harmful for economic efficiency when personalised pricing increases search and transaction costs, such as consumers needing to shop around or take significant or costly steps to avoid being charged a premium” (p. ; see further the section Complex and opaque pricing techniques).

As for harms from non-price personalisation, the paper covers: Personalised rankings, Recommendation and filtering algorithms, Manipulating user journeys, Algorithmic discrimination (on which there is in fact already a vast theoretical literature, though one paying little attention to practical evidence), Geographic targeting, Unfair ranking and design, Preferencing others for commercial advantage, dark patterns (which conceal what the user is about to accept), etc.

There is then a section on exclusionary practices (self-preferencing, of which the charges against Amazon's marketplace are a well-known example; Manipulating platform algorithms and unintended exclusion; Predatory pricing).

Another section deals with algorithmic collusion: Facilitate explicit coordination, Hub-and-spoke, Autonomous tacit collusion.
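The last of these categories, autonomous tacit collusion, is the most counter-intuitive: pricing algorithms can sustain supra-competitive prices without any communication between firms. The following toy simulation (entirely mine, not from the CMA paper; the pricing rule, cost level and starting prices are invented for illustration) shows the mechanism in its simplest form: two bots that each match the lower of the last observed prices, and never undercut further, lock in whatever price level they start from.

```python
# Toy illustration of autonomous tacit collusion (hypothetical rule, not from
# the CMA paper): each bot prices at the lower of the two last prices, but
# never undercuts below that and never below cost. No communication occurs,
# yet any supra-competitive starting price is sustained indefinitely.

COST = 10  # assumed marginal cost; competition would drive price towards it

def match_rival(my_last, rival_last):
    """Pricing rule: match the lower of the two last prices, floored at cost."""
    return max(COST, min(my_last, rival_last))

def simulate(p_a, p_b, rounds=50):
    """Run both bots against each other and return the final price pair."""
    for _ in range(rounds):
        p_a, p_b = match_rival(p_a, p_b), match_rival(p_b, p_a)
    return p_a, p_b

print(simulate(100, 100))  # -> (100, 100): the high price persists
print(simulate(100, 80))   # -> (80, 80): settles above cost, never eroded
```

A classic undercutting rule (price slightly below the rival) would instead drive both prices down to `COST`; the harm arises precisely because "never undercut" can emerge from each firm's independent profit-maximising learning.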

The British practical bent emerges in section 3, Techniques to investigate these harms, which distinguishes between cases where the authority has direct access to data and algorithms and cases where it does not.
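Without direct access, one outside-in technique is mystery shopping: present the same product to different simulated user profiles and compare the prices observed. A minimal sketch of such an analysis (all names, figures and the 5% threshold are my own invented assumptions, not the CMA's methodology):

```python
# Hypothetical "outside-in" check for personalised pricing: the investigator
# has no access to the firm's algorithm, only prices collected by simulated
# shopper profiles for the same product in the same time window.

from statistics import mean

def price_gap(observations):
    """observations: {profile_name: [prices seen for the same product]}.
    Returns the relative gap between the highest- and lowest-paying profile."""
    avgs = {profile: mean(prices) for profile, prices in observations.items()}
    lo, hi = min(avgs.values()), max(avgs.values())
    return (hi - lo) / lo

# Invented example data: two profiles, three observations each.
observed = {
    "new_device_no_history": [102.0, 99.0, 101.0],
    "loyal_customer":        [121.0, 119.0, 120.0],
}
gap = price_gap(observed)
print(f"{gap:.1%}")   # -> 19.2%
flagged = gap > 0.05  # crude 5% threshold, purely illustrative
```

A real investigation would of course control for timing, stock levels, A/B tests and legitimate price dynamics before treating a gap as evidence of personalisation.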

Finally, section 4 offers policy considerations. There is a serious case for intervening:

  • The opacity of algorithmic systems and the lack of operational transparency make it hard for consumers and customers to effectively discipline firms. Many of the practices we have outlined regarding online choice architecture are likely to become more subtle and challenging to detect.
  • Some of the practices we outline involve the algorithmic systems of firms that occupy important strategic positions in the UK economy (and internationally).

In conclusion (§ 5):

Firms maximise profit. In pursuing this objective, without adequate governance, firms designing machine learning systems to achieve this will continually refine and optimise for this using whatever data is useful. Algorithmic systems can interact with pre-existing sources of market failure, such as market power and consumers’ behavioural biases. This means that using some algorithmic systems may result in products that are harmful. As regulators, we need to ensure that firms have incentives to adopt appropriate standards and checks and balances.

The market positions of the largest gateway platforms are substantial and appear to be durable, so unintended harms from their algorithmic systems can have large impacts on other firms that are reliant on the gateway platforms for their business. If algorithmic systems are not explainable and transparent, it may also make it increasingly difficult for regulators to challenge ineffective measures to counter harms.

Due to the various harms identified in this paper, firms must ensure that they are able to explain how their algorithmic systems work.