Administrative case law on the concepts of "algorithm" and "artificial intelligence"

A tender notice provided for the supply of pacemakers and included among the award criteria <<the tabular parameter "Algorithm for the prevention + treatment of atrial tachyarrhythmias", to which 15 points were to be assigned where both algorithms were present and 7 points in the case of "presence of only the prevention algorithm or only the treatment of atrial tachyarrhythmias">>.

The TAR, having noted that "the tender rules require only the presence of a treatment algorithm (without specifying anything further)", defined the concept of algorithm, <<stating that "the term simply refers to a finite sequence of instructions, well defined and unambiguous, such that they can be executed mechanically and produce a given result (such as solving a problem or performing a calculation and, in the present case, treating an arrhythmia)". The court of first instance added, in order to delimit the concept more precisely, that "the notion of 'algorithm' must not be confused with that of 'artificial intelligence', which instead refers to the study of 'intelligent agents', that is, of systems that perceive their surroundings and take actions maximising the probability of successfully achieving set goals… such are, for example, systems that interact with their environment or with people, that learn from experience (machine learning), that process natural language, or that recognise faces and movements". 4. Having defined the notion of algorithm, the first-instance court concluded its reasoning as follows: "the arrhythmia-treatment algorithm is nothing more than the set of steps (of stimuli generated by the pacemaker according to predefined instructions) needed to treat the individual type of arrhythmia. Contrary to what the contracting authority erroneously held, this concept does not necessarily imply that the device must be able to recognise the need automatically (i.e. to diagnose the type of arrhythmia) and automatically administer the correct mechanical therapy (treatment). In other words, the wording of the invitation letter does not require that, when an arrhythmic episode occurs, the treatment algorithm be started automatically by the device itself.
That characteristic pertains to a further component, not indicated in the tender rules, namely an artificial-intelligence algorithm for diagnosing the arrhythmia and starting the treatment. Abbott therefore rightly argued that the tender commission's assessment was erroneous: although its device contained an arrhythmia-treatment algorithm (namely the NIPS algorithm, undisputedly definable as such), the commission awarded the device offered only 7 points instead of 15. The commission in fact confused, unduly conflating them, the concept of algorithm with that of automatic start of the treatment".>>

The Council of State (Consiglio di Stato, sez. III) now intervenes with judgment no. 7891 of 25 November 2021, departing from the TAR and reasoning on the point as follows: <<There is no doubt that the common, general notion of algorithm brings to mind "simply a finite sequence of instructions, well defined and unambiguous, such that they can be executed mechanically and produce a given result" (this is the definition given at first instance). Nonetheless, it must be observed that, when applied to technological systems, the notion is inescapably linked to the concept of automation, that is, to systems of action and control capable of reducing human intervention. The degree and frequency of human intervention depend on the complexity and accuracy of the algorithm that the machine is called upon to process. Artificial intelligence is something different. There, the algorithm incorporates machine-learning mechanisms and creates a system that does not merely apply the software rules and pre-set parameters (as a "traditional" algorithm does) but, on the contrary, constantly derives new inference criteria from the data and takes efficient decisions on the basis of those elaborations, through a process of automatic learning.

9.2. In the present case, in order to obtain the supply of a device with a high degree of automation, the administration did not need to refer expressly to elements of artificial intelligence. It was entirely sufficient, as it in fact did, also given the peculiarity of the product (pacemakers being, by definition, equipped with a continuous function of "sensing" and regulating the heart rhythm), to refer to the specific concept of algorithm, that is, to instructions capable of providing an efficient degree of automation, beyond the basic one, in both the prevention and the treatment of atrial tachyarrhythmias. Modern, high-end pacemakers are in fact equipped with an ever greater number of programmable parameters and of specific algorithms designed to optimise the pacing therapy in relation to the patient's specific characteristics. The administration expressed a preference for the joint presence of algorithms for the prevention and the treatment of "atrial tachyarrhythmias".>>
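The distinction the courts draw can be sketched in code. The following is a minimal illustration with invented function names, thresholds, and data (nothing here comes from the judgments): a "traditional" algorithm mechanically applies a pre-set rule, while a machine-learning system derives its decision criterion from data.

```python
# Toy sketch (invented example): "traditional" algorithm vs. machine learning.

def traditional_algorithm(heart_rate_bpm, threshold=180):
    """A finite sequence of pre-set instructions, executed mechanically:
    the decision rule is fixed in advance by the programmer."""
    return "treat" if heart_rate_bpm > threshold else "observe"

def learned_threshold(examples):
    """A minimal 'learning' step: instead of being hard-coded, the decision
    criterion is derived from labelled data (here, the midpoint between the
    lowest treated rate and the highest merely observed one)."""
    treated = [hr for hr, label in examples if label == "treat"]
    observed = [hr for hr, label in examples if label == "observe"]
    return (min(treated) + max(observed)) / 2

# The "traditional" rule never changes; the learned one shifts with the data.
data = [(200, "treat"), (190, "treat"), (150, "observe"), (120, "observe")]
rule = learned_threshold(data)  # 170.0 for this data
```

Real AI systems go far beyond this midpoint heuristic, but the contrast is the one the Council of State describes: in the first function the criterion is pre-set, in the second it is inferred from the data supplied.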

Artificial intelligence can be an "inventor" under Australian law

The dispute opened by Dr Thaler with his DABUS machine, through which he is seeking to obtain a patent in his own name but as successor in title to an inventor consisting of an artificial intelligence (AI), now finds a positive outcome in Australia.

There, the Federal Court, with its decision of 30 July 2021, Thaler v Commissioner of Patents [2021] FCA 879, file no. VID 108 of 2021, after a detailed examination, overturns the administrative decision refusing the application.

<<Now whilst DABUS, as an artificial intelligence system, is not a legal person and cannot legally assign the invention, it does not follow that it is not possible to derive title from DABUS. The language of s 15(1)(c) recognises that the rights of a person who derives title to the invention from an inventor extend beyond assignments to encompass other means by which an interest may be conferred.>>, § 178

Dr Thaler may therefore legitimately declare that he derives title from DABUS: <<In my view, Dr Thaler, as the owner and controller of DABUS, would own any inventions made by DABUS, when they came into his possession. In this case, Dr Thaler apparently obtained possession of the invention through and from DABUS.  And as a consequence of his possession of the invention, combined with his ownership and control of DABUS, he prima facie obtained title to the invention.  By deriving possession of the invention from DABUS, Dr Thaler prima facie derived title.  In this respect, title can be derived from the inventor notwithstanding that it vests ab initio other than in the inventor.  That is, there is no need for the inventor ever to have owned the invention, and there is no need for title to be derived by an assignment.>>, § 189.

And further: <<In my view on the present material there is a prima facie basis for saying that Dr Thaler is a person who derives title from the inventor, DABUS, by reason of his possession of DABUS, his ownership of the copyright in DABUS’ source code, and his ownership and possession of the computer on which it resides. Now more generally there are various possibilities for patent ownership of the output of an artificial intelligence system. First, one might have the software programmer or developer of the artificial intelligence system, who no doubt may directly or via an employer own copyright in the program in any event.  Second, one might have the person who selected and provided the input data or training data for and trained the artificial intelligence system.  Indeed, the person who provided the input data may be different from the trainer.  Third, one might have the owner of the artificial intelligence system who invested, and potentially may have lost, their capital to produce the output.  Fourth, one might have the operator of the artificial intelligence system.  But in the present case it would seem that Dr Thaler is the owner>>, §§ 193-194.

In summary, <<in my view an artificial intelligence system can be an inventor for the purposes of the Act. First, an inventor is an agent noun; an agent can be a person or thing that invents.  Second, so to hold reflects the reality in terms of many otherwise patentable inventions where it cannot sensibly be said that a human is the inventor.  Third, nothing in the Act dictates the contrary conclusion.>>, § 10.

Note that Dr Thaler <<is the owner of copyright in DABUS’s source code. He is also the owner, is responsible for and is the operator of the computer on which DABUS operates.  But Dr Thaler is not the inventor of the alleged invention the subject of the application.  The inventor is identified on the application as “DABUS, The invention was autonomously generated by an artificial intelligence”.  DABUS is not a natural or legal person.  DABUS is an artificial intelligence system that incorporates artificial neural networks.>>, § 8.

I reported the contrary English precedent in a post of 2 October 2020.

A month earlier, Dr Thaler had obtained a patent on the same invention in South Africa: www.ipwatchdog.com reported the news in a post of 29 July, which also links to the administrative document stating that the applicant is Thaler but the inventor is DABUS.

(news and link to the judgment via gestaltlaw.com)

Artificial intelligence, reduced competition and consumer harm: an in-depth paper from the UK authority

The UK competition authority has released an introductory paper on the topic, opening it to consultation: Algorithms: How they can reduce competition and harm consumers, 19 January 2021, by the Competition and Markets Authority (CMA).

The paper is thorough and interesting: it tackles a topic that is not new but has so far been little studied (the CMA itself says so).

It examines the harms that may arise from personalisation, both through prices (personalised pricing) and through other means.

Personalised prices can sometimes be beneficial. At other times, however, <<personalised pricing could lead to consumer harm. The conditions under which competition authorities might be concerned about personalised pricing are outlined in an OFT economics paper in 2013, and include where there is insufficient competition (i.e. monopolist price discrimination), where personalised pricing is particularly complex or lacking transparency to consumers and/or where it is very costly for firms to implement. In addition, personalised pricing could harm overall economic efficiency if it causes consumers to lose trust in online markets. It could also be harmful for economic efficiency when personalised pricing increases search and transaction costs, such as consumers needing to shop around or take significant or costly steps to avoid being charged a premium>> (p. ; see then the section Complex and opaque pricing techniques).

As for non-price personalisation harms: Personalised rankings, Recommendation and filtering algorithms, Manipulating user journeys, Algorithmic discrimination (on which there is in fact already a great deal of theoretical writing, though with little attention to practical evidence), Geographic targeting, Unfair ranking and design, Preferencing others for commercial advantage, dark patterns (which hide what the user is about to accept), etc.

There is then a section on exclusionary practices (self-preferencing, of which the charges against Amazon's marketplace are a well-known example; Manipulating platform algorithms and unintended exclusion; Predatory pricing).

Another section concerns algorithmic collusion: Facilitate explicit coordination, Hub-and-spoke, Autonomous tacit collusion.

The British practical bent emerges in section 3, Techniques to investigate these harms (distinguishing between cases with and without direct access to the data and the algorithms).

Finally, section 4 offers policy considerations. There is a serious case for intervening:

  • The opacity of algorithmic systems and the lack of operational transparency make it hard for consumers and customers to effectively discipline firms. Many of the practices we have outlined regarding online choice architecture are likely to become more subtle and challenging to detect.
  • Some of the practices we outline involve the algorithmic systems of firms that occupy important strategic positions in the UK economy (and internationally).

In conclusion (§ 5):

Firms maximise profit. In pursuing this objective, without adequate governance, firms designing machine learning systems to achieve this will continually refine and optimise for this using whatever data is useful. Algorithmic systems can interact with pre-existing sources of market failure, such as market power and consumers’ behavioural biases. This means that using some algorithmic systems may result in products that are harmful. As regulators, we need to ensure that firms have incentives to adopt appropriate standards and checks and balances.

The market positions of the largest gateway platforms are substantial and appear to be durable, so unintended harms from their algorithmic systems can have large impacts on other firms that are reliant on the gateway platforms for their business. If algorithmic systems are not explainable and transparent, it may also make it increasingly difficult for regulators to challenge ineffective measures to counter harms.

Due to the various harms identified in this paper, firms must ensure that they are able to explain how their algorithmic systems work.

More on IP and Artificial Intelligence

A new document on the relationship between IP and Artificial Intelligence (hereinafter: AI).

It is the study published by The Joint Institute for Innovation Policy (Brussels) and by IViR – University of Amsterdam, authored by Christian HARTMANN and Jacqueline E. M. ALLAN and, respectively, by P. Bernt HUGENHOLTZ, João P. QUINTAIS and Daniel GERVAIS, entitled <<Trends and Developments in Artificial Intelligence – Challenges to the Intellectual Property Rights Framework, Final report>>, September 2020.

The study focuses in particular on patents and copyright.

See the summary and the recommendations on copyright in § 5.1, p. 116 ff.:

  • Current EU copyright rules are generally sufficiently flexible to deal with the challenges posed by AI-assisted outputs.
  • The absence of (fully) harmonised rules of authorship and copyright ownership has led to divergent solutions in national law of distinct Member States in respect of AI-assisted works, which might justify a harmonisation initiative.
  • Further research into the risks of false authorship attributions by publishers of “work-like” but “authorless” AI productions, seen in the light of the general authorship presumption in art. 5 of the Enforcement Directive, should be considered.
  • Related rights regimes in the EU potentially extend to “authorless” AI productions in a variety of sectors: audio recording, broadcasting, audiovisual recording, and news. In addition, the sui generis database right may offer protection to AI-assisted databases that are the result of substantial investment.
  • The creation/obtaining distinction in the sui generis right is a cause of legal uncertainty regarding the status of machine-generated data that could justify revision or clarification of the EU Database Directive.
  • Further study on the role of alternative IP regimes to protect AI-assisted outputs, such as trade secret protection, unfair competition and contract law, should be encouraged.

Then see those on patent law, in § 5.2, p. 118 ff.:

  • The EPC is currently suitable to address the challenges posed by AI technologies in the context of AI-assisted inventions or outputs.
  • While the increasing use of AI systems for inventive purposes does not require material changes to the core concepts of patent law, the emergence of AI may have practical consequences for national Intellectual Property Offices (IPOs) and the EPO. Also, certain rules may in specific cases be difficult to apply to AI-assisted outputs and, where that is the case, it may be justified to make minor adjustments.
  • In the context of assessing novelty, IPOs and the EPO should consider investing in maintaining a level of technical capability that matches the technology available to sophisticated patent applicants.
  • In the context of assessing the inventive step, it may be advisable to update the EPO examination guidelines to adjust the definition of the POSITA and secondary indicia so as to track developments in AI-assisted inventions or outputs.
  • In the context of assessing sufficiency of disclosure, it would be useful to study the feasibility and usefulness of a deposit system (or similar legal mechanism) for AI algorithms and/or training data and models that would require applicants in appropriate cases to provide information that is relevant to meet this legal requirement, while including safeguards to protect applicants’ confidential information to the extent it is required under EU or international rules [perhaps the single most interesting point!]
  • For the remaining potential challenges identified in this report arising out of AI-assisted inventions or outputs, it may be good policy to wait for cases to emerge to identify actual issues that require a regulatory response, if any.

More on artificial intelligence and intellectual property: a survey by the US office

The relationship between intellectual property (IP) and artificial intelligence (AI) is attracting ever more attention.

The United States Patent and Trademark Office (USPTO) has just published the results of a survey (request for comments, RFC) on AI and IP rights (there were 99 responses, see Appendix I, from organisations but also from individuals): the USPTO's report "Public Views on AI and IP Policy", October 2020 (I take the news from the 12 October 2020 post by Eleonora Rosati/Bertrand Sautier on IPKat).

The report (that is, the responses it relates) is quite interesting. I would highlight the following:

1 – INVENTIONS

  • the responses do not consider changes to patent law necessary: to question 3 (<Do current patent laws and regulations regarding inventorship need to be revised to take into account inventions where an entity or entities other than a natural person contributed to the conception of an invention?>), the majority of responses <reflected the view that there is no need for revising patent laws and regulations on inventorship to account for inventions in which an entity or entities other than a natural person contributed to the conception of an invention.>, p. 5. To question 4 (<Should an entity or entities other than a natural person, or company to which a natural person assigns an invention, be able to own a patent on the AI invention? For example: Should a company who trains the artificial intelligence process that creates the invention be able to be an owner?>), the large majority said that <no changes should be necessary to the current U.S. law—that only a natural person or a company, via assignment, should be considered the owner of a patent or an invention. However, a minority of responses stated that while inventorship and ownership rights should not be extended to machines, consideration should be given to expanding ownership to a natural person: (1) who trains an AI process, or (2) who owns/controls an AI system>, p. 7
  • on question 10 (<Are there any new forms of intellectual property protections that are needed for AI inventions, such as data protection? Data is a foundational component of AI. Access to data>), the responses are instead divided: <Commenters were nearly equally divided between the view that new intellectual property rights were necessary to address AI inventions and the belief that the current U.S. IP framework was adequate to address AI inventions. Generally, however, commenters who did not see the need for new forms of IP rights suggested that developments in AI technology should be monitored to ensure needs were keeping pace with AI technology developments.
    The majority of opinions requesting new IP rights focused on the need to protect the data associated with AI, particularly ML. For example, one opinion stated that “companies that collect large amounts of data have a competitive advantage relative to new entrants to the market. There could be a mechanism to provide access to the repositories of data collected by large technology companies such that proprietary rights to the data are protected but new market entrants and others can use such data to train and develop their AI.”>, p. 15

2 – OTHER IP RIGHTS

  • question 1: is a creation produced by AI protectable by copyright? No, both under current law and as a matter of policy: <The vast majority of commenters acknowledged that existing law does not permit a non-human to be an author (outside of the work-for-hire doctrine, which creates a legal fiction for non-human employers to be authors under certain circumstances); they also responded that this should remain the law. One comment stated: “A work produced by an AI algorithm or process, without intervention of a natural person contributing expression to the resulting works, does not, and should not qualify as a work of authorship protectable under U.S. copyright law.”109 Multiple commenters noted that the rationale for this position is to support legal incentives for humans to create new works.110 Other commenters noted that AI is a tool, similar to other tools that have been used in the past to create works: “Artificial intelligence is a tool, just as much as Photoshop, Garage Band, or any other consumer software in wide use today … the current debate over whether a non-human object or process can be ‘creative’ is not new; the government has long resisted calls to extend authorship to corporations or entities that are not natural humans>, pp. 20-21
  • question 2: what level of human involvement is then required for protectability [a question of great practical importance!]? It can only be assessed case by case: <More broadly speaking, commenters’ response to this question either referred back to their response to the first question without comment (stating that human involvement is necessary for copyright protection) or referred back and made some further observations or clarifications, often pointing out that each scenario will require fact-specific, case-by-case consideration. Several commenters raised or reiterated their view that natural persons, for the foreseeable future, will be heavily involved in the use of AI, such as when designing models and algorithms, identifying useful training data and standards, determining how technology will be used, guiding or overriding choices made by algorithms, and selecting which outputs are useful or desirable in some way. The commenters thus predicted that the outputs of AI will be heavily reliant on human creativity>, p. 22.
  • question 7, on the use of AI in trademark searches: see the distinction between use by the USPTO and use by trademark owners, p. 31 ff.
  • question 9, on database protection, p. 36 ff.: the current rules are adequate and there is no need to introduce ad hoc legislation as in the EU: <Commenters who answered this question mostly found that existing laws are adequate to continue to protect AI-related databases and datasets and that there is no need for reconsidering a sui generis database protection law, such as exists in Europe. Furthermore, one commenter cautioned “that AI technology is developing rapidly and that any laws proposed now could be obsolete by the time they are enacted>, p. 37

EU Parliament report on the link between artificial intelligence (AI) and intellectual property (IP)

The <REPORT on intellectual property rights for the development of artificial intelligence technologies> (2020/2015(INI)) – A9-0176/2020 of 2 October 2020 has been published, as approved by the EU Parliament (Committee on Legal Affairs, rapporteur Stéphane Séjourné).

There is nothing particularly new in it: it goes over the main concerns and needs that anyone following AI is by now used to reading about.

I quote a few passages from the MOTION FOR A EUROPEAN PARLIAMENT RESOLUTION, p. 3 ff.:

  • it notes that last year's Commission documents on AI (see my post of 20 February 2020) did not take IP into account: <notes, however, that the issue of the protection of IPRs in the context of the development of AI and related technologies has not been addressed by the Commission, despite the key importance of these rights;>, § 1, p. 6.
  • any future legislation should take the form of a regulation, not a directive, § 3.
  • on streaming, it notes <the importance of streaming services being transparent and responsible in their use of algorithms, so that access to cultural and creative content in various forms and different languages as well as impartial access to European works can be better guaranteed;>, § 8
  • it recommends a sector-specific, type-by-type approach to IP.
  • as to enforcement, it <acknowledges the potential of AI technologies to improve the enforcement of IPRs, notwithstanding the need for human verification and review, especially where legal consequences are concerned>, § 11;
  • on non-personal data, it <is worried about the possibility of mass manipulation of citizens being used to destabilise democracies and calls for increased awareness-raising and media literacy as well as for urgently needed AI technologies to be made available to verify facts and information>, § 18; it observes that <AI technologies could be useful in the context of IPR enforcement, but would require human review and a guarantee that any AI-driven decision-making systems are fully transparent; stresses that any future AI regime may not circumvent possible requirements for open source technology in public tenders or prevent the interconnectivity of digital services>, § 18; and further: <notes that AI systems are software-based and rely on statistical models, which may include errors; stresses that AI-generated output must not be discriminatory and that one of the most efficient ways of reducing bias in AI systems is to ensure – to the extent possible under Union law – that the maximum amount of non-personal data is available for training purposes and machine learning; calls on the Commission to reflect on the use of public domain data for such purposes>, § 18.

From the subsequent EXPLANATORY STATEMENT, pp. 12-13:

  • AI-related patent applications at the EPO have more than tripled in ten years;
  • AI is used, for example, for prior-art searches;
  • re-assessing IP in the light of AI is a priority for the EU.

Artificial intelligence and machine learning: an excellent summary of the issues from the UK Parliament

The UK Parliament has published a post summarising the main features of artificial intelligence (AI) and machine learning (ML): see POSTNOTE no. 633, October 2020, INTERPRETABLE MACHINE LEARNING.

With the clarity and precision that usually distinguish popular-science communication in the Anglo-Saxon world.

I quote only the definitions of AI and ML (see Box 1):

<< Artificial intelligence (AI)  – There is no universally agreed definition of AI. It is defined in the Industrial Strategy as “technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”. AI is useful for identifying patterns in large sets of data and making predictions.

Machine learning (ML) – ML is a branch of AI that allows a system to learn and improve from examples without all its instructions being explicitly programmed. An ML system is trained to carry out a task by analysing large amounts of training data and building a model that it can use to process future data, extrapolating its knowledge to unfamiliar situations. Applications of ML include virtual assistants (such as Alexa), product recommendation systems, and facial recognition. There is a range of ML techniques, but many experts attribute recent advances to developments in deep learning:

1) artificial neural networks (ANNs). Type of ML with a design inspired by the way neurons transmit information in the human brain. Multiple data processing units (nodes) are connected in layers, with the outputs of a previous layer used as inputs for the next.

2) deep learning (DL). Variation of ANNs. Uses a greater number of layers of artificial neurons to solve more difficult problems. DL advances have improved areas such as voice and image recognition >>.
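The layered structure described in point 1), with the outputs of one layer used as inputs for the next, can be sketched as a toy feed-forward pass. Everything here (weights, layer sizes, the sigmoid activation) is an invented illustration, not taken from the POSTNOTE:

```python
import math

def forward(layers, inputs):
    """Feed-forward pass through a tiny artificial neural network: each layer
    is a list of nodes, each node a list of weights (one per input coming
    from the previous layer); a node's output is the sigmoid of its
    weighted sum, and each layer's outputs feed the next layer."""
    activations = inputs
    for layer in layers:
        activations = [
            1.0 / (1.0 + math.exp(-sum(w * a for w, a in zip(weights, activations))))
            for weights in layer
        ]
    return activations

# Two inputs -> hidden layer of two nodes -> one output node. "Deep" learning
# (point 2) simply means stacking more such layers.
net = [
    [[0.5, -0.5], [1.0, 1.0]],  # hidden layer: two nodes, two weights each
    [[1.0, -1.0]],              # output layer: one node, two weights
]
output = forward(net, [1.0, 2.0])
```

Training (adjusting the weights from examples) is the "learning" part of ML and is not shown here; the sketch only illustrates how information flows through the layers.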

The post dwells at some length on "interpretability". This is an important topic wherever a decision is taken on the basis of AI/ML (and it will therefore become ever more important): to assess the correctness of the decision and whether it can be challenged, the addressee must be able to understand its reasoning without undue effort.

It reads, for example: <<Some stakeholders have said that ML that is not inherently interpretable should not be used in applications that could have a significant impact on an individual’s life (for example, in criminal justice decisions). The ICO and Alan Turing Institute have recommended that organisations prioritise using systems that use interpretable ML methods if possible, particularly for applications that have a potentially high impact on a person or are safety critical>> (p. 3).

It is not clear, however, why interpretability should be pursued only for the most important decisions and, conversely, why the addressee may be left entirely in the dark for the less important ones (and how, in any case, are the former to be distinguished from the latter?).
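The point can be made concrete with a minimal sketch (feature names and weights are invented for illustration): an interpretable model can return, alongside the decision, the contribution of each factor, which is exactly what the addressee needs in order to contest it, whereas a black box returns only the outcome.

```python
def interpretable_score(features, weights):
    """A linear scoring model is interpretable by construction: each
    feature's contribution to the final score can be shown to the person
    affected by the decision, not just the decision itself."""
    contributions = {name: weights[name] * features[name] for name in weights}
    return sum(contributions.values()), contributions

score, reasons = interpretable_score(
    {"income": 3.0, "debts": 4.0},
    {"income": 2.0, "debts": -1.0},
)
# reasons == {"income": 6.0, "debts": -4.0}: the addressee can see *why*
# the score is 2.0, not merely that it is 2.0.
```

With an uninterpretable ML model the same per-factor breakdown is not available by construction, which is what the recommended post-hoc explanation techniques try to approximate.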

The EU Commission on Artificial Intelligence (AI)

Two Commission documents on AI have recently been released.

1) the White Paper On Artificial Intelligence – A European approach to excellence and trust of 19 February 2020, COM(2020) 65 final.

It recalls other documents of interest:

– the Commission Communication <<Artificial Intelligence for Europe>> of 25 April 2018, COM(2018) 237 final;

– the documents produced by the High-Level Expert Group on Artificial Intelligence, above all: (i) the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) of 8 April 2019 (see also the document on the definition of Artificial Intelligence of 8 April 2019), and (ii) the Policy and investment recommendations for trustworthy Artificial Intelligence of 26 June 2019.

2) the Technical Report of the European Commission Joint Research Centre (JRC) on the crucial issues of Robustness and Explainability of Artificial Intelligence, 2020, by HAMON Ronan, JUNKLEWITZ Henrik and SANCHEZ MARTIN Jose Ignacio.