The US writers' association's lawsuit against OpenAI

The complaint filed before the Southern District of New York against OpenAI for copyright infringement, brought by the prominent Authors Guild and others (including very well-known writers), is available online (e.g., here).

Training its AI does indeed appear to entail reproduction and therefore (absent an exception or countervailing right) infringement.

Under EU law, Article 4 of Directive (EU) 2019/790 presupposes lawful access to the work in order to invoke the commercial text and data mining exception:

<< 1. Member States shall provide for an exception or limitation to the rights provided for in Article 5(a) and Article 7(1) of Directive 96/9/EC, Article 2 of Directive 2001/29/EC, Article 4(1)(a) and (b) of Directive 2009/24/EC and Article 15(1) of this Directive for reproductions and extractions of lawfully accessible works and other subject matter for the purposes of text and data mining.

2. Reproductions and extractions made pursuant to paragraph 1 may be retained for as long as is necessary for the purposes of text and data mining.

3. The exception or limitation provided for in paragraph 1 shall apply on condition that the use of works and other subject matter referred to in that paragraph has not been expressly reserved by their rightholders in an appropriate manner, such as machine-readable means in the case of content made publicly available online.

4. This Article shall not affect the application of Article 3 of this Directive>>.

The central passage of the complaint (on whether infringement occurs under US law) is found in §§ 51-64:

<<51. The terms “artificial intelligence” or “AI” refer generally to computer systems designed to imitate human cognitive functions.
52. The terms “generative artificial intelligence” or “generative AI” refer specifically to systems that are capable of generating “new” content in response to user inputs called “prompts.”
53. For example, the user of a generative AI system capable of generating images
from text prompts might input the prompt, “A lawyer working at her desk.” The system would then attempt to construct the prompted image. Similarly, the user of a generative AI system capable of generating text from text prompts might input the prompt, “Tell me a story about a lawyer working at her desk.” The system would then attempt to generate the prompted text.
54. Recent generative AI systems designed to recognize input text and generate
output text are built on “large language models” or “LLMs.”
55. LLMs use predictive algorithms that are designed to detect statistical patterns in the text datasets on which they are “trained” and, on the basis of these patterns, generate responses to user prompts. “Training” an LLM refers to the process by which the parameters that define an LLM’s behavior are adjusted through the LLM’s ingestion and analysis of large
“training” datasets.
56. Once “trained,” the LLM analyzes the relationships among words in an input prompt and generates a response that is an approximation of similar relationships among words in the LLM’s “training” data. In this way, LLMs can be capable of generating sentences, paragraphs, and even complete texts, from cover letters to novels.
57. “Training” an LLM requires supplying the LLM with large amounts of text for
the LLM to ingest—the more text, the better. That is, in part, the large in large language model.
58. As the U.S. Patent and Trademark Office has observed, LLM “training” “almost
by definition involve[s] the reproduction of entire works or substantial portions thereof.”4
59. “Training” in this context is therefore a technical-sounding euphemism for
“copying and ingesting.”
60. The quality of the LLM (that is, its capacity to generate human-seeming responses
to prompts) is dependent on the quality of the datasets used to “train” the LLM.
61. Professionally authored, edited, and published books—such as those authored by Plaintiffs here—are an especially important source of LLM “training” data.
62. As one group of AI researchers (not affiliated with Defendants) has observed,
“[b]ooks are a rich source of both fine-grained information, how a character, an object or a scene looks like, as well as high-level semantics, what someone is thinking, feeling and how these states evolve through a story.”5
63. In other words, books are the high-quality materials Defendants want, need, and have therefore outright pilfered to develop generative AI products that produce high-quality results: text that appears to have been written by a human writer.
64. This use is highly commercial>>