More on internet providers’ liability for their users’ copyright infringements (with a note on the US Supreme Court’s Twitter v. Taamneh, 2023)

A thorough opinion (flagged and linked by Eric Goldman, who always deserves thanks): US BANKRUPTCY COURT, SOUTHERN DISTRICT OF NEW YORK, In re: FRONTIER COMMUNICATIONS CORPORATION, et al., Reorganized Debtors, Case No. 20-22476 (MG), March 27, 2024.

See especially:

– sub III.A, p. 13 ff., “Secondary Liability for Copyright Infringement Is a Well-Established Doctrine”;

– sub III.B, “Purpose and Effect of DMCA § 512”, p. 24 ff.;

– sub III.D, “Twitter Did Not Silently Rewrite Well-Established Jurisprudence on Secondary Liability for Copyright Infringement”, p. 31 ff., on the relationship between copyright-infringement rules and the Supreme Court’s important decision Twitter, Inc. v. Taamneh, 598 U.S. 471 (2023).

From the latter I quote two passages from the opening Syllabus:

– the plaintiffs’ cause of action against Twitter (and Facebook and Google):

<<Plaintiffs allege that defendants aided and abetted ISIS in the following ways: First, they provided social-media platforms, which are generally available to the internet-using public; ISIS was able to upload content to those platforms and connect with third parties on them. Second, defendants’ recommendation algorithms matched ISIS-related content to users most likely to be interested in that content. And, third, defendants knew that ISIS was uploading this content but took insufficient steps to ensure that its content was removed. Plaintiffs do not allege that ISIS or Masharipov used defendants’ platforms to plan or coordinate the Reina attack. Nor do plaintiffs allege that defendants gave ISIS any special treatment or words of encouragement. Nor is there reason to think that defendants carefully screened any content before allowing users to upload it onto their platforms>>

– SCOTUS’s answer:

<<None of plaintiffs’ allegations suggest that defendants culpably “associate[d themselves] with” the Reina attack, “participate[d] in it as something that [they] wishe[d] to bring about,” or sought “by [their] action to make it succeed.” Nye & Nissen, 336 U. S., at 619 (internal quotation marks omitted). Defendants’ mere creation of their media platforms is no more culpable than the creation of email, cell phones, or the internet generally. And defendants’ recommendation algorithms are merely part of the infrastructure through which all the content on their platforms is filtered. Moreover, the algorithms have been presented as agnostic as to the nature of the content. At bottom, the allegations here rest less on affirmative misconduct and more on passive nonfeasance. To impose aiding-and-abetting liability for passive nonfeasance, plaintiffs must make a strong showing of assistance and scienter. Plaintiffs fail to do so.

First, the relationship between defendants and the Reina attack is highly attenuated. Plaintiffs make no allegations that defendants’ relationship with ISIS was significantly different from their arm’s length, passive, and largely indifferent relationship with most users. And their relationship with the Reina attack is even further removed, given the lack of allegations connecting the Reina attack with ISIS’ use of these platforms. Second, plaintiffs provide no reason to think that defendants were consciously trying to help or otherwise participate in the Reina attack, and they point to no actions that would normally support an aiding-and-abetting claim.

Plaintiffs’ complaint rests heavily on defendants’ failure to act; yet plaintiffs identify no duty that would require defendants or other communication-providing services to terminate customers after discovering that the customers were using the service for illicit ends. Even if such a duty existed in this case, it would not transform defendants’ distant inaction into knowing and substantial assistance that could establish aiding and abetting the Reina attack. And the expansive scope of plaintiffs’ claims would necessarily hold defendants liable as having aided and abetted each and every ISIS terrorist act committed anywhere in the world. The allegations plaintiffs make here are not the type of pervasive, systemic, and culpable assistance to a series of terrorist activities that could be described as aiding and abetting each terrorist act by ISIS.

In this case, the failure to allege that the platforms here do more than transmit information by billions of people—most of whom use the platforms for interactions that once took place via mail, on the phone, or in public areas—is insufficient to state a claim that defendants knowingly gave substantial assistance and thereby aided and abetted ISIS’ acts. A contrary conclusion would effectively hold any sort of communications provider liable for any sort of wrongdoing merely for knowing that the wrongdoers were using its services and failing to stop them. That would run roughshod over the typical limits on tort liability and unmoor aiding and abetting from culpability>>.

The provision the platforms allegedly violated was 18 U.S. Code § 2333(d)(2), which provides: <<(2) Liability.— In an action under subsection (a) for an injury arising from an act of international terrorism committed, planned, or authorized by an organization that had been designated as a foreign terrorist organization under section 219 of the Immigration and Nationality Act (8 U.S.C. 1189), as of the date on which such act of international terrorism was committed, planned, or authorized, liability may be asserted as to any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed such an act of international terrorism>>.

Is Apple liable for the harm caused by one of its “malicious apps”, or is it protected by the § 230 CDA safe harbor?

Eric Goldman reports on, and links to, the Ninth Circuit’s appellate decision of March 27, 2024, No. 22-16514, Hadona Diep v. Apple.

The court dismissed “counts I (violation of the Computer Fraud and Abuse Act), II (violation of the Electronic Communications Privacy Act), III (violation of California’s Consumer Privacy Act), VI (violation of Maryland’s Wiretapping and Electronic Surveillance Act), VII (additional violation of Maryland’s Wiretapping and Electronic Surveillance Act), VIII (violation of Maryland’s Personal Information Protection Act), and X (negligence) of the complaint”.

By contrast, § 230 CDA does not shield against claims based on state consumer-protection statutes, nor against others such as unfair competition:

<<The claims asserted in counts IV (violation of California’s Unfair Competition Law (“UCL”)), V (violation of California’s Legal Remedies Act (“CLRA”)), and IX (liability under Maryland’s Consumer Protection Act (“MCPA”)) are not barred by the CDA. These state law consumer protection claims do not arise from Apple’s publication decisions as to whether to authorize Toast Plus. Rather, these claims seek to hold Apple liable for its own representations concerning the App Store and Apple’s process for reviewing the applications available there. Because Apple is the primary “information content provider” with respect to those statements, section 230(c)(1) does not apply. See Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124–25 (9th Cir. 2003) (examining which party “provide[d] the essential published content”)>>.

Nor are these claims barred by limitation-of-liability clauses (a point, however, of no interest here).

The § 230 CDA safe harbor for YouTube in a case of fraudulent unauthorized access to others’ accounts

An interesting dispute on the applicability of the US § 230 CDA to YouTube for alleged computer intrusions aimed at defrauding the holders of others’ accounts (including Steve Wozniak’s).

The decision of the Sixth Appellate District has now arrived: March 15, 2024, H050042 (Santa Clara County Super. Ct. No. 20CV370338), Wozniak et al. v. YouTube, which essentially affirms the dismissal at first instance (leaving the plaintiffs only a small window open).

The facts are of interest to practitioners. The plaintiffs had diligently tried to get around the safe harbor, arguing in various ways that the conduct charged to YouTube was its own, rather than mere editorial handling of third-party information (to which the safe harbor applies). The court (or rather the plaintiffs themselves) had grouped these arguments into six categories:

a. Negligent security claim

b. Negligent design claim

c. Negligent failure to warn claim

d. Claims based on knowingly selling and delivering scam ads and scam video recommendations to vulnerable users

e. Claims based on wrongful disclosure and misuse of plaintiffs’ personal information

f. Claims based on defendants’ creation or development of information materially contributing to scam ads and videos

But for the court these cannot be treated as YouTube’s own conduct for purposes of § 230 CDA; they remain the conduct of the third-party fraudsters (sub ii, Material contributions, p. 34 ff.).

The court does, however, grant partial leave to amend (p. 36).

The theories pleaded by the plaintiffs are useful in our system too, because the problem is substantially similar: determining whether harmful published information can be attributed only to the third party or also to the platform (Art. 6 of the EU DSA, Regulation 2022/2065).

(news of, and link to, the decision from Eric Goldman’s blog)

Trib. Roma on provider liability for materials uploaded by users

Eleonora Rosati on IPKat reports on (and links to) two 2023 decisions of Trib. Roma’s specialized business division on this subject, both between RTI (plaintiff) and a file-hosting platform (Vimeo and VKontakte).

The claims are dismissed, in light of the Court of Justice’s 2021 Cyando precedent (YouTube and Cyando, joined cases C-682/18 and C-683/18).

They are:

Trib. Roma, April 7, 2023, no. 5700/2023, RG 59780/2017, reporting judge Picaro, RTI v. Vimeo;

Trib. Roma, October 12, 2023, no. 14531/2023, RG 4341/2027, reporting judge Cavaliere, RTI v. VKontakte.

For Rosati, this reading of the European precedent is wrong.

Here I will only point out that (i) as a matter of Italian civil law, the distinction between primary and secondary/indirect liability has no legal standing where unlawful materials are uploaded by users, and (ii) the safe harbor covers any liability arising from them.

The most important point is that, for the provider to lose the safe harbor, it must have had knowledge of the specific infringements sued upon, not merely of their generic possibility.

A separate question is the level of detail required of the rightsholder’s notice to the provider. For the court it must be high: and that is correct, given the principle onus probandi incumbit ei qui dicit, a procedural rule that also applies to the notice at issue (nor is there any reason to burden the provider with laborious and uncertain activities, unless, for instance through technological progress, they are no longer such).

Access provider liable for its users’ copyright infringements: not vicariously, but contributorily

So holds the analytical and interesting Sony, Arista, Warner Bros. et al. v. Cox Communications, Fourth Circuit Court of Appeals, No. 21-1168, February 20, 2024, brought by the majors of the culture industry against an access provider:

<<A defendant may be held vicariously liable for a third party’s copyright infringement if the defendant “[1] profits directly from the infringement and [2] has a right and ability to supervise the direct infringer.”>>

– I –

Vicarious liability:

<<As these cases illustrate, the crux of the financial benefit inquiry is whether a causal relationship exists between the infringing activity and a financial benefit to the defendant.
If copyright infringement draws customers to the defendant’s service or incentivizes them to pay more for their service, that financial benefit may be profit from infringement. See, e.g., EMI Christian Music Grp., Inc. v. MP3tunes, LLC, 844 F.3d 79, 99 (2d Cir. 2016).
But in every case, the financial benefit to the defendant must flow directly from the third party’s acts of infringement to establish vicarious liability. See Grokster, 545 U.S. at 930 & n.9; Nelson-Salabes, 284 F.3d at 513.
To prove vicarious liability, therefore, Sony had to show that Cox profited from its subscribers’ infringing download and distribution of Plaintiffs’ copyrighted songs. It did not.
The district court thought it was enough that Cox repeatedly declined to terminate infringing subscribers’ internet service in order to continue collecting their monthly fees.
Evidence showed that, when deciding whether to terminate a subscriber for repeat infringement, Cox considered the subscriber’s monthly payments. See, e.g., J.A. 1499 (“This customer will likely fail again, but let’s give him one more chan[c]e. [H]e pays 317.63 a month.”). To the district court, this demonstrated the requisite connection between the customers’ continued infringement and Cox’s financial gain.
We disagree. The continued payment of monthly fees for internet service, even by repeat infringers, was not a financial benefit flowing directly from the copyright infringement itself. As Cox points out, subscribers paid a flat monthly fee for their internet access no matter what they did online. Indeed, Cox would receive the same monthly fees even if all of its subscribers stopped infringing. Cox’s financial interest in retaining subscriptions to its internet service did not give it a financial interest in its subscribers’ myriad online activities, whether acts of copyright infringement or any other unlawful acts.
An internet service provider would necessarily lose money if it canceled subscriptions, but that demonstrates only that the service provider profits directly from the sale of internet access. Vicarious liability, on the other hand, demands proof that the defendant profits directly from the acts of infringement for which it is being held accountable>>

– II –

<<We turn next to contributory infringement. Under this theory, “‘one who, with knowledge of the infringing activity, induces, causes or materially contributes to the infringing conduct of another’ is liable for the infringement, too.”>>

<<The evidence at trial, viewed in the light most favorable to Sony, showed more than mere failure to prevent infringement. The jury saw evidence that Cox knew of specific instances of repeat copyright infringement occurring on its network, that Cox traced those instances to specific users, and that Cox chose to continue providing monthly internet access to those users despite believing the online infringement would continue because it wanted to avoid losing revenue. Sony presented extensive evidence about Cox’s increasingly liberal policies and procedures for responding to reported infringement on its network, which Sony characterized as ensuring that infringement would recur. And the jury reasonably could have interpreted internal Cox emails and chats as displaying contempt for laws intended to curb online infringement. To be sure, Cox’s anti-infringement efforts and its claimed success at deterring repeat infringement are also in the record. But we do not weigh the evidence at this juncture. The evidence was sufficient to support a finding that Cox materially contributed to copyright infringement occurring on its network and that its conduct was culpable. Therefore we may not disturb the jury’s verdict finding Cox liable for contributory copyright infringement>>

(news and link from Eric Goldman’s blog)

The Rome Court of Appeal affirms Vimeo’s liability as an active hosting provider in its litigation with RTI

Attention to the topic of provider liability for wrongs committed by users through its platform is already almost fading.

Francesca Santoro on Altalex reports on Trib. Roma no. 6532/2023 of October 12, 2023, RG no. 5367/2019, Vimeo v. RTI, reporting judge Tucci.

The court follows the prevailing approach, endorsed by Cass. no. 7708 of 2019, under which the so-called active hosting provider cannot invoke the exemption.

This, however, is unconvincing: joint liability is governed by Art. 2055 of the Italian Civil Code, under which fault must relate to the individual works sued upon. Unless one opens the door to dolo eventuale (recklessness) or colpa con previsione (conscious negligence): but that must be argued expressly.

More interesting is the mention of fingerprinting techniques, to be framed precisely within tort liability. The catch is that, for the exemption under Art. 16 of Legislative Decree 70/2003 to apply, one must prove the provider was “actually aware”: and it cannot be said that, before the cease-and-desist letter, Vimeo was, or rather that, by adopting this or that measure, it certainly would have been (measures that could be adopted, but up to what level of cost? A difficult issue: it is like the security measures imposed on banks. Except that banks risk breach of contract toward their account holders/contractual counterparties, while Vimeo risks tort liability: and is there a difference between the two cases as to the cost threshold, if, in the contractual scenario, nothing was agreed?).
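To make the “this or that measure” concrete: fingerprinting matches uploads against a rightsholder’s reference works. Below is a minimal sketch of the idea, assuming a naive byte-exact windowed hash; the function names and threshold are invented for illustration, and real systems (perceptual or acoustic fingerprinting) are robust to re-encoding, which this toy is not.

```python
import hashlib

def fingerprints(data: bytes, window: int = 8) -> set[str]:
    """Hash every fixed-size window of the content, yielding a crude
    fingerprint set (byte-exact, so only a toy illustration)."""
    return {
        hashlib.sha256(data[i:i + window]).hexdigest()
        for i in range(0, max(len(data) - window + 1, 1))
    }

def match_score(upload: bytes, reference: bytes) -> float:
    """Fraction of the upload's windows that also occur in the
    rightsholder's reference work: a naive similarity measure."""
    up, ref = fingerprints(upload), fingerprints(reference)
    return len(up & ref) / len(up) if up else 0.0

if __name__ == "__main__":
    work = b"original broadcast segment " * 20
    clone = work[40:400]                       # verbatim excerpt of the work
    other = b"unrelated home video footage " * 20
    # A platform could flag uploads above a threshold for human
    # review rather than auto-removing them.
    print(match_score(clone, work) > 0.9)
    print(match_score(other, work) < 0.1)
```

The cost question in the text maps onto the threshold and index size here: the lower the flagging threshold and the larger the reference index, the more review work the platform buys itself.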

Gmail’s (filtered?) service can enjoy the § 230 CDA safe harbor

Eric Goldman reports on (and links to the text of) US District Court, Eastern District of California, August 24, 2023, No. 2:22-cv-01904-DJC-JBP, Republican National Committee v. Google.

The right-wing political group accuses Google of unlawfully filtering its emails.

Google successfully raises in defense the safe harbor of § 230(c)(2)(A) (“No provider or user of an interactive computer service shall be held liable on account of— (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”).

The critical point is establishing the requirements of “objectionable” material and of good faith.

As Eric Goldman observes, the court’s policy consideration is also interesting:

<<Section 230 also addresses Congress’s concern with the growth of unsolicited commercial electronic mail, and the fact that the volume of such mail can make email in general less usable as articulated in the CAN-SPAM Act. See 15 U.S.C. § 7701(a)(4), (6). Permitting suits to go forward against a service provider based on the over-filtering of mass marketing emails would discourage providers from offering spam filters or significantly decrease the number of emails segregated. It would also place courts in the business of micromanaging content providers’ filtering systems in contravention of Congress’s directive that it be the provider or user that determines what is objectionable (subject to a provider acting in bad faith). See 47 U.S.C. § 230(c)(2)(A) (providing no civil liability for “any action voluntarily taken in good faith to restrict access to . . . material that the provider or user considers to be . . . objectionable” (emphasis added)). This concern is exemplified by the fact that the study on which the RNC relies to show bad faith states that each of the three email systems had some sort of right- or left-leaning bias. (ECF No. 30-10 at 9 (“all [spam filtering algorithms] exhibited political biases in the months leading up to the 2020 US elections”).) While Google’s bias was greater than that of Yahoo or Outlook, the RNC offers no limiting principle as to how much “bias” is permissible, if any. Moreover, the study authors note that reducing the filters’ political biases “is not an easy problem to solve. Attempts to reduce the biases of [spam filtering algorithms] may inadvertently affect their efficacy.” (Id.) This is precisely the impact Congress desired to avoid in enacting the Communications Decency Act, and reinforces the conclusion that section 230 bars this suit>>.
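The efficacy/over-filtering trade-off the court alludes to can be pictured with a toy filter. Everything below (the blocklist, the threshold, the function names) is invented for illustration and is in no way how Gmail actually works: the point is only that where the segregation threshold is set decides both how much spam is caught and how much legitimate mail is wrongly filtered.

```python
# Hypothetical toy spam filter: score a message by the fraction of
# its words appearing on a blocklist, segregate it above a threshold.
BLOCKLIST = {"free", "winner", "donate", "urgent", "prize"}

def spam_score(message: str) -> float:
    """Fraction of the message's words found on the blocklist."""
    words = message.lower().split()
    if not words:
        return 0.0
    return sum(w.strip(".,!:") in BLOCKLIST for w in words) / len(words)

def is_segregated(message: str, threshold: float = 0.2) -> bool:
    # Lowering the threshold catches more spam but also more
    # legitimate mail: the trade-off behind "over-filtering" claims.
    return spam_score(message) >= threshold

print(is_segregated("URGENT: claim your FREE prize, winner!"))
print(is_segregated("Minutes of yesterday's board meeting"))
```

Judicially second-guessing each threshold choice is exactly the micromanagement of filtering systems the court says § 230(c)(2)(A) forecloses.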

The § 230 CDA safe harbor in an action against the short-term-rental platform VRBO for damages from a fire in the rented house

Eric Goldman reports on the case decided by the District Court for the Eastern District of New York, September 29, 2023, 22-cv-7081 (GRB)(ARL), Eiener v. Miller and VRBO.

The claim against VRBO is dismissed in limine under the cited safe harbor.

VRBO is a hosting provider.

The curiosity lies in the fact that here the harm is only indirectly connected to the platform, since it flows from the property advertised there, not from the listing information as such.

Or better: the plaintiffs allege that the online information was wrong and/or insufficient.

Now, in our system, does such a fact pattern fall within the rules on provider liability, today governed by Arts. 4 ff. of the DSA, EU Regulation 2022/2065?

The suspension of a Twitter account is covered by the § 230 CDA safe harbor (with a note on EU law)

The District Court of California, August 23, 2023, Case No. 23-cv-00980-JSC, Zhang v. Twitter, dismisses the Twitter user’s claim on safe-harbor grounds.

The rule is by now so settled that one wonders how a lawyer could advise bringing such a suit (here, however, Zhang had acted “representing himself”).

Here I note only the (brief) explanation of why Twitter is not the provider of the information, so that the statutory requirement is met:

<<Second, Plaintiff seeks to hold Twitter liable for decisions regarding “information provided by another information content provider”—that is, information he and the third-party user, rather than Twitter, provided. Plaintiff’s argument Twitter is itself “an information content provider” of the third-party account holder’s content within the meaning of Section 230(f)(3) is misplaced. (Dkt. No. 53 at 21-22.) Section 230(f)(3) defines “information content provider” as “any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.”

Plaintiff appears to argue Twitter’s placement of information in “social media feeds” renders it an information content provider.

Not so. “[P]roliferation and dissemination of content does not equal creation or development of content.” Kimzey v. Yelp! Inc., 836 F.3d 1263, 1271 (9th Cir. 2016); see also Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1174 (9th Cir. 2008) (finding Section 230 immunity applies where the interactive computer service provider “is not responsible, in whole or in part, for the development of th[e] content, which comes entirely from subscribers and is passively displayed by [the interactive computer service provider].”)>>.

See the corresponding provision of the Digital Services Act, Art. 6 of EU Regulation 2022/2065, and the many decisions issued in Italy under Arts. 16 and 17 of Legislative Decree 70/2003.

(news and link to the decision from Prof. Eric Goldman’s blog)

Overcoming Facebook’s § 230 CDA safe harbor by alleging that its algorithm helped radicalize the killer

Prof. Eric Goldman notes a decision of the District of South Carolina, Charleston Division, of July 24, 2023, which, on safe-harbor grounds, dismisses a damages claim against Meta brought by relatives of a victim of the massacre carried out by Dylann Roof in 2015 at the Charleston church.

Unfortunately there is no link to the text, but there is one to the initial complaint, in which the reasons for overcoming Facebook’s passive position are well argued.

It may also be useful in our system, where, however, establishing the specific foreseeability on the platform’s part is not easy (though perhaps it is, framed as colpa con previsione, i.e. conscious negligence).