A student who lets teachers be defamed by giving his social-media credentials to friends, the authors of the posts, is not protected by the safe harbor under § 230 CDA

The Sixth Circuit appeal, No. 22-1748, JASON KUTCHINSKI v. FREELAND COMMUNITY SCHOOL DISTRICT; MATTHEW A. CAIRY and TRACI L. SMITH, decides a lawsuit brought by a student challenging the disciplinary sanction imposed on him for giving his Instagram credentials to friends, who authored posts defaming the school's teachers.

The student cannot, in fact, be characterized as a publisher or speaker, being instead a co-author of the harmful conduct:

<<Like the First, Fourth, and Ninth Circuits, we hold that when a student causes, contributes to, or affirmatively participates in harmful speech, the student bears responsibility for the harmful speech. And because H.K. contributed to the harmful speech by creating the Instagram account, granting K.L. and L.F. access to the account, joking with K.L. and L.F. about their posts, and accepting followers, he bears responsibility for the speech related to the Instagram account.
Kutchinski disagrees and makes two arguments. First, Kutchinski argues that Section 230 of the Communications Decency Act, 47 U.S.C. § 230, bars Defendants from disciplining H.K. for the posts made by K.L. and L.F. This is incorrect. Under § 230(c)(1), “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” To the extent § 230 applies, we do not treat H.K. as the “publisher or speaker” of the posts made by K.L. and L.F. Instead, we have found that H.K. contributed to the harmful speech through his own actions>>.

The court then adds:

<<Second, Kutchinski argues that disciplining H.K. for the posts emanating from the Instagram account violates H.K.’s First Amendment freedom-of-association rights. “The First Amendment . . . restricts the ability of the State to impose liability on an individual solely because of his association with another.” NAACP v. Claiborne Hardware Co., 458 U.S. 886, 918–19 (1982). “The right to associate does not lose all constitutional protection merely because some members of the group may have participated in conduct or advocated doctrine that itself is not protected.” Id. at 908. But Defendants did not discipline H.K. because he associated with K.L. and L.F. They determined that H.K. jointly participated in the wrongful behavior. Thus, Defendants did not impinge on H.K.’s freedom-of-association rights>>.

(news and link to the decision from Prof. Eric Goldman's blog)

More on the safe harbor under § 230 CDA (virtual casinos, publisher liability, and joint participation in the tort)

The Northern District of California, San Jose Division, September 2, 2022, Case No. 5:21-md-02985-EJD, Case No. 5:21-md-03001-EJD, and Case No. 5:21-cv-02777-EJD, decides a dispute brought as a putative class action against the tech majors Apple, Google, and Facebook for violations of various consumer-protection rules.

Specifically, the plaintiffs accuse them, in concert with the operators of cyber casinos, of making users lose money by actively promoting real-money gambling applications (casinos), better known as <<social casino applications>>.

The majors, needless to say, plead the safe harbor at issue.

The court goes into detail on both the virtual-casino business and the history of § 230 CDA.

What matters here, however, is the characterization of the claim as pleaded.

Indeed, only if the plaintiff treats the defendants as speakers/publishers can they enjoy the safe harbor. And of the three theories of liability advanced by the plaintiffs, one (the second) is held to allege liability for the platforms' own conduct rather than publisher liability: for that theory, therefore, the safe harbor does not apply.

Specifically: <<Unlike Plaintiffs’ first theory of liability, which attempts to hold the Platforms liable in their “editorial” function, Plaintiffs’ second theory of liability seeks to hold the Platforms liable for their own conduct. Importantly, the conduct identified by Plaintiffs in their complaints is alleged to be unlawful. As alleged, players must buy virtual chips from the Platforms app stores and may only use these chips in the casino apps. It is this sale of virtual chips that is alleged to be illegal. Plaintiffs neither take issue with the Platforms’ universal 30% cut, nor the Platforms’ virtual currency sale. Plaintiffs only assert that the Platforms role as a “bookie” is illegal. Plaintiffs therefore do not attempt to treat the Platforms as “the publisher or speaker” of third-party content, but rather seek to hold the Platforms responsible for their own illegal conduct—the sale of gambling chips. Compare Taylor v. Apple, Inc., No. 46 Civ. Case 3:20-cv-03906-RS (N.D. Cal. Mar. 19, 2021) (“Plaintiffs’ theory is that Apple is distributing games that are effectively slot machines—illegal under the California Penal Code. . . . Plaintiffs are seeking to hold Apple liable for selling allegedly illegal gaming devices, not for publishing or speaking information.”), with Coffee v. Google, LLC, 2022 WL 94986, at *6 (N.D. Cal. Jan. 10, 2022) (“In the present case, Google’s conduct in processing sales of virtual currency is not alleged to be illegal. To the contrary, the [Complaint] states that ‘[v]irtual currency is a type of unregulated digital currency that is only available in electronic form.’ If indeed the sale of Loot Boxes is illegal, the facts alleged in the FAC indicate that such illegality is committed by the developer who sells the Loot Box for virtual currency, not by Google.” (second alteration in original) (emphasis added)) ….

The Court holds that Plaintiffs’ first and third theories of liability must be dismissed under section 230. However, Plaintiffs’ second theory of liability is not barred by section 230. The Court thus GRANTS in part and DENIES in part Defendants’ respective motions to dismiss.>>

It is a very interesting question of civil-law theory to determine when there is vicarious liability, when liability as a joint participant in another's act, and when merely publisher liability. It is interesting also because it underlies the harmonized EU regime of provider liability.

(news and link to the decision from Prof. Eric Goldman's blog)

Safe harbor under § 230 CDA for failure to warn and failure to remove sensitive material? Yes.

A mother who had noticed sexually suggestive images of her child uploaded to TikTok sues the platform for the following wrongs: it <<did not put any warning on any of the videos claiming they might contain sensitive material; did not remove any of the videos from its platform; did not report the videos to any child abuse hotline; did not sanction, prevent, or discourage the videos in any way from being viewed, shared, downloaded or disbursed in any other way; and “failed to act on their own policies and procedures along with State and Federal Statutes and Regulations”>>.

The Northern District of Illinois, Western Division, February 28, 2022, Case No. 21 C 50129, Day v. TikTok, upholds the § 230 CDA safe-harbor defense raised by the platform (citing the well-known 2008 Craigslist precedent):

<<“What § 230(c)(1) says is that an online information system must not ‘be treated as the publisher or speaker of any information provided by’ someone else.” Chicago Lawyers’ Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 671 (7th Cir. 2008).

In Chicago Lawyers’, plaintiff sought to hold Craigslist liable for postings made by others on its platform that violated the anti-discrimination in advertising provision of the Fair Housing Act (42 U.S.C. § 3604(c)). The court held 47 U.S.C. § 230(c)(1) precluded Craigslist from being liable for the offending postings because “[i]t is not the author of the ads and could not be treated as the ‘speaker’ of the posters’ words, given § 230(c)(1).” Id. The court rejected plaintiff’s argument that Craigslist could be liable as one who caused the offending post to be made stating “[a]n interactive computer service ‘causes’ postings only in the sense of providing a place where people can post.” Id. “Nothing in the service craigslist offers induces anyone to post any particular listing or express a preference for discrimination.” Id. “If craigslist ‘causes’ the discriminatory notices, then, so do phone companies and courier services (and, for that matter, the firms that make the computers and software that owners use to post their notices online), yet no one could think that Microsoft and Dell are liable for ‘causing’ discriminatory advertisements.” Id. at 672. The court concluded the opinion by stating that plaintiff could use the postings on Craigslist to identify targets to investigate and “assemble a list of names to send to the Attorney General for prosecution. But given § 230(c)(1) it cannot sue the messenger just because the message reveals a third party’s plan to engage in unlawful discrimination.”>>

The plaintiff's claim in this case, then, <<does not allege defendant created or posted the videos. It only alleges defendant allowed and did not timely remove the videos posted by someone else. This is clearly a complaint about “information provided by another information content provider” for which defendant cannot be held liable by the terms of Section 230(c)(1)>>.

It is hard to fault the court, in light of the wording of the provision invoked by TikTok.

(news and link to the decision from Prof. Eric Goldman's blog)

More on the safe harbor under § 230 CDA and Twitter

A model (M.) discovers some intimate photos of hers published on Twitter (T.) by a publishing outfit (E.) in that industry.

She therefore asks T. to remove the photos and the tweets and to suspend the account.

T. grants her request only on the first point.

M. then sues T. and E., asserting: <<(1) copyright infringement; (2) a violation of FOSTA-SESTA, 18 U.S.C. 1598 (named for the Allow States and Victims to Fight Online Sex Trafficking Act and Stop Online Sex Trafficking Act bills); (3) a violation of the right of publicity under Cal. Civ. Code § 3344; (4) false advertising under the Lanham Act; (5) false light invasion of privacy; (6) defamation, a violation under Cal. Civ. Code § 44, et seq.; (7) fraud in violation of California’s Unfair Competition Law, Cal. Bus. & Prof. Code § 17200 et seq.; (8) negligent and intentional infliction of emotional distress; and (9) unjust enrichment>>.

The decision is US District Court, Central District of California, February 19, 2021, Case No. CV 20-10434-GW-JEMx, Morton v. Twitter et al.

Needless to say, T. raises the § 230 CDA defense against all the claims except the copyright claim.

The requirement that the plaintiff treat the defendant as a publisher or speaker is always problematic: what counts is substance, not the label used by the plaintiff. That is, the claim must be characterized by the court, p. 5.

M. tries to argue that E. is not a third party but an affiliate of T. The court rejects the argument, in effect without giving reasons, pp. 5-6. Not least because it would have been more appropriate to address the point under the requirement that the material come from a "third party", rather than under whether the defendant is treated as a publisher.

The most interesting point is whether § 230 covers the contract claim, pp. 7 ff.

M. argues it does not: but in vain, because the court dismisses under the safe harbor, for two reasons, pp. 7-8:

First, because M. did not identify a contractual clause obligating T. to suspend offending accounts: the clause exists, but it is merely aspirational, not binding.

Second, because the request to suspend the account implies an editorial decision, so the defense applies: <<“But removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove.” Barnes, 570 F.3d at 1103 (holding that Section 230 barred a negligent-undertaking claim because “the duty that Barnes claims Yahoo violated derives from Yahoo’s conduct as a publisher – the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles”)>>, p. 8.

This is the theoretically most interesting point: the challenged conduct constitutes, at one and the same time, both contractual (non-)performance and an editorial decision. The two characterizations overlap.

By contrast, the reliance claim (promissory estoppel) is not barred by the defense, so on this point alone M. prevails: <<This is because liability for promissory estoppel is not necessarily for behavior that is identical to publishing or speaking (e.g., publishing defamatory material in the form of SpyIRL’s tweets or failing to remove those tweets and suspend the account). “[P]romising . . . is not synonymous with the performance of the action promised. . . . one can, and often does, promise to do something without actually doing it at the same time.” Barnes, 570 F.3d at 1107. On this theory, “contract liability would come not from [Twitter]’s publishing conduct, but from [Twitter]’s manifest intention to be legally obligated to do something, which happens to be removal of material from publication.” Id. That manifested intention “generates a legal duty distinct from the conduct at hand, be it the conduct of a publisher, of a doctor, or of an overzealous uncle.” Id>>

(decisions and links from Eric Goldman's blog)

Snapchat's “Speed Filter” between negligence and the safe harbor under § 230 CDA

Snapchat's Speed Filter feature lets users record the speed their vehicle is traveling and overlay it on a photograph (for subsequent posting).

Doing so while driving is, of course, extremely dangerous.

In an accident caused precisely by this practice and by high speed, the injured party sues the other driver and Snapchat (hereinafter: S.) for negligence.

S. defends itself in part by invoking the safe harbor under § 230 CDA, the only issue considered here.

At first instance the defense is upheld: see IN THE STATE COURT OF SPALDING COUNTY, STATE OF GEORGIA, January 20, 2017, File No. 16-SV-89, Maynard v. McGee and Snapchat.

The judge's reasoning is hard to follow. Section 230 requires that the information come from a third party and that the provider be treated as its "publisher or speaker": but neither requirement is met here.

Indeed, on appeal this part of the judgment is reversed.

The Court of Appeals of Georgia clarifies that the cases invoked to claim the benefit of § 230 CDA (Barnes, Fields, Backpage) all concern harm caused by third-party users' posts. In the case at hand, by contrast, <<there was no third-party content uploaded to Snapchat at the time of the accident and the Maynards do not seek to hold Snapchat liable for publishing a Snap by a third-party that utilized the Speed Filter. Rather, the Maynards seek to hold Snapchat liable for its own conduct, principally for the creation of the Speed Filter and its failure to warn users that the Speed Filter could encourage speeding and unsafe driving practices. Accordingly, we hold that CDA immunity does not apply because there was no third-party user content published>> (Court of Appeals of Georgia, June 5, 2018, A18A0749, MAYNARD et al. v. SNAPCHAT, INC., pp. 9-10). An accurate statement.

Back at first instance, the case proceeds only on the tort-negligence claim: according to the injured party, S. should have foreseen the dangerousness of the service offered to its users and warned them adequately (in effect, a defective-product theory).

Back on appeal on the negligence claim alone, the Court holds that S. is not liable since, on the one hand, <<there is no “general legal duty to all the world not to subject others to an unreasonable risk of harm”>>, and, on the other, there is no special relationship justifying a duty of protection, p. 6. In short, <<Georgia law does not impose a general duty to prevent people from committing torts while misusing a manufacturer’s product. Although manufacturers have “a duty to exercise reasonable care in manufacturing its products so as to make products that are reasonably safe for intended or foreseeable uses,” this duty does not extend to the intentional (not accidental) misuse of the product in a tortious way by a third party>> (Court of Appeals of Georgia, October 30, 2020, No. 20A1218, MAYNARD et al. v. SNAPCHAT, INC., DO-044, p. 7).

There is, however, contrary case law on whether § 230 CDA can be invoked. For a decision that, in an identical scenario (the alleged contribution of Snapchat's Speed Filter to causing a road accident), explains that invocation in detail, see US District Court, Central District of California, February 25, 2020, Carly Lemmon v. Snapchat, No. CV 19-4504-MWF (KSx), at III.B.

This Court follows the view that <<other courts have determined that CDA immunity applies where the website merely provides a framework that could be utilized for proper or improper purposes by the user. See, e.g., Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1125 (9th Cir. 2003) (CDA immunity applies to a dating website even though some of the content was formulated in response to the website’s questionnaire because “the selection of the content was left exclusively to the user”) (emphasis added); Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1197 (N.D. Cal. 2009) (CDA immunity applies where the plaintiff alleged that Google’s suggestion tool, which used an algorithm to suggest specific keywords to advertisers, caused advertisers to post unlawful advertisements more frequently)>>, p. 11.

And applying this to the specific case, it concludes that <<the Speed Filter is a neutral tool, which can be utilized for both proper and improper purposes. The Speed Filter is essentially a speedometer tool, which allows Defendant’s users to capture and share their speeds with others. The Speed Filter can be used at low or high speeds, and Defendant does not require any user to Snap a high speed. While Plaintiffs allege that some users believe that they will be rewarded by recording a 100-MPH or faster Snap, they do not allege that Snap actually rewards its users for doing so. In fact, when a user first opens the Speed Filter, a warning appears on the app stating “Please, DO NOT Snap and drive.” (RJN, Ex. A). When a user’s speed exceeds 15 m.p.h., another similar warning appears on the app. (RJN, Ex. B). While a user might use the Speed Filter to Snap a high number, the selection of this content (or number) appears to be entirely left to the user, and based on the warnings, capturing the speed while driving is in fact discouraged by Defendant.>>, p. 11.

The point, however, is that § 230 CDA requires that liability arise from information provided by a party other than the internet provider invoking the safe harbor: and that is not the case in actions based on the use of the Speed Filter.

News of these cases taken from Eric Goldman's blog.

Twitter is exempt from defamation liability, enjoying the safe harbor under the US § 230 CDA

Another decision exempting Twitter from defamation liability on the basis of § 230 of the Communications Decency Act (CDA).

It is US DISTRICT COURT, EASTERN DISTRICT OF NEW YORK, September 17, 2020, MAYER CHAIM BRIKMAN (RABBI) et al. v. Twitter et al., Case 1:19-cv-05143-RPK-CLP. The news comes from Prof. Eric Goldman's ever up-to-date blog.

A rabbi had sued Twitter (and a user who had retweeted) for damages and an injunction, claiming that Twitter had hosted, and failed to remove, a fake account of the synagogue containing offensive posts, and was therefore liable for the defamatory harm.

Specifically: <<they claim that through “actions and/or inactions,” Twitter has “knowingly and with malice . . . allowed and helped non-defendant owners of Twitter handle @KnesesG, to abuse, harras [sic], bully, intimidate, [and] defame” plaintiffs. Id. ¶¶ 10-12. Plaintiffs aver that by allowing @KnesesG to use its platform in this way, Twitter has committed “Libel Per Se” under the laws of the State of New York. Ibid. As relevant here, they seek an award of damages and injunctive relief that would prohibit Twitter from “publishing any statements constituting defamation/libel . . . in relation to plaintiffs.”>>.

The claim is dismissed on the basis of the safe harbor in § 230 CDA.

Let us look at the key passage.

The judge begins by recalling that the elements of the defense are the usual three: i) the defendant is an internet provider; ii) the information at issue comes from a third party; iii) the claim treats the defendant “as the publisher or speaker of that information”, i.e., as a publisher.

The first two being undisputedly present, let us turn to the third point, here the most important: the plaintiff's framing of the defendant as a publisher.

<<Finally, plaintiffs’ claims would hold Twitter liable as the publisher or speaker of the information provided by @KnesesG. [NB: the fake synagogue account containing the offensive posts] Plaintiffs allege that Twitter has “allowed and helped” @KnesesG to defame plaintiffs by hosting its tweets on its platform … or by refusing to remove those tweets when plaintiffs reported them … Either theory would amount to holding Twitter liable as the “publisher or speaker” of “information provided by another information content provider.” See 47 U.S.C. § 230(c)(1). Making information public and distributing it to interested parties are quintessential acts of publishing. See Facebook, 934 F.3d at 65-68.

Plaintiffs’ theory of liability would “eviscerate Section 230(c)(1)” because it would hold Twitter liable “simply [for] organizing and displaying content exclusively provided by third parties.” … Similarly, holding Twitter liable for failing to remove the tweets plaintiffs find objectionable would also hold Twitter liable based on its role as a publisher of those tweets because “[d]eciding whether or not to remove content . . . falls squarely within [the] exercise of a publisher’s traditional role and is therefore subject to the CDA’s broad immunity.” Murawski v. Pataki, 514 F. Supp. 2d 577, 591 (S.D.N.Y. 2007); see Ricci, 781 F.3d at 28 (finding allegations that defendant “refused to remove” allegedly defamatory content could not withstand immunity under the CDA).

Plaintiff’s suggestion that Twitter aided and abetted defamation “[m]erely [by] arranging and displaying others’ content” on its platform fails to overcome Twitter’s immunity under the CDA because such activity “is not enough to hold [Twitter] responsible as the ‘developer’ or ‘creator’ of that content.” … Instead, to impose liability on Twitter as a developer or creator of third-party content—rather than as a publisher of it—Twitter must have “directly and materially contributed to what made the content itself unlawful.” Id. at 68 (citation and internal quotation marks omitted); see, e.g., id. at 69-71 (finding that Facebook could not be held liable for posts published by Hamas because it neither edited nor suggested edits to those posts); Kimzey v. Yelp! Inc., 836 F.3d 1263, 1269-70 (9th Cir. 2016) (finding that Yelp was not liable for defamation because it did “absolutely nothing to enhance the defamatory sting of the message beyond the words offered by the user”) (citation and internal quotation marks omitted); Nemet Chevrolet, Ltd. v. Consumeraffairs.com, Inc., 591 F.3d 250, 257 (4th Cir. 2009) (rejecting plaintiffs’ claims because they “[did] not show, or even intimate” that the defendant “contributed to the allegedly fraudulent nature of the comments at issue”) (citation and internal quotation marks omitted); see also Klayman v. Zuckerberg, 753 F.3d 1354, 1358 (D.C. Cir. 2014) (“[A] website does not create or develop content when it merely provides a neutral means by which third parties can post information of their own independent choosing online.”).

Plaintiffs have not alleged that Twitter contributed to the defamatory content of the tweets at issue and thus have pleaded no basis upon which it can be held liable as the creator or developer of those tweets. See Goddard v. Google, Inc., No. 08-cv-2738 (JF), 2008 WL 5245490, at *7 (N.D. Cal. Dec. 17, 2008) (rejecting plaintiff’s aiding and abetting claims as “simply inconsistent with § 230” because plaintiff had made “no allegations . . . that Google ‘developed’ the offending ads in any respect”); cf. LeadClick, 838 F.3d at 176 (finding defendant was not entitled to immunity under the CDA because it “participated in the development of the deceptive content posted on fake news pages”).

Accordingly, plaintiffs’ defamation claims against Twitter also satisfy the final requirement for CDA preemption: the claims seek to hold Twitter, an interactive computer service, liable as the publisher of information provided by another information content provider, @KnesesG>>.

It is interesting that the complaint challenged not only the failure to remove the posts but also their mere hosting: perhaps mixing facts relevant to the loss of the safe harbor (liability framed negatively) with facts relevant to affirmative liability.