Safe harbour under § 230 CDA for damages caused by an informational database on private individuals offered for sale?

The Fourth Circuit says no: appeal No. 21-1678, TYRONE HENDERSON, SR et al. v. THE SOURCE FOR PUBLIC DATA, L.P. (on appeal from the Eastern District of Virginia).

The defendants' activity:

<< Public Data’s business is providing third parties with information about individuals. Plaintiffs allege that it involves four steps.

First, Public Data acquires public records, such as criminal and civil records, voting records, driving information, and professional licensing. These records come from various local, state, and federal authorities (and other businesses that have already collected those records).

Second, Public Data “parses” the collected information and puts it into a proprietary format. This can include taking steps to “reformat and alter” the raw documents, putting them “into a layout or presentation [Public Data] believe[s] is more user-friendly.” J.A. 16. For criminal records, Public Data “distill[s]” the data subject’s criminal history into “glib statements,” “strip[s] out or suppress[es] all identifying information relating to the charges,” and then “replace[s] this information with [its] own internally created summaries of the charges, bereft of any detail.” J.A. 30.

Third, Public Data creates a database of all this information which it then “publishes” on the website PublicData.com. Public Data does not look for or fix inaccuracies in the database, and the website disclaims any responsibility for inaccurate information. Public Data also does not respond to requests to correct or remove inaccurate information from the database.

Fourth, Public Data sells access to the database, “disbursing [the] information . . . for the purpose of furnishing consumer reports to third parties.” J.A. 19. All things told, Plaintiffs allege that Public Data sells 50 million consumer searches and reports per year. Public Data knows that traffic includes some buyers using its data and reports to check creditworthiness and some performing background checks for employment purposes. >>

The damages claim rests on violations of several provisions of the Fair Credit Reporting Act (“FCRA”), some, but not all, of a data-protection nature.

The invocation of the safe harbour is rejected on two of the three statutory requirements.

The court finds that the defendant qualifies as an interactive computer service provider, but it denies both (for some claims) that the defendant was being treated as a publisher or speaker and (for other claims) that the information was provided by third parties.

A detailed analysis, though its outcome is perhaps hard to agree with.

The information was, after all, entirely third-party content; the defendant merely reformatted it into a form more useful for its own purposes (perhaps with some omissions …).

Above all, saying that the defendants were not being treated as publishers/speakers is doubtful.


(news and link to the decision from prof. Eric Goldman's blog)

Discrimination and the safe harbour under § 230 CDA on Facebook

The Eastern District of Pennsylvania, 30 September 2022, Case 2:21-cv-05325-JHS, Amro Elansari v. Meta, dismisses a claim alleging discrimination by Facebook against Islamic materials uploaded to the platform.

The claim is dismissed both on the merits, the plaintiff having proved neither discrimination nor that Facebook is a public accommodation (under the Civil Rights Act), and on the preliminary ground of the immunity under § 230 CDA.

Nothing particularly interesting or novel.

(news and link to the decision from prof. Eric Goldman's blog)

Contract action against Facebook partially covered by the safe harbour under § 230 CDA

The Northern District of California, 21 September 2022, Case No. 22-cv-02366-RS, Shared.com v. Meta, addresses whether the safe harbour under § 230 CDA can be invoked when the provider is sued in contract over editorial decisions concerning materials it did not create.

The case also involved an advertising contract between the user and Facebook, a very common arrangement at the heart of today's digital sales.

Facts: << Shared is a partnership based out of Ontario, Canada that “creates and publishes original, timely, and entertaining [online] content.” Dkt. 21 ¶ 9. In addition to its own website, Plaintiff also operated a series of Facebook pages from 2006 to 2020. During this period, Shared avers that its pages amassed approximately 25 million Facebook followers, helped in part by its substantial engagement with Facebook’s “advertising ecosystem.” This engagement occurred in two ways. First, Shared directly purchased “self-serve ads,” which helped drive traffic to Shared.com and Shared’s Facebook pages.

Second, Shared participated in a monetization program called “Instant Articles,” in which articles from Shared.com would be embedded into and operate within the Facebook news feed; Facebook would then embed ads from other businesses into those articles and give Shared a portion of the ad revenue. Shared “invested heavily in content creation” and retained personnel and software specifically to help it maximize its impact on the social media platform. Id. ¶ 19.

Friction between Shared and Facebook began in 2018. Shared states that it lost access to Instant Articles on at least three occasions between April and November of that year. Importantly, Shared received no advance notice that it would lose access. This was contrary to Shared’s averred understanding of the Facebook Audience Network Terms (“the FAN Terms”), which provide that “[Facebook] may change, withdraw, or discontinue [access to Instant Articles] in its sole discretion and [Facebook] will use good faith efforts to provide Publisher with notice of the same.” Id. ¶ 22; accord Dkt. 21-5. Shared asserts that “notice,” as provided in the FAN Terms, obliges Facebook to provide advance notice of a forthcoming loss of access, rather than after-the-fact notice. (…) >>

Facebook (F.) later suspended the account and shut down the advertising program.

Against the lawsuit, F. (or rather Meta) raises the safe harbour as a preliminary defence, framing the suspension as an editorial decision it was therefore free to make:

THE COURT: << Defendant is only partially correct. Plaintiff raises three claims involving Defendant’s decision to suspend Plaintiff’s access to its Facebook accounts and thus “terminate [its] ability to reach its followers”: one for conversion, one for breach of contract, and one for breach of the implied covenant of good faith and fair dealing. See Dkt. 21, ¶¶ 54–63, 110–12, 119. Shared claims that, contrary to the Facebook Terms of Service, Defendant suspended Shared’s access to its Facebook pages without first determining whether it had “clearly, seriously or repeatedly breached [Facebook’s] Terms or Policies >>.

And then: << At bottom, these claims seek to hold Defendant liable for its decision to remove third-party content from Facebook. This is a quintessential editorial decision of the type that is “perforce immune under section 230.” Barnes, 570 F.3d at 1102 (quoting Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157, 1170–71 (9th Cir. 2008) (en banc)). Ninth Circuit courts have reached this conclusion on numerous occasions. See, e.g., King v. Facebook, Inc., 572 F. Supp. 3d 776, 795 (N.D. Cal. 2021); Atkinson v. Facebook Inc., 20-cv-05546-RS (N.D. Cal. Dec. 7, 2020); Fed. Agency of News LLC v. Facebook, Inc., 395 F. Supp. 3d 1295, 1306–07 (N.D. Cal. 2019). To the extent Facebook’s Terms of Service outline a set of criteria for suspending accounts (i.e., when accounts have “clearly, seriously, or repeatedly” breached Facebook’s policies), this simply restates Meta’s ability to exercise editorial discretion. Such a restatement does not, thereby, waive Defendant’s section 230(c)(1) immunity. See King, 572 F. Supp. 3d at 795. Allowing Plaintiff to reframe the harm as one of lost data, rather than suspended access, would simply authorize a convenient shortcut through section 230’s robust liability limitations by way of clever pleading. Surely this cannot be what Congress would have intended. As such, these claims must be dismissed. >>

In short, the fact that the removed materials belonged to the plaintiff/contracting party (rather than to a third-party user, as in the more common defamation cases) changes nothing: the safe harbour still applies, since the requirements of § 230 CDA are met.

Computer fraud carried out through an app downloaded from the App Store does not prevent Apple from relying on the safe harbour under § 230 CDA

The Northern District of California, 2 September 2022, HADONA DIEP, et al., Plaintiffs, v. APPLE, INC., Defendant, Case No. 21-cv-10063-PJH, rules on a claim against Apple for having facilitated, and failed to vet, an app (Toast Plus) in its store that had defrauded the plaintiffs of various cryptocurrency.

The inevitable safe harbour defence under § 230 CDA is upheld.

A different outcome would indeed have been difficult, this being a textbook case.

Naturally the plaintiffs try to argue (i) that some of their claims went beyond Apple's role as publisher and (ii) that Apple is a content provider (<<The act for which plaintiffs seek to hold Apple liable is “allowing the Toast Plus application to be distributed on the App Store,” not the development of the app>>): but the conduct complained of plainly does not go beyond mere hosting.

Conclusion: Plaintiffs’ allegations all seek to impose liability based on Apple’s role in vetting the app and making it available to consumers through the App Store. Apple qualifies as an interactive computer service provider within the meaning of the first prong of the Barnes test. Plaintiffs seek to hold Apple liable for its role in reviewing and making the Toast Plus app available, activity that satisfies the second prong of the Barnes test as publishing activity. And plaintiffs’ allegations do not establish that Apple created the Toast Plus app; rather, it was created by another information content provider and thus meets the third prong of the Barnes test. For each of these reasons, as well as the inapplicability of an exemption, Apple is immune under § 230 for claims based on the conduct of the Toast Plus developers.

(news and link to the decision from prof. Eric Goldman's blog)

Salesforce's software for optimising the management of a marketplace platform is covered by the safe harbour under § 230 CDA

Backpage.com (a classified-ads marketplace rivalling Craigslist) also carried many sex-related ads.

Through one of these ads, a minor (thirteen years old at the time!) fell victim to predators.

She and her mother then sued Salesforce (hereafter: S.) for having assisted Backpage, and profited economically, under the engagements it received from Backpage for the online management of contacts with Backpage's users.

The Northern District of Illinois, Eastern Division, Case 1:20-cv-02335, G.G. (minor) v. Salesforce.com inc., 16 May 2022, upholds S.'s (inevitable) defence that the safe harbour applies.

The issue is addressed with a solid analysis under Part I, Section 230, pp. 6-24.

The court recognises that S. is an interactive computer service (sub A, p. 8 ff.): hard to contest.

It also finds that S. is being sued as a publisher (sub B, p. 13 ff.): a far less obvious proposition.

One who assists in another's tort (whether that other party is a publisher, as Backpage probably was, or not) can hardly be regarded as a publisher itself: unless one says that it is, on the theory that its conduct must be characterised under the same legal heading as that of the party it assists (compare, in Italian law, the long-standing question of the basis of liability for unfair competition under art. 2598 of the Civil Code on the part of a third party who is not a competing entrepreneur).

(news and link to the decision from prof. Eric Goldman's blog)

Backpage.com's website is currently under seizure by U.S. law enforcement and now displays the corresponding seizure notice.

More on the applicability of the safe harbour under § 230 CDA to the removal/suspension of the plaintiff's own content

In a case brought by a dissident from an Arab government over the failure to protect his account from that government's hackers and over its subsequent suspension, the Northern District of California rules on 20 May 2022, Case 3:21-cv-08017-EMC, Al-Ahmed v. Twitter and others.

Many claims were brought: here I recall only Twitter's defence based on the immunity at issue.

The court grants Twitter the safe harbour under § 230(c)(1), the three requirements being met:

– that the defendant is an interactive computer service provider,

– that the defendant is being treated as a publisher/speaker,

– that the claim concerns content provided not by Twitter but by third parties.

This last point is the least clear (usually the removal/suspension concerns material offensive to the plaintiff and uploaded by third parties): but the court explains that suspending the claimant's content is by definition suspending content not created by Twitter, and therefore content of "another" (another relative to Twitter only, that is, certainly not relative to the plaintiff).

It notes, however, that different opinions have been issued: <<Some courts in other districts have declined to extend Section 230(c)(1) to cases in which the user brought claims based on their own content, not a third party’s, on the ground that it would render the good faith requirement of Section 230(c)(2) superfluous. See, e.g., e-ventures Worldwide, LLC v. Google, Inc., No. 2:14-cv-646-FtM-PAM-CM, 2017 WL 2210029, at *3 (M.D. Fl. Feb. 8, 2017). However, although a Florida court found the lack of this distinction to be problematic, it also noted that other courts, including those in this district, “have found that CDA immunity attaches when the content involved was created by the plaintiff.” Id. (citing Sikhs for Just., Inc. v. Facebook, Inc., 697 F. App’x 526 (9th Cir. 2017) (affirming dismissal of the plaintiff’s claims based on Facebook blocking its page without an explanation under Section 230(c)(1)) >> (and other cases cited there).

This is the most interesting passage on the issue.

(news and link to the decision from prof. Eric Goldman's blog).

Is someone who retweets a harmful post covered by the safe harbour under § 230 CDA? Apparently yes

The New Hampshire Supreme Court, opinion of 11 May 2022, Hillsborough-northern judicial district, No. 2020-0496, Banaian v. Bascom et al., addresses the issue and answers in the affirmative.

At a school north of Boston, a student had hacked the school's website and posted offensive content suggesting that a teacher was “sexually pe[r]verted and desirous of seeking sexual liaisons with Merrimack Valley students and their parents.”

Another student tweeted the post, and others then retweeted that first tweet.

The teacher sues the retweeters, who however raise the safe harbour under § 230(c) CDA, a provision that reads:

<<c) Protection for “Good Samaritan” blocking and screening of offensive material.

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.>>.

The legal question is whether the students in the case at hand fall within the concept of “user”.

The Supreme Court confirms that they do. It would indeed be very hard to reason otherwise.

Specifically: << We are persuaded by the reasoning set forth in these cases. The plaintiff identifies no case law that supports a contrary result. Rather, the plaintiff argues that because the text of the statute is ambiguous, the title of section 230(c) — “Protection for ‘Good Samaritan’ blocking and screening of offensive material” — should be used to resolve the ambiguity. We disagree, however, that the term “user” in the text of section 230 is ambiguous. See Webster’s Third New International Dictionary 2524 (unabridged ed. 2002) (defining “user” to mean “one that uses”); American Heritage Dictionary of the English Language 1908 (5th ed. 2011) (defining “user” to mean “[o]ne who uses a computer, computer program, or online service”). “[H]eadings and titles are not meant to take the place of the detailed provisions of the text”; hence, “the wise rule that the title of a statute and the heading of a section cannot limit the plain meaning of the text.” Brotherhood of R.R. Trainmen v. Baltimore & O.R. Co., 331 U.S. 519, 528-29 (1947). Likewise, to the extent the plaintiff asserts that the legislative history of section 230 compels the conclusion that Congress did not intend “users” to refer to individual users, we do not consider legislative history to construe a statute which is clear on its face. See Adkins v. Silverman, 899 F.3d 395, 403 (5th Cir. 2018) (explaining that “where a statute’s text is clear, courts should not resort to legislative history”).

Despite the plaintiff’s assertion to the contrary, we conclude that it is evident that section 230 of the CDA abrogates the common law of defamation as applied to individual users. The CDA provides that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3). We agree with the trial court that the statute’s plain language confers immunity from suit upon users and that “Congress chose to immunize all users who repost[] the content of others.” That individual users are immunized from claims of defamation for retweeting content that they did not create is evident from the statutory language. See Zeran v. America Online, Inc., 129 F.3d 327, 334 (4th Cir. 1997) (explaining that the language of section 230 makes “plain that Congress’ desire to promote unfettered speech on the Internet must supersede conflicting common law causes of action”).
We hold that the retweeter defendants are “user[s] of an interactive computer service” under section 230(c)(1) of the CDA, and thus the plaintiff’s claims against them are barred. See 47 U.S.C. § 230(e)(3). Accordingly, we uphold the trial court’s granting of the motions to dismiss because the facts pled in the plaintiff’s complaint do not constitute a basis for legal relief.
>>

(news and link to the decision from prof. Eric Goldman's blog)

Blocking a Twitter account over deceptive or misleading posts is covered by the safe harbour under § 230 CDA

The Northern District of California, by order of 29 April 2022, No. C 21-09818 WHA, Berenson v. Twitter, decides a claim alleging the unlawful blocking of an account over misleading posts under Twitter's new five-strike COVID-19 policy.

It dismisses the claim, recognising the safe harbour under § 230(c)(2)(A) of the CDA.

The plaintiff's allegations that Twitter lacked good faith are of no avail: << With the exception of the claims for breach of contract and promissory estoppel, all claims in this action are barred by 47 U.S.C. Section 230(c)(2)(A), which provides, “No provider or user of an interactive computer service shall be held liable on account of — any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” For an internet platform like Twitter, Section 230 precludes liability for removing content and preventing content from being posted that the platform finds would cause its users harm, such as misinformation regarding COVID-19. Plaintiff’s allegations regarding the leadup to his account suspension do not provide a sufficient factual underpinning for his conclusion Twitter lacked good faith. Twitter constructed a robust five-strike COVID-19 misinformation policy and, even if it applied those strikes in error, that alone would not show bad faith. Rather, the allegations are consistent with Twitter’s good faith effort to respond to clearly objectionable content posted by users on its platform. See Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009); Domen v. Vimeo, Inc., 433 F. Supp. 3d 592, 604 (S.D.N.Y. 2020) (Judge Stewart D. Aaron)>>.

The claims based on breach of contract and promissory estoppel, by contrast, do not fall within that immunity (so the case proceeds on those).

The claim based on violation of the right to free speech is also dismissed for the usual reason of lack of state action, Twitter being a private entity: <<Aside from Section 230, plaintiff fails to even state a First Amendment claim. The free speech clause only prohibits government abridgement of speech — plaintiff concedes Twitter is a private company (Compl. ¶15). Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019). Twitter’s actions here, moreover, do not constitute state action under the joint action test because the combination of (1) the shift in Twitter’s enforcement position, and (2) general cajoling from various federal officials regarding misinformation on social media platforms do not plausibly assert Twitter conspired or was otherwise a willful participant in government action. See Heineke v. Santa Clara Univ., 965 F.3d 1009, 1014 (9th Cir. 2020). For the same reasons, plaintiff has not alleged state action under the governmental nexus test either, which is generally subsumed by the joint action test. Naoko Ohno v. Yuko Yasuma, 723 F.3d 984, 995 n.13 (9th Cir. 2013). Twitter “may be a paradigmatic public square on the Internet, but it is not transformed into a state actor solely by providing a forum for speech.” Prager Univ. v. Google LLC, 951 F.3d 991, 997 (9th Cir. 2020) (cleaned up, quotation omitted). >>

(news and link to the decision from prof. Eric Goldman's blog)

Retweeting with added defamatory comments is not protected by the safe harbour under § 230 CDA

Byrne is sued for defamation by US Dominion (a U.S. company that supplies software for managing election processes) over offensive statements and tweets.

He seeks the safe harbour immunity under § 230 CDA, but it goes badly for him: he is in fact a content provider.

Merely tweeting a link (to defamatory material) might be covered: but not the accompanying comments.

So holds the District Court for the District of Columbia, 20 April 2022, Case 1:21-cv-02131-CJN, US Dominion v. Byrne: <<A so-called “information content provider” does not enjoy immunity under § 230. Klayman v. Zuckerberg, 753 F.3d 1354, 1356 (D.C. Cir. 2014). Any “person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service” qualifies as an “information content provider.” 47 U.S.C. § 230(f)(3); Bennett, 882 F.3d at 1166 (noting a dividing line between service and content in that ‘interactive computer service’ providers—which are generally eligible for CDA section 230 immunity—and ‘information content provider[s],’ which are not entitled to immunity”).

While § 230 may provide immunity for someone who merely shares a link on Twitter, Roca Labs, Inc. v. Consumer Opinion Corp., 140 F. Supp. 3d 1311, 1321 (M.D. Fla. 2015), it does not immunize someone for making additional remarks that are allegedly defamatory, see La Liberte v. Reid, 966 F.3d 79, 89 (2d Cir. 2020). Here, Byrne stated that he “vouch[ed] for” the evidence proving that Dominion had a connection to China. See Compl. ¶ 153(m). Byrne’s alleged statements accompanying the retweet therefore fall outside the ambit of § 230 immunity>>.

Not a hard question: whether merely posting a link is protected is an interesting issue; that defamatory accompanying comments make their author a content provider is, by contrast, certain.