Gmail's (filtered?) service may enjoy the safe harbor under § 230 CDA

Eric Goldman reports on (and links to the text of) US District Court for the Eastern District of California, August 24, 2023, No. 2:22-cv-01904-DJC-JBP, Republican National Committee v. Google.

The right-wing political group accuses Google (G.) of unlawfully filtering its emails.

G. successfully defends itself by invoking the safe harbor under § 230(c)(2)(A): <<No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected>>.

The critical point is establishing the requirements of "objectionable" material and of good faith.

As Eric Goldman observes, the court's policy consideration is also of interest:

<<Section 230 also addresses Congress’s concern with the growth of unsolicited commercial electronic mail, and the fact that the volume of such mail can make email in general less usable as articulated in the CAN-SPAM Act. See 15 U.S.C. § 7701(a)(4), (6).   Permitting suits to go forward against a service provider  based on the over-filtering of mass marketing emails would discourage providers from offering spam filters or significantly decrease the number of emails segregated. It would also place courts in the business of micromanaging content providers’ filtering systems in contravention of Congress’s directive that it be the provider or user that determines what is objectionable (subject to a provider acting in bad faith). See 47 U.S.C. § 230(c)(2)(A) (providing no civil liability for “any action voluntarily taken in good faith to restrict access to . . . material that the provide or user considers to be . . . objectionable” (emphasis added)). This concern is exemplified by the fact that the study on which the RNC relies to show bad faith states that each of the three email systems had some sort of right- or left- leaning bias. (ECF No. 30-10 at 9 (“all [spam filtering algorithms] exhibited political biases in the months leading up to the 2020 US elections”).) While Google’s bias was greater than that of Yahoo or Outlook, the RNC offers no limiting principle as to how much “bias” is permissible, if any. Moreover, the study authors note that reducing the filters’ political biases “is not an easy problem to solve. Attempts to reduce the biases of [spam filtering algorithms] may inadvertently affect their efficacy.” (Id.) This is precisely the impact Congress desired to avoid in enacting the Communications Decency Act, and reinforces the conclusion that section 230 bars this suit>>.

Overcoming Facebook's § 230 CDA safe harbor by alleging that its algorithm helped radicalize the killer

Prof. Eric Goldman recalls a decision of the District of South Carolina, Charleston Division, of July 24, 2023, which dismisses on safe-harbor grounds a damages claim against Meta brought by relatives of a victim of the massacre carried out by Dylann Roof in 2015 at the Charleston church.

Unfortunately there is no link to the text of the decision, but there is one to the complaint, in which the reasons for overcoming Facebook's passive position are well argued.

It may be useful in our jurisdiction too, where, however, overcoming the specific hurdle of foreseeability on the platform's part is not easy (though perhaps it could be framed as colpa con previsione, i.e., negligence with foresight).

§ 230 CDA saves Amazon from the charge of co-liability for a defamatory review of a seller on its marketplace

The defamatory review (mildly so, in truth: a Burberry scarf allegedly not authentic) cannot make Amazon co-liable, because the cited safe harbor applies.

This is in fact precisely the publisher/speaker role contemplated by the statute. Nor can an active contribution by Amazon be found in its having set the rules of its platform, as the defamed party would have it: the well-known Roommates case is invoked to no avail.

A rather easy case.

So held the Eleventh Circuit, June 12, 2023, No. 22-11725, McCall et al. v. Zotos and Amazon:

<<In that case, Roommates.com published a profile page for each subscriber seeking housing on its website. See id. at 1165. Each profile had drop-down menu on which subscribers seeking housing had to specify whether there are currently straight males, gay males, straight females, or lesbians living at the dwelling. This information was then displayed on the website, and Roommates.com used this information to channel subscribers away from the listings that were not compatible with the subscriber’s preferences. See id. The Ninth Circuit determined that Roommates.com was an information content provider (along with the subscribers seeking housing on the website) because it helped develop the information at least in part. Id. (“By requiring subscribers to provide the information as a condition of accessing its service, and by providing a limited set of prepopulated answers, Roommate[s.com] . . . becomes the developer, at least in part, of that information.”).
Roommates.com is not applicable, as the complaint here alleges that Ms. Zotos wrote the review in its entirety. See generally D.E. 1. Amazon did not create or develop the defamatory review even in part—unlike Roommates.com, which curated the allegedly discriminatory dropdown options and required the subscribers to choose one. There are no allegations that suggest Amazon helped develop the allegedly defamatory review.
The plaintiffs seek to hold Amazon liable for failing to take down Ms. Zotos’ review, which is exactly the kind of claim that is immunized by the CDA—one that treats Amazon as the publisher of that information. See 47 U.S.C. § 230(c)(1). See also D.E. 1 at 5 (“Amazon . . . refused to remove the libelous statements posted by Defendant Zotos”). “Lawsuits seeking to hold a service provider [like Amazon] liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content—are barred.” Zeran, 129 F.3d at 330. We therefore affirm the dismissal of the claims against Amazon>>.

(news and link from Prof. Eric Goldman's website)

The student who lets his teachers be defamed by giving his social-media credentials to friends, the authors of the posts, is not protected by the § 230 CDA safe harbor

The Sixth Circuit, No. 22-1748, Jason Kutchinski v. Freeland Community School District, Matthew A. Cairy and Traci L. Smith, decides a suit brought by the student challenging the disciplinary sanction imposed on him for giving his Instagram credentials to friends, who authored posts defaming teachers at the school.

The student cannot in fact be characterized as a mere publisher or speaker, being instead a co-author of the harmful conduct:

<<Like the First, Fourth, and Ninth Circuits, we hold that when a student causes, contributes to, or affirmatively participates in harmful speech, the student bears responsibility for the harmful speech. And because H.K. contributed to the harmful speech by creating the Instagram account, granting K.L. and L.F. access to the account, joking with K.L. and L.F. about their posts, and accepting followers, he bears responsibility for the speech related to the Instagram account.
Kutchinski disagrees and makes two arguments. First, Kutchinski argues that Section 230 of the Communications Decency Act, 47 U.S.C. § 230, bars Defendants from disciplining H.K. for the posts made by K.L. and L.F.     This is incorrect. Under § 230(c)(1), “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” To the extent § 230 applies, we do not treat H.K. as the “publisher or speaker” of the posts made by K.L. and L.F. Instead, we have found that H.K. contributed to the harmful speech through his own actions>>.

The court then adds:

<<Second, Kutchinski argues that disciplining H.K. for the posts emanating from the Instagram account violates H.K.’s First Amendment freedom-of-association rights. “The First Amendment . . . restricts the ability of the State to impose liability on an individual solely because of his association with another.” NAACP v. Claiborne Hardware Co., 458 U.S. 886, 918–19 (1982). “The right to associate does not lose all constitutional protection merely because some members of the group may have participated in conduct or advocated doctrine that itself is not protected.” Id. at 908. But Defendants did not discipline H.K. because he associated with K.L. and L.F. They determined that H.K. jointly participated in the wrongful behavior. Thus, Defendants did not impinge on H.K.’s freedom-of-association rights>>.

(news and link to the decision from Prof. Eric Goldman's blog)

Is a search engine co-liable for unwanted, erroneous associations in a case of identical names?

The answer is negative under US law, since Microsoft is covered by the § 230 CDA safe harbor:

So held, affirming the trial court, Florida's First District Court of Appeal, Nos. 1D21-3629 and 1D22-1321 (consolidated for disposition), May 10, 2023, White v. Discovery Communications et al.

The facts:

Mr. White sued various nonresident defendants for damages in tort resulting from an episode of a reality/crime television show entitled “Evil Lives Here.” Mr. White alleged that beginning with the first broadcast of the episode “I Invited Him In” in August 2018, he was injured by the broadcasting of the episode about a serial killer in New York also named Nathaniel White. According to the allegations in the amended complaint, the defamatory episode used Mr. White’s photograph from a decades-old incarceration by the Florida Department of Corrections. Mr. White alleged that this misuse of his photo during the program gave viewers the impression that he and the New York serial killer with the same name were the same person thereby damaging Mr. White.

The law:

The persons who posted the information on the eight URLs provided by Mr. White were the “information content providers” and Microsoft was the “interactive service provider” as defined by 47 U.S.C. § 230(f)(2) and (3). See Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (noting that a search engine falls within the definition of interactive computer service); see also In re Facebook, Inc., 625 S.W. 3d 80, 90 (Tex. 2021) (internal citations omitted) (“The ‘national consensus’ . . . is that ‘all claims’ against internet companies ‘stemming from their publication of information created by third parties’ effectively treat the defendants as publishers and are barred.”). “By presenting Internet search results to users in a relevant manner, Google, Yahoo, and Microsoft facilitate the operations of every website on the internet. The CDA was enacted precisely to prevent these types of interactions from creating civil liability for the Providers.” Baldino’s Lock & Key Serv., Inc. v. Google LLC, 285 F. Supp. 3d 276, 283 (D.D.C. 2018), aff’d sub nom. Marshall’s Locksmith Serv., 925 F.3d at 1265.
In Dowbenko v. Google Inc., 582 Fed. App’x 801, 805 (11th Cir. 2014), the state law defamation claim was “properly dismissed” as “preempted under § 230(c)(1)” since Google, like Microsoft here, merely hosted the content created by other providers through search services. Here, as to Microsoft’s search engine service, the trial court was correct to grant summary judgment finding Microsoft immune from Mr. White’s defamation claim by operation of Section 230 since Microsoft did not publish any defamatory statement.
Mr. White argues that even if Microsoft is immune for any defamation occurring by way of its internet search engine, Microsoft is still liable as a service that streamed the subject episode. Mr. White points to the two letters from Microsoft in support of his argument. For two reasons, we do not reach whether an internet streaming service is an “interactive service provider” immunized from suit for defamation by Section 230.
First, the trial court could not consider the letters in opposition to the motion for summary judgment. The letters were not referenced in Mr. White’s written response to Microsoft’s motion. They were only in the record in response to a different defendant’s motion for a protective order. So the trial court could disregard the letters in ruling on Microsoft’s motion. See Fla. R. Civ. P. 1.510(c)(5); Lloyd S. Meisels, P.A. v. Dobrofsky, 341 So. 3d 1131, 1136 (Fla. 4th DCA 2022). Without the two letters, Mr. White has no argument that Microsoft was a publisher of the episode.
Second, even considering the two letters referenced by Mr. White, they do not show that Microsoft acted as anything but an interactive computer service. That the subject episode was possibly accessible for streaming via a Microsoft search platform does not mean that Microsoft participated in streaming or publishing the episode

(news and link to the decision from Prof. Eric Goldman's blog)

Defamatory review of a lawyer posted on Google Maps and Google's liability: does the § 230 CDA safe harbor apply?

The answer is yes, naturally.

The case involves a lawyer practicing near Portland, defamed by a harsh review posted on Google Maps.

An easy case, then, for the District of Oregon: Daniloff v. Google et al., January 30, 2023, Case No. 3:22-cv-01271-IM.

Prof. Eric Goldman also provides a link to the defamatory review.

The plaintiff lawyer had sought $300,000 in damages from Google and the reviewer.

<<In evaluating Defendant Google’s immunity under the CDA, this Court applies the three-factor Ninth Circuit test. See Kimzey, 836 F.3d at 1268. First, to determine whether Defendant Google qualifies as an interactive computer service provider, this Court notes that Google is an operator who passively provides website access to multiple users. Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (en banc) (“A website operator . . . [who] passively displays content that is created entirely by third parties . . . is only a service provider with respect to that content.”). Accordingly, as Defendant Google argues and Plaintiff concedes, Google qualifies as an interactive computer service provider. ECF 8 at 5; ECF 9 at 3; see also 47 U.S.C. § 230(f)(3); Lewis v. Google LLC, 461 F. Supp. 3d 938, 954 (N.D. Cal. 2020) (collecting cases), aff’d, 851 F. App’x 723 (9th Cir. 2021); Gaston v. Facebook, Inc., No. 3:12-CV-0063-ST, 2012 WL 629868, at *7 (D. Or. Feb. 2, 2012), report and recommendation adopted, No. 3:12-CV-00063-ST, 2012 WL 610005 (D. Or. Feb. 24, 2012).
Second, because Plaintiff premises his defamation claim on Defendant Google’s publication of Defendant Keown’s review, ECF 1-1, Ex. A, at ¶ 22, this Court finds that Plaintiff seeks to treat Google as a publisher or speaker. See Kimzey, 836 F.3d at 1268 (holding that defamation claim based on Yelp review was “directed against Yelp in its capacity as a publisher or speaker” (citing Barnes, 570 F.3d at 1102)).
Third, as the allegedly defamatory review was posted by Defendant Keown, ECF 1-1, Ex. A, at ¶ 5–7, this Court finds the relevant information was provided by another information content provider. Rather than allege that Defendant Google created the review, Plaintiff alleges that Defendant Google “hosted” it via Plaintiff’s Google Business profile, id. at ¶ 30, thereby “material[ly] contribut[ing]” to the defamatory review. ECF 9 at 3. An entity who “contributes materially to the alleged illegality of the conduct” at issue is not entitled to protection under Section 230. Roommates.com, 521 F.3d at 1168.
The Ninth Circuit addressed a similar argument in Kimzey, a case arising out of a negative review on Yelp’s website. Kimzey, 836 F.3d at 1265. While the plaintiff in that case claimed that Yelp had “authored” the review at issue through its star-rating system, id. at 1268, the Ninth Circuit found that “Yelp’s rating system . . . is based on rating inputs from third parties and . . . [is] user-generated data,” id. at 1270. As such, the Ninth Circuit held that Yelp’s actions did not qualify as “creation” or “development” of information and that “the rating system [did] ‘absolutely nothing to enhance the defamatory sting of the message’ beyond the words offered by the user.” Id. at 1270–71 (quoting Roommates.com, 521 F.3d at 1172).
Defendant Keown’s review similarly qualifies as user-generated data and Defendant Google’s hosting of that review through its Google Business profile system does not qualify as a material contribution. This Court finds that Plaintiff bases his defamation claim on a review provided by an information content provider other than Defendant Google—thus fulfilling the third factor required under Kimzey. See also id. at 1265 (observing that a claim “asserting that [an interactive computer service provider is] liable in its well-known capacity as the passive host of a forum for user reviews [is] a claim without any hope under [Ninth Circuit] precedent[]”). Accordingly, Plaintiff’s defamation claim against Defendant Google satisfies the Ninth Circuit’s three-factor test and Defendant Google is immune under Section 230 of the CDA.
To the extent that Plaintiff relies on Defendant Google’s refusal to remove Defendant Keown’s review in pursuing his defamation claim, ECF 1-1 at ¶ 11–17; ECF 9 at 4, this Court also holds that Defendant Google is immunized under the CDA for this decision. Roommates.com, 521 F.3d at 1170–71 (“[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.”); see also Barnes, 570 F.3d at 1105. Accordingly, Defendant Google’s Motion to Dismiss, ECF 8, is GRANTED.>>

Liability of the domain-name registrar for the new assignee's unlawful use of the domain? Does the § 230 CDA safe harbor apply?

The Ninth Circuit, February 3, 2023, No. 21-16182, Rigsby v. GoDaddy, on the misappropriation of the domain name "scottrigsbyfoundation.org", transferred to a third party and turned into a gambling site.

From the opening Summary:

<<When Rigsby and the Foundation failed to pay GoDaddy, a domain name registrar, the renewal fee for scottrigsbyfoundation.org, a third party registered the then-available domain name and used it for a gambling information site. (…)
The panel held that Rigsby could not satisfy the “use in commerce” requirement of the Lanham Act vis-à-vis GoDaddy because the “use” in question was being carried out by a third-party gambling site, not GoDaddy, and Rigsby therefore did not state a claim under 15 U.S.C. § 1125(a). As to the Lanham Act claim, the panel further held that Rigsby could not overcome GoDaddy’s immunity under the Anticybersquatting Consumer Protection Act, which limits the secondary liability of domain name registrars and registries for the act of registering a domain name. The panel concluded that Rigsby did not plausibly allege that GoDaddy registered, used, or trafficked in his domain name with a bad faith intent to profit, nor did he plausibly allege that GoDaddy’s alleged wrongful conduct surpassed mere registration activity>>

And above all on § 230 CDA, which shields against many of the claims:

<<The panel held that § 230 of the Communications Decency Act, which immunizes providers of interactive computer services against liability arising from content created by third parties, shielded GoDaddy from liability for Rigsby’s state-law claims for invasion of privacy, publicity, trade libel, libel, and violations of Arizona’s Consumer Fraud Act.

The panel held that immunity under § 230 applies when the provider is an interactive computer service, the plaintiff is treating the entity as the publisher or speaker, and the information is provided by another information content provider.

Agreeing with other circuits, the panel held that domain name registrars and website hosting companies like GoDaddy fall under the definition of an interactive computer service.

In addition, GoDaddy was not a publisher of scottrigsbyfoundation.org, and it was not acting as an information content provider.>>

Is reposting photographs with an allegedly defamatory comment covered by the § 230 CDA safe harbor?

Yes, says the California Court of Appeal, First Appellate District, Division One, December 15, 2022, A165836 and A165841, A.H. et al. v. Labana.

Following George Floyd's death and the discovery on the internet of a photo of some students (from the same school as her son) with their faces painted black (with racially derisive meaning), a Black mother organized a protest march with another mother.

To that end she created a Facebook "event" including that very photo (without names, though the students had been identified by others). She added the comment: "This is a protest to [sic] the outrageous behavior that current and former students from SFHS did–A George Floyd [I]nstagram account making fun of his death, the fact that he could not breath [sic] and kids participating in black face and thinking that this is all a joke.

Does the SFHS administration think this is a joke? Please join us at the entrance of the school off of Miramonte St. and make sure this administration knows that this type of behavior will NOT be tolerated.

Please remember to practice social distancing, wear a mask and bring a sign if you would like! Feel free to add people to this list”.

The students depicted in the photo sued this mother, too, for defamation.

The first-instance court held, and the appellate court confirmed, that § 230 CDA operates as a safe harbor (as an interactive computer service user, I would say, rather than provider), since it was established that the mother was not the author of the photo itself, this being merely a repost (a share).

(news and link to the decision from Prof. Eric Goldman's blog)

§ 230 CDA safe harbor for a platform that poorly verifies the identity of a user who then commits a tort?

The District of Colorado says no, at least at the pleading stage: Case 1:22-cv-00899-MEH, December 5, 2022, Roland et al. v. Letgo et al.

This is an action against a classified-ads marketplace platform (Letgo), based on the platform's failure to verify a seller's identity: the seller had posted a fake listing in order to rob the prospective buyer (an encounter that ended tragically for the latter).

Under § 230(c)(1) CDA, the platform is certainly an interactive computer service provider.

That the claim treats it as a publisher or speaker is equally certain, although the court dwells on this point a little longer.

The difficult point is whether or not it was a "content provider," given that in principle the listing came from its user.

For the court, the platform was in part a content provider as well.

One must then look at the facts: the user/seller verification activity is central.

<<Letgo provides a website and mobile application allowing users to “buy from, sell to and chat with others locally.” Amended Complaint (“Am. Compl.”) at ¶ 21. It advertises a “verified user” feature. Id. ¶ 23. On its website, Letgo explains that it utilizes “machine learning” to identify and block inappropriate content (such as stolen merchandise) and continues to work closely with local law enforcement to ensure the “trust and safety of the tens of millions of people who use Letgo.” Id. OfferUp merged with Letgo on or around August 31, 2020. Id. ¶ 26.
To access its “marketplace,” Letgo requires that its consumers create a Letgo account. Id. ¶ 27. Each new account must provide a name (the truthfulness of which Letgo does not verify) and an active email address. Id. Once a new Letgo account is created, the user is given an individual “user profile.” Id. at ¶ 28. Each new user is then given an opportunity to and is encouraged to “verify” their “user profile.” Id. Once a user is “verified,” the term “VERIFIED WITH” appears on their profile (in this case, Brown verified with a functioning email address). Id. ¶¶ 29-30. Letgo performs no background check or other verification process. Id. Once created, a Letgo user’s account profile is viewable and accessible to any other Letgo user. Id. ¶ 31. Letgo buyers and sellers are then encouraged to connect with other users solely through Letgo’s app. Id. Letgo’s advertising and marketing policies prohibit selling stolen merchandise. Id. 32. Furthermore, the app promotes its “anti-fraud technology” to help detect signs of possible scams based on keyword usage. Id.>>

This is sufficient to deem it a content provider and to deny the platform the safe harbor.

<<The singular item of information relevant here is the “verified” designation, and factually, it appears to be a product of input from both Letgo and its users. It seems from the record that simply providing a telephone number to Letgo is not sufficient to earn the “verified” designation. At oral argument, Defendants acknowledged that when someone wants to create an account, he must provide, in this case, a functioning telephone number, whereupon Letgo sends a communication to that telephone number (an SMS text) to confirm that it really exists, then informs users that the person offering something for sale has gone through at least some modicum of verification. Thus, the argument can be made that Plaintiffs’ claims do not rely solely on third-party content.
Defendants say Letgo merely created a forum for users to develop and exchange their own information, and the “verified” designation, relying solely on the existence of a working email address or telephone number, did not transform Letgo into a content provider. Mot. at 14. “If [a website] passively displays content that is created entirely by third parties, then it is only a service provider with respect to that content. But as to content that it creates itself . . . the website is also a content provider.” Roommates.Com, LLC, 521 F.3d at 1162. I do not find in the existing caselaw any easy answer. (….) In the final analysis under the CDA, I find under Accusearch Inc. that Plaintiffs have sufficiently pleaded, for a motion under Rule 12(b)(6), that Defendants contributed in part to the allegedly offending “verified” representation. Therefore, as this stage in the case, Defendants are not entitled to immunity under the statute. Whether this claim could withstand a motion for summary judgment, of course, is not before me>>

Despite denying the platform the safe harbor, the court nonetheless accepts its other defenses and dismisses the claim.

No analogous provision exists in EU law, even after the Digital Services Act (Regulation (EU) 2022/2065 of 19 October 2022 on a single market for digital services, amending Directive 2000/31/EC (Digital Services Regulation)).

(news and link to the decision from Prof. Eric Goldman's blog)

Again on the § 230 CDA safe harbor: this time, no less, in Prager University v. Google

The Prager University v. Google litigation reaches a state court: California Court of Appeal, Sixth Appellate District, December 5, 2022, H047714.

Earlier decisions in the same litigation gained notoriety as precedents invoked in numerous later rulings.

Prager University is part of the MAGA (Make America Great Again) movement and reportedly spreads disinformation, which Google restricts.

The censorship, however, is both provided for by contract and shielded from review by § 230 CDA (the first aspect is the more interesting one).

I quote only two significant passages:

<<Prager’s contention that defendants are themselves an information content provider—in that they developed algorithms used in determining whether to restrict access to Prager’s videos—does nothing to defeat section 230 immunity. Prager pleads no facts from which defendants’ use of algorithms would render them providers of information content. What Prager alleges is the use of “an automated filtering algorithm that examines certain ‘signals’ like the video’s metadata, title, and the language used in the video. The algorithm looks for certain ‘signals’ to determine if rules or criteria are violated so as to warrant segregation in Restricted Mode.” To the extent that an automated filtering algorithm is itself information, defendants of course created it; what is also apparent from Prager’s pleaded facts, however, is that defendants have not “provided [it] through the Internet or any other interactive computer service” within the meaning of section 230(f)(3), to Prager or anyone else…

Prager cites no authority for the proposition that algorithmic restriction of user content—squarely within the letter and spirit of section 230’s promotion of content moderation—should be subject to liability from which the algorithmic promotion of content inciting violence has been held immune…

Prager’s claims turn not on the creation of algorithms, but on the defendants’ curation of Prager’s information content irrespective of the means employed: it is not the algorithm but Prager’s content which defendants publish (or depublish). To the extent Prager’s claims principally rest on allegations that defendants violated a duty under state law to exercise their editorial control in a particular manner, defendants are immune under section 230 from the claims Prager brings in this suit>>.

And then:

<<The Murphy court, and others, have held that the CDA foreclosed liability where plaintiffs have identified no enforceable promise allegedly breached…Prager’s contractual theories are barred because they are irreconcilable with the express terms of the integrated agreements….

the written contracts governing Prager’s relationship with defendants—limited to YouTube’s Terms of Service (YouTube TOS) and Google LLC’s AdSense Terms of Service (AdSense TOS), which the trial court judicially noticed without objection— contain no provision purporting to constrain defendants’ conduct as publishers…

Though consistent with Prager’s assertion that YouTube makes public-facing representations giving the impression that it voluntarily filters the content on its platform using a discrete set of neutral policies, the Community Guidelines in no way purport to bind defendants to publish any given video, or to remove a video only for violation of those guidelines….

As with the Community Guidelines, Prager conflates user guidelines with provider duties. Prager does not explain how defendants’ illustration in the guidelines of unsuitable content that “will result in a ‘limited or no ads’ monetization state” confers on users a contractual right that all other user content be monetized. At most, the Advertiser-friendly content guidelines permit users to “request human review of [monetization] decisions made by [defendants’] automated systems.” Thus, neither the Community Guidelines nor the Advertiser-friendly guidelines conflict with or limit defendants’ express reservation of rights….

the CDA may permit a state law claim concerning publishing activity based on a specific contractual promise, section 230 notwithstanding; this does not mean that the CDA requires an express contractual reservation of publishing discretion as condition precedent to section 230 immunity from state law claims>>

(news of the decision, and link to it, from Prof. Eric Goldman's blog)