More on the (currently unobtainable) qualification of social media platforms as State Actors for First Amendment (free speech) purposes

Another ruling (this time on appeal) rejecting a claim against Facebook (rectius, Meta) based on the allegation that it unlawfully filters/censors posts or removes accounts, in violation of the First Amendment (free speech).

That right can be asserted only against the State or against those acting on its behalf or together with it.

This is the Ninth Circuit appellate ruling (affirming the California district court decision under appeal), issued November 22, 2021, No. 20-17489, D.C. No. 3:20-cv-05546-RS, Atkinson v. Meta-Zuckerberg.

The user re-asserts (and the Court rejects, point by point) all the usual, well-known causae petendi on the subject. Nothing new, but a useful refresher.

The Court also confirms the application of the safe harbor under § 230 CDA.

(news and link to the ruling from Eric Goldman's blog)

An interesting US ruling on Facebook's unexplained closure of a user's account

The decision is Northern District of California, November 12, 2021, 21-cv-04573-EMC, King v. Facebook (from Eric Goldman's blog).

The ruling is of interest, since the unexplained closure of Facebook accounts appears not to be so rare.

The plaintiff raises several claims (one based on § 230(c)(2)(A) CDA: baffling, since that provision exempts from liability rather than imposing it!, p. 4 ff.)

Here I focus on claim E, p. 10 ff., based on breach of contract under the implied duty of good faith and fair dealing.

The claim over the destruction of content is rejected (sub 1: not persuasively, though: even if Facebook has no specific duty to preserve the content, good faith at the very least requires adequate advance notice of the impending destruction), while the claim over the lack of an explanation survives, sub 2, p. 12 ff.

Facebook relies on the agreed clause <<If we determine that you have clearly, seriously or repeatedly breached our Terms or Policies, including in particular our Community Standards, we may suspend or permanently disable access to your account.>> to claim that it had full discretion.

The judge, however, easily shows that this is not so: <<Notably, the Terms of Service did not include language providing that Facebook had “sole discretion” to act.  Compare, e.g., Chen v. PayPal, Inc., 61 Cal. App. 5th 559, 570-71 (2021) (noting that contract provisions allowed “PayPal to place a hold on a payment or on a certain amount in a seller’s account when it ‘believes there may be a high level of risk’ associated with a transaction or the account[,] [a]nd per the express terms of the contract, it may do so ‘at its sole discretion’”; although plaintiffs alleged that “‘there was never any high level of risk associated with any of the accounts of any’ appellants, . . . this ignores that the user agreement makes the decision to place a hold PayPal’s decision – and PayPal’s alone”). 

Moreover, by providing a standard by which to evaluate whether an account should be disabled, the Terms of Service suggest that Facebook’s discretion to disable an account is to be guided by the articulated factors and cannot be entirely arbitrary.  Cf. Block v. Cmty. Nutrition Inst., 467 U.S. 340, 349, 351 (1984) (stating that the “presumption favoring judicial review of administrative action . . . may be overcome by specific language or specific legislative history that is a reliable indicator of congressional intent” – i.e., “whenever the congressional intent to preclude judicial review is ‘fairly discernible in the statutory scheme’”). 

At the very least, there is a strong argument that the implied covenant of good faith and fair dealing imposes some limitation on the exercise of discretion so as to not entirely eviscerate users’ rights>>

Moreover (sub 3, p. 14), at the very least an explanation was owed. (Points sub 2 and sub 3 overlap.)

In short, both the disabling and the lack of an explanation are held unlawful (and they overlap, as just noted: drawing the conceptual distinction between them would take too much space and time).

Finally, the obvious safe harbor defense under § 230(c)(1) CDA <Treatment of publisher or speaker> covers the disabling but not the failure to explain it (p. 22).

On the second point there is little to debate: the judge is right.

The first point is harder to answer, and important in practice, since any disabling will constitute - from the disabled user's standpoint - a breach of contract.

The judge sides with Facebook: the existence of a contractual term does not strip Facebook of the safe harbor: <<although Ms. King’s position is not without any merit, she has glossed over the nature of the “promise” that Facebook made in its Terms of Service. In the Terms of Service, Facebook simply stated that it would use its discretion to determine whether an account should be disabled based on certain standards. The Court is not convinced that Facebook’s statement that it would exercise its publishing discretion constitutes a waiver of the CDA immunity based on publishing discretion. In other words, all that Facebook did here was to incorporate into the contract (the Terms of Service) its right to act as a publisher. This by itself is not enough to take Facebook outside of the protection the CDA gives to “‘paradigmatic editorial decisions not to publish particular content.’” Murphy, 60 Cal. App. 5th at 29. Unlike the very specific one-time promise made in Barnes, the promise relied upon here is indistinguishable from “‘paradigmatic editorial decisions not to publish particular content.’” Id. It makes little sense from the perspective of policy underpinning the CDA to strip Facebook of otherwise applicable CDA immunity simply because Facebook stated its discretion as a publisher in its Terms of Service>>.

The decision is perhaps correct on the specific point, but further analysis would be needed.

Four First Amendment/free speech causae petendi raised against the suspension of a YouTube account, none successful

An interesting California ruling on the usual question of free speech (First Amendment) allegedly violated by the suspension of a (politically right-wing) social media account, said to amount to state action.

The decision is the District Court for the Northern District of California, San Jose Division, October 19, 2021, Case No. 20-cv-07502-BLF, Doe v. Google.

Four causae petendi are asserted, all rejected, since none applies to YouTube's censorship/content moderation:

1) Public function: curiously, the plaintiff and the court each invoke, in opposite directions, the well-known 2020 precedent Prager Univ. v. Google.

2) Compulsion: <<Rep. Adam Schiff and Speaker of the House Nancy Pelosi and an October 2020 House Resolution, which “have pressed Big Tech” into censoring political speech with threats of limiting Section 230 of the Communications Decency Act (“CDA”) and other penalties.>>. Rather implausible (though it is the most extensively argued).

3) Joint action: <<Joint action is present where the government has “so far insinuated itself into a position of interdependence with [a private entity] that it must be recognized as a joint participant in the challenged activity.” Gorenc v. Salt River Project Agr. Imp. and Power Dist., 869 F.2d 503, 507 (9th Cir. 1989) (quoting Burton v. Wilmington Parking Authority, 365 U.S. 715, 725 (1961)). Further, a private defendant must be a “willful participant in joint action with the state or its agents.” Dennis v. Sparks, 449 U.S. 24, 27 (1980). Joint action requires a “substantial degree of cooperative action” between private and public actors. Collins v. Womancare, 878 F.2d 1145, 1154 (9th Cir. 1989).>>.

For the plaintiffs, the joint action theory would rest on a <<Twitter exchange between Rep. Schiff and YouTube CEO Susan Wojcicki in which Ms. Wojcicki states, “We appreciate your partnership and will continue to consult with Members of Congress as we address the evolving issues around #COVID19.” FAC, Ex. E at 1; Opp. at 10-15. Plaintiffs argue that this Twitter exchange shows Defendants and the federal government were in an “admitted partnership.”>>. A rather flimsy allegation.

4) Governmental nexus: this applies when there is << “such a close nexus between the State and the challenged action that the seemingly private behavior may be fairly treated as that of the State itself.” Kirtley v. Rainey, 326 F.3d 1088, 1094-95 (9th Cir. 2003). “The purpose of this requirement is to assure that constitutional standards are invoked only when it can be said that the State is responsible for the specific conduct of which plaintiff complains.” Blum, 457 U.S. at 1004-1005>>. (It seems quite similar to the previous test.)

Having accepted none of these, the court does not address the safe harbor under § 230 CDA, p. 12. The logical order is curious: the "most readily resolved ground" criterion (ragione più liquida) could have supported a rejection on the merits under that provision.

(ruling and link from Eric Goldman's blog)

Are digital platforms co-liable for the 2016 Orlando (Florida, USA) massacre? No

In the 2016 Orlando (USA) massacre, Omar Mateen killed 49 people and wounded 53 with a semi-automatic rifle while proclaiming allegiance to ISIS.

The victims sued Twitter, Google and Facebook both under the Anti-Terrorism Act, 18 U.S.C. §§ 2333(a) & (d)(2) (imposing liability on whoever aided the attacker "by facilitating his access to radical jihadist and ISIS-sponsored content in the months and years leading up to the shooting") and under state law, for negligent infliction of emotional distress and wrongful death.

The ATA imposes civil liability on "any person who aids and abets, by knowingly providing substantial assistance, or who conspires with the person who committed . . . an act of international terrorism," provided that the "act of international terrorism" is "committed, planned, or authorized" by a designated "foreign terrorist organization."

The Eleventh Circuit Court of Appeals, September 27, 2021, No. 20-11283, Colon et al. v. Twitter-Facebook-Google, denies any liability on the platforms' part (affirming the Florida first-instance ruling).

The first claim is rejected both because the attack was not international terrorism (even though ISIS claimed it), as the statute requires, and because it was not committed by a foreign terrorist organization (but by a so-called lone wolf).

Above all, though, the second claim (negligence in causing the injuries and deaths) is rejected: the plaintiffs did not satisfy the proximate causation test as to the platforms' role, sub IV.A, p. 21 ff.

The court does address causation, but in the abstract and on the basis of precedent, without applying it to the platforms' role in the commission of crimes.

Oddly, the court does not mention the § 230 CDA safe harbor, which could have been invoked (and almost certainly was, by the platforms).

(news and link from Eric Goldman's blog)

Intellectual property, to which the § 230 CDA safe harbor does not apply, includes the right of publicity too

A journalist finds her image unlawfully reproduced on Facebook and on the Imgur platform, to which a link on Reddit pointed.

She sues all the platforms for violation of her right of publicity (r.o.p.), but they invoke § 230 CDA.

That provision, however, does not apply to intellectual property (IP) (§ 230(e)(2)).

For the platforms, the right of publicity is distinct from IP, so the safe harbor can operate.

The first-instance judge takes the same view.

For the Third Circuit Court of Appeals, instead, it falls squarely within IP, so the safe harbor does not operate (Hepp v. Facebook, Reddit, Imgur et al., Nos. 20-2725 & 20-2885, September 23, 2021).

Dictionaries - legal and otherwise - indirectly include the right of publicity under the entry <intellectual property> (pp. 18-19): above all because they include trademarks, to which the right of publicity should be assimilated.

(finally, sub D, the panel takes care to clarify that this position, seemingly hostile to platform-based internet communication, will have no disastrous consequences)

(text and link to the ruling from Eric Goldman's blog)

Collecting others' personal information for later sale: right of publicity and the § 230 CDA safe harbor

The District Court for the Northern District of California, August 16, 2021, 21-cv-01418-EMC, Cat Brooks et al. v. Thomson Reuters Corporation (hereinafter TR), decides the suit the former brought over the collection and subsequent sale to third parties of their personal data.

TR, an information giant and data broker, collected and sold information about individuals to interested companies (through its CLEAR platform).

Specifically: Thomson Reuters “aggregates both public and nonpublic information about millions of people” to create “detailed cradle-to-grave dossiers on each person, including names, photographs, criminal history, relatives, associates, financial information, and employment information.” See Docket No. 11 (Compl.) ⁋ 2. Other than publicly available information on social networks, blogs, and even chat rooms, Thomson Reuters also pulls “information from third-party data brokers and law enforcement agencies that are not available to the general public, including live cell phone records, location data from billions of license plate detections, real-time booking information from thousands of facilities, and millions of historical arrest records and intake photos.”

1) Among the various causae petendi, I consider the right of publicity.

The claim is rejected not because there is no "use" (as TR had argued), but because there is no <Appropriation of Plaintiffs’ Name or Likeness For A Commercial Advantage>: "Although the publishing of Plaintiffs’ most private and intimate information for profit might be a gross invasion of their privacy, it is not a misappropriation of their name or likeness to advertise or promote a separate product or service," p. 8.

2) The § 230 CDA safe harbor, invoked by TR.

Of the three required elements (“(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.”), TR failed to establish the second and the third.
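The three elements above are cumulative: immunity attaches only if all of them hold. A minimal sketch of this conjunctive test (the function name and parameters are my own labels, not from the ruling):

```python
def section_230_c1_immunity(is_interactive_computer_service: bool,
                            treated_as_publisher_or_speaker: bool,
                            content_from_another_provider: bool) -> bool:
    """Return True only if all three cumulative elements of the
    § 230(c)(1) CDA safe harbor are satisfied."""
    return (is_interactive_computer_service
            and treated_as_publisher_or_speaker
            and content_from_another_provider)

# As summarized above, the court found elements 2 and 3 unproven here,
# so the safe harbor does not apply even if element 1 is assumed.
print(section_230_c1_immunity(True, False, False))  # prints False
```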

As to element 2, the case law teaches that <<a plaintiff seeks to treat an interactive computer service as a “publisher or speaker” under § 230(c)(1) only when it is asking that service to “review[], edit[], and decid[e] whether to publish or withdraw from publication third-party content.” Id. (quoting Barnes, 570 F.3d at 1102). Here, Plaintiffs are not seeking to hold Thomson Reuters liable “as the publisher or speaker” because they are not asking it to monitor third-party content; they are asking it to moderate its own content>>

As to element 3, the information is provided not by third parties but by TR itself: the “information” at issue here - the dossiers with Plaintiffs’ personal information - is not “provided by another information content provider.” 47 U.S.C. § 230(c)(1). In Roommates.com, the panel explained that § 230 was passed by Congress to “immunize[] providers of interactive computer services against liability arising from content created by third parties.” 521 F.3d at 1162 (emphasis added). The whole point was to allow those providers to “perform some editing on user-generated content without thereby becoming liable for all defamatory or otherwise unlawful messages that they didn’t edit or delete. In other words, Congress sought to immunize the removal of user-generated content, not the creation of content.” Id. at 1163 (emphases added). Here, there is no user-generated content - Thomson Reuters generates all the dossiers with Plaintiffs’ personal information that is posted on the CLEAR platform. See Compl. ⁋⁋ 13. In other words, Thomson Reuters is the “information content provider” of the CLEAR dossiers because it is “responsible, in whole or in part, for the creation or development of” those dossiers. 47 U.S.C. § 230(f)(3). It is nothing like the paradigm of an interactive computer service that permits posting of content by third parties.

Discrimination in housing searches via Facebook: proof is lacking

A claim alleging violation of the Fair Housing Act and analogous state statutes (no results - or unjustified differences in results compared to a person of different ethnicity - from searches, allegedly because they were run from accounts of so-called Latino ethnicity) is rejected for lack of proof.

In Italy, see above all Legislative Decree No. 216 of July 9, 2003 and Legislative Decree No. 215 of the same date (the reference author on the subject is Prof. Daniele Maffeis, in many writings including this one).

In the English-speaking world, especially in the US, there is an enormous literature on the subject: see e.g. Rebecca Kelly Slaughter, Janice Kopec & Mohamad Batal, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, Yale Journal of Law & Technology.

The judge writes:

<In sum, what the plaintiffs have alleged is that they each used Facebook to search for housing based on identified criteria and that no results were returned that met their criteria. They assume (but plead no facts to support) that no results were returned because unidentified advertisers theoretically used Facebook’s Targeting Ad tools to exclude them based on their protected class statuses from seeing paid Ads for housing that they assume (again, with no facts alleged in support) were available and would have otherwise met their criteria. Plaintiffs’ claim that Facebook denied them access to unidentified Ads is the sort of generalized grievance that is insufficient to support standing. See, e.g., Carroll v. Nakatani, 342 F.3d 934, 940 (9th Cir. 2003) (“The Supreme Court has repeatedly refused to recognize a generalized grievance against allegedly illegal government conduct as sufficient to confer standing” and when “a government actor discriminates on the basis of race, the resulting injury ‘accords a basis for standing only to those persons who are personally denied equal treatment.’” (quoting Allen v. Wright, 468 U.S. 737, 755 (1984)).9 Having failed to plead facts supporting a plausible injury in fact sufficient to confer standing on any plaintiff, the TAC is DISMISSED with prejudice>.

This is the Northern District of California, August 20, 2021, Case 3:19-cv-05081-WHO, Vargas v. Facebook.

The court then adds that, even setting the above aside, Facebook would be protected by the § 230 CDA safe harbor, notwithstanding the well-known 2008 Roommates precedent, from which the case at hand differs:

<<Roommates is materially distinguishable from this case based on plaintiffs’ allegations in the TAC that the now-defunct Ad Targeting process was made available by Facebook for optional use by advertisers placing a host of different types of paid advertisements.10 Unlike in Roommates where use of the discriminatory criteria was mandated, here use of the tools was neither mandated nor inherently discriminatory given the design of the tools for use by a wide variety of advertisers.

In Dyroff, the Ninth Circuit concluded that tools created by the website creator there, “recommendations and notifications” the website sent to users based on the users’ inquiries that ultimately connected a drug dealer and a drug purchaser did not turn the defendant who controlled the website into a content creator unshielded by CDA immunity. The panel confirmed that the tools were “meant to facilitate the communication and content of others. They are not content in and of themselves.” Dyroff, 934 F.3d 1093, 1098 (9th Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020); see also Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124 (9th Cir. 2003) (where website “questionnaire facilitated the expression of information by individual users” including proposing sexually suggestive phrases that could facilitate the development of libelous profiles, but left “selection of the content [] exclusively to the user,” and defendant was not “responsible, even in part, for associating certain multiple choice responses with a set of physical characteristics, a group of essay answers, and a photograph,” website operator was not information content provider falling outside Section 230’s immunity); Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1197 (N.D. Cal. 2009) (no liability based on Google’s use of “Keyword Tool,” that employs “an algorithm to suggest specific keywords to advertisers”).  

Here, the Ad Tools are neutral. It is the users “that ultimately determine what content to post, such that the tool merely provides ‘a framework that could be utilized for proper or improper purposes, . . . .’” Roommates, 521 F.3d at 1172 (analyzing Carafano). Therefore, even if the plaintiffs could allege facts supporting a plausible injury, their claims are barred by Section 230.>>

(news and link to the ruling from Eric Goldman's blog)

More on online yearbooks that use former students' personal data

In Knapke v. PeopleConnect Inc., August 10, 2021, a Washington district court decides a dispute over the right of publicity allegedly exploited improperly by the yearbook service Classmates (C.) (in this case by displaying name and image in advertisements).

C. publishes school and university yearbooks, partly free of charge (but with advertising) and partly for a fee.

C. defends itself strenuously, but the court denies its motion to dismiss.

The safe harbor defense under § 230 CDA is rejected, since the material is C.'s own rather than third parties'.

See also C.'s detailed defenses.

The most interesting is based on the First Amendment: <<Classmates argues that “where a person’s name, image, or likeness is used in speech for ‘informative or cultural’ purposes, the First Amendment renders the use ‘immune’ from liability.”>> (sub F).

The court, however, rejects it.

Months ago I reported on another yearbook case, Callahan v. Ancestry.com Inc.

(news and links from Eric Goldman's blog)

Trump's court action against the digital giants that banned him from social media (more on social networks and the First Amendment)

Techdirt.com publishes Trump's July 7, 2021 complaint against Facebook (Fb), which banned him in recent months. It is a class action.

The direct link is here.

The complaint is interesting; here I note only a few points on the long-standing question of the relationship between social networks and the First Amendment.

The introduction summarizes the whole pleading, pp. 1-4.

At p. 6 ff. there is a description of how Fb and social media work: of particular interest are the allegation of coordination between Fb and Twitter (§ 34) and the CENTRA platform for comprehensive user monitoring, i.e. covering their activity on other platforms as well (§ 36 ff.).

Parts III-IV-V contain the allegation of coordination (including coerced coordination, sub III, § 56) between the Federal Government and the platforms. This prepares the central point that follows: Fb's conduct constitutes <State action> and therefore it may not censor free speech:

<<In censoring the specific speech at issue in this lawsuit and deplatforming Plaintiff, Defendants were acting in concert with federal officials, including officials at the CDC and the Biden transition team.
151. As such, Defendants’ censorship activities amount to state action.
152. Defendants’ censoring the Plaintiff’s Facebook account, as well as those Putative Class Members, violates the First Amendment to the United States Constitution because it eliminates the Plaintiffs and Class Member’s participation in a public forum and the right to communicate to others their content and point of view.
153. Defendants’ censoring of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes viewpoint and content-based restrictions on the Plaintiffs’ and Putative Class Members’ access to information, views, and content otherwise available to the general public.
154. Defendants’ censoring of the Plaintiff and Putative Class Members violates the First Amendment because it imposes a prior restraint on free speech and has a chilling effect on social media Users and non-Users alike.
155. Defendants’ blocking of the Individual and Class Plaintiffs from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on the Plaintiff and Putative Class Members’ ability to petition the government for redress of grievances.
156. Defendants’ censorship of the Plaintiff and Putative Class Members from their Facebook accounts violates the First Amendment because it imposes a viewpoint and content-based restriction on their ability to speak and the public’s right to hear and respond.
157. Defendants’ blocking the Plaintiff and Putative Class Members from their Facebook accounts violates their First Amendment rights to free speech.
158. Defendants’ censoring of Plaintiff by banning Plaintiff from his Facebook account while exercising his free speech as President of the United States was an egregious violation of the First Amendment.>> (§ 159 ff. address Zuckerberg's personal role).

It follows, on this view, that the § 230 CDA safe harbor is unconstitutional:

<<167. Congress cannot lawfully induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” Norwood v. Harrison, 413 US 455, 465 (1973).
168. Section 230(c)(2) is therefore unconstitutional on its face, and Section 230(c)(1) is likewise unconstitutional insofar as it has interpreted to immunize social media companies for action they take to censor constitutionally protected speech.
169. Section 230(c)(2) on its face, as well as Section 230(c)(1) when interpreted as described above, are also subject to heightened First Amendment scrutiny as content- and viewpoint-based regulations authorizing and encouraging large social media companies to censor constitutionally protected speech on the basis of its supposedly objectionable content and viewpoint. See Denver Area Educational Telecommunications Consortium, Inc. v. FCC, 518 U.S. 727 (1996).
170. Such heightened scrutiny cannot be satisfied here because Section 230 is not narrowly tailored, but rather a blank check issued to private companies holding unprecedented power over the content of public discourse to censor constitutionally protected speech with impunity, resulting in a grave threat to the freedom of expression and to democracy itself; because the word “objectionable” in Section 230 is so ill-defined, vague and capacious that it results in systematic viewpoint-based censorship of political speech, rather than merely the protection of children from obscene or sexually explicit speech as was its original intent; because Section 230 purports to immunize social media companies for censoring speech on the basis of viewpoint, not merely content; because Section 230 has turned a handful of private behemoth companies into “ministries of truth” and into the arbiters of what information and viewpoints can and cannot be uttered or heard by hundreds of millions of Americans; and because the legitimate interests behind Section 230 could have been served through far less speech-restrictive measures.
171. Accordingly, Plaintiff, on behalf of himself and the Class, seeks a declaration that Section 230(c)(1) and (c)(2) are unconstitutional insofar as they purport to immunize from liability social media companies and other Internet platforms for actions they take to censor constitutionally protected speech>>.

As announced, he has also filed similar actions against Twitter and against Google/YouTube and their respective CEOs (respective links provided by www.theverge.com).