Salesforce's software, used to optimize the management of a marketplace platform, is covered by the safe harbour under § 230 CDA

Backpage.com (a classifieds platform rivaling Craigslist) also carried many ads of a sexual nature.

Following one of these ads, a minor (thirteen at the time!) fell victim to predators.

Together with her mother, she then sued Salesforce (hereinafter: S.) for having collaborated with Backpage and drawn economic benefit from the engagements it received from Backpage, relating to assistance in managing online contacts with Backpage's users.

The Northern District of Illinois, Eastern Division, Case 1:20-cv-02335, G.G. (minor) v. Salesforce.com Inc., May 16, 2022, grants S.'s (inevitable) defense that the aforesaid safe harbour applies.

The point is addressed with sound analysis under I Section 230, pp. 6-24.

The court recognizes that S. is an interactive computer service, under A, p. 8 ff.: hardly contestable.

It also holds that S. is being sued as a publisher, under B, p. 13 ff.: a far less obvious assertion.

One who assists in another's wrongful act (whether that other is a publisher, as Backpage probably was, or not) can hardly himself be deemed a publisher: unless one holds that he is such because his conduct must be characterized under the same legal title as that of the party he assists (compare, in Italian law, the long-standing question of the basis of liability for unfair competition under Art. 2598 of the Civil Code on the part of a third party lacking the status of competing entrepreneur).

(News of and link to the decision from Prof. Eric Goldman's blog)

Backpage.com's website is currently under seizure by U.S. law enforcement; a seizure notice appears in place of the site.

More on the applicability of the safe harbour under § 230 CDA to the removal/suspension of the plaintiff's own content

In a suit brought by a dissident from an Arab government, over the failure to protect his account from that government's hackers and its subsequent suspension, the Northern District of California rules on May 20, 2022, Case 3:21-cv-08017-EMC, Al-Hamed v. Twitter et al.

The claims asserted were many: here I recall only Twitter's (Tw.) defense based on the immunity at issue.

The court grants Twitter the safe harbour under § 230(c)(1), its three requirements being met:

– that the defendant be an interactive computer service provider,

– that the defendant be sued as a publisher/speaker,

– that the claim concern content not of Tw. but of third parties.

This last is the least clear point (usually the removal/suspension concerns material offensive to the plaintiff and uploaded by third parties): but the court clarifies that suspension of the plaintiff's content is by definition suspension of content not belonging to Tw. and hence of third parties (third parties relative to Tw. alone, then, certainly not relative to the plaintiff).

It notes, however, that divergent opinions have been issued: <<Some courts in other districts have declined to extend Section 230(c)(1) to cases in which the user brought claims based on their own content, not a third party’s, on the ground that it would render the good faith requirement of Section 230(c)(2) superfluous. See, e.g., e-ventures Worldwide, LLC v. Google, Inc., No. 2:14-cv-646-FtM-PAM-CM, 2017 WL 2210029, at *3 (M.D. Fl. Feb. 8, 2017). However, although a Florida court found the lack of this distinction to be problematic, it also noted that other courts, including those in this district, “have found that CDA immunity attaches when the content involved was created by the plaintiff.” Id. (citing Sikhs for Just., Inc. v. Facebook, Inc., 697 F. App’x 526 (9th Cir. 2017) (affirming dismissal of the plaintiff’s claims based on Facebook blocking its page without an explanation under Section 230(c)(1))>> (and other cases cited).

This is the most interesting passage on the topic.

(News of and link to the decision from Prof. Eric Goldman's blog).

Is one who retweets a harmful post covered by the safe harbour under § 230 CDA? Apparently yes

The New Hampshire Supreme Court, opinion of May 11, 2022, Hillsborough-Northern Judicial District No. 2020-0496, Banaian v. Bascom et al., addresses the issue and answers in the affirmative.

At a school north of Boston, a student had hacked the school's website and inserted offensive posts suggesting that a teacher was “sexually pe[r]verted and desirous of seeking sexual liaisons with Merrimack Valley students and their parents.”

Another student tweeted the post, and others then retweeted that first tweet.

The teacher sued the retweeters, who however invoked the safe harbour under § 230(c) CDA, a provision which reads:

<<c) Protection for “Good Samaritan” blocking and screening of offensive material.

(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.>>.

The legal question is whether the students in the case sub iudice fall within the concept of "user".

The Supreme Court confirms that they do. Indeed, it would be quite hard to reason otherwise.

Precisely: << We are persuaded by the reasoning set forth in these cases. The plaintiff identifies no case law that supports a contrary result. Rather, the plaintiff argues that because the text of the statute is ambiguous, the title of section 230(c) — “Protection for ‘Good Samaritan’ blocking and screening of offensive material” — should be used to resolve the ambiguity. We disagree, however, that the term “user” in the text of section 230 is ambiguous. See Webster’s Third New International Dictionary 2524 (unabridged ed. 2002) (defining “user” to mean “one that uses”); American Heritage Dictionary of the English Language 1908 (5th ed. 2011) (defining “user” to mean “[o]ne who uses a computer, computer program, or online service”). “[H]eadings and titles are not meant to take the place of the detailed provisions of the text”; hence, “the wise rule that the title of a statute and the heading of a section cannot limit the plain meaning of the text.” Brotherhood of R.R. Trainmen v. Baltimore & O.R. Co., 331 U.S. 519, 528-29 (1947). Likewise, to the extent the plaintiff asserts that the legislative history of section 230 compels the conclusion that Congress did not intend “users” to refer to individual users, we do not consider legislative history to construe a statute which is clear on its face. See Adkins v. Silverman, 899 F.3d 395, 403 (5th Cir. 2018) (explaining that “where a statute’s text is clear, courts should not resort to legislative history”).

Despite the plaintiff’s assertion to the contrary, we conclude that it is evident that section 230 of the CDA abrogates the common law of defamation as applied to individual users. The CDA provides that “[n]o cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” 47 U.S.C. § 230(e)(3). We agree with the trial court that the statute’s plain language confers immunity from suit upon users and that “Congress chose to immunize all users who repost[] the content of others.” That individual users are immunized from claims of defamation for retweeting content that they did not create is evident from the statutory language. See Zeran v. America Online, Inc., 129 F.3d 327, 334 (4th Cir. 1997) (explaining that the language of section 230 makes “plain that Congress’ desire to promote unfettered speech on the Internet must supersede conflicting common law causes of action”).
We hold that the retweeter defendants are “user[s] of an interactive computer service” under section 230(c)(1) of the CDA, and thus the plaintiff’s claims against them are barred. See 47 U.S.C. § 230(e)(3). Accordingly, we uphold the trial court’s granting of the motions to dismiss because the facts pled in the plaintiff’s complaint do not constitute a basis for legal relief.
>>

(News of and link to the decision from Prof. Eric Goldman's blog)

Blocking a Twitter account over deceptive or misleading posts is covered by the safe harbour under § 230 CDA

The Northern District of California, by order of April 29, 2022, No. C 21-09818 WHA, Berenson v. Twitter, decides a claim alleging unlawful blocking of an account over misleading posts, following Twitter's new five-strike policy on COVID-19.

And it rejects the claim, recognizing the safe harbour under § 230(c)(2)(A) of the CDA.

The plaintiff's allegations about Twitter's lack of good faith are to no avail: << With the exception of the claims for breach of contract and promissory estoppel, all claims in this action are barred by 47 U.S.C. Section 230(c)(2)(A), which provides, “No provider or user of an interactive computer service shall be held liable on account of — any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” For an internet platform like Twitter, Section 230 precludes liability for removing content and preventing content from being posted that the platform finds would cause its users harm, such as misinformation regarding COVID-19. Plaintiff’s allegations regarding the leadup to his account suspension do not provide a sufficient factual underpinning for his conclusion Twitter lacked good faith. Twitter constructed a robust five-strike COVID-19 misinformation policy and, even if it applied those strikes in error, that alone would not show bad faith. Rather, the allegations are consistent with Twitter’s good faith effort to respond to clearly objectionable content posted by users on its platform. See Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1105 (9th Cir. 2009); Domen v. Vimeo, Inc., 433 F. Supp. 3d 592, 604 (S.D.N.Y. 2020) (Judge Stewart D. Aaron)>>.

The claims based on breach of contract and promissory estoppel, by contrast, do not fall within the cited immunity (so the case proceeds on those).

The claim based on violation of free speech is likewise rejected for the usual reason of lack of state action, Tw. being a private entity: <<Aside from Section 230, plaintiff fails to even state a First Amendment claim. The free speech clause only prohibits government abridgement of speech — plaintiff concedes Twitter is a private company (Compl. ¶15). Manhattan Cmty. Access Corp. v. Halleck, 139 S. Ct. 1921, 1928 (2019). Twitter’s actions here, moreover, do not constitute state action under the joint action test because the combination of (1) the shift in Twitter’s enforcement position, and (2) general cajoling from various federal officials regarding misinformation on social media platforms do not plausibly assert Twitter conspired or was otherwise a willful participant in government action. See Heineke v. Santa Clara Univ., 965 F.3d 1009, 1014 (9th Cir. 2020). For the same reasons, plaintiff has not alleged state action under the governmental nexus test either, which is generally subsumed by the joint action test. Naoko Ohno v. Yuko Yasuma, 723 F.3d 984, 995 n.13 (9th Cir. 2013). Twitter “may be a paradigmatic public square on the Internet, but it is not transformed into a state actor solely by providing a forum for speech.” Prager Univ. v. Google LLC, 951 F.3d 991, 997 (9th Cir. 2020) (cleaned up, quotation omitted).>>

(News of and link to the decision from Prof. Eric Goldman's blog)

Retweeting with added defamatory comments is not protected by the safe harbour under § 230 CDA

Byrne is sued for defamation by US Dominion (a U.S. company supplying software for managing electoral processes) over offensive statements and tweets.

He seeks the safe harbour defense under § 230 CDA, but it fails him: he is in fact a content provider.

Merely tweeting a link (to defamatory material) might be covered: but not the accompanying comments.

Thus the District Court for the District of Columbia, April 20, 2022, Case 1:21-cv-02131-CJN, US Dominion v. Byrne: <<A so-called “information content provider” does not enjoy immunity under § 230. Klayman v. Zuckerberg, 753 F.3d 1354, 1356 (D.C. Cir. 2014). Any “person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service” qualifies as an “information content provider.” 47 U.S.C. § 230(f)(3); Bennett, 882 F.3d at 1166 (noting a dividing line between service and content in that “‘interactive computer service’ providers—which are generally eligible for CDA section 230 immunity—and ‘information content provider[s],’ which are not entitled to immunity”).
While § 230 may provide immunity for someone who merely shares a link on Twitter, Roca Labs, Inc. v. Consumer Opinion Corp., 140 F. Supp. 3d 1311, 1321 (M.D. Fla. 2015), it does not immunize someone for making additional remarks that are allegedly defamatory, see La Liberte v. Reid, 966 F.3d 79, 89 (2d Cir. 2020). Here, Byrne stated that he “vouch[ed] for” the evidence proving that Dominion had a connection to China. See Compl. ¶ 153(m). Byrne’s alleged statements accompanying the retweet therefore fall outside the ambit of § 230 immunity>>.

Not a difficult question: whether merely posting a link is protected is an interesting issue; that injurious accompanying comments make their author a content provider is, instead, certain.

Is breach of contract covered by the editorial safe harbour under § 230 CDA?

The question is touched on by the New York Appellate Division, March 22, 2022, 2022 NY Slip Op 01978, Word of God Fellowship, Inc. v Vimeo, Inc., where the plaintiff sues Vimeo after suffering the removal of videos as misleading about vaccine safety.

In my view the important question gets a negative answer: the platform cannot invoke the safe harbour if it breaches a contractual rule it freely undertook.

It is different if, as in the case at hand, the hosting contract provides for a power of removal: but then the right to remove rests on the contract, not on the safe harbour immunity.

(News of the decision and link from Prof. Eric Goldman's blog)

Defamation for publishing on Facebook the aggressive emails one received is not covered by the safe harbour under § 230 CDA

Defamation for having published on Facebook the aggressive/offensive emails one received is not covered by the safe harbour under § 230 CDA: essentially because the materials are not third-party materials that those parties wanted published on the internet; publication was the email recipient's own choice.

This is the decision of the Eastern District of California, March 3, 2022, Crowley et al. v. Faison et al., Case 2:21-cv-00778-MCE-JDP.

The case concerns the publication, by the local Sacramento leader of the Black Lives Matter movement, of emails she had received.

Relevant passage: <<Defendants nonetheless ignore certain key distinctions that make their reliance on the Act problematic.

Immunity under § 230 requires that the third-party provider, here the individual masquerading as Karra Crowley, have “provided” the emails to Defendants “for use on the Internet or another interactive computer service.” Batzel, 333 F.3d at 1033 (emphasis in original).

Here, as Plaintiffs point out, the emails were sent directly to BLM Sacramento’s general email address. “[I]f the imposter intended for his/her emails to be posted on BLM Sacramento’s Facebook page, the imposter could have posted the email content directly to the Facebook page,” yet did not do so. Pls.’ Opp to Mot. to Strike, 18:9-11 (emphasis in original). Those circumstances raise a legitimate question as to whether the imposter indeed intended to post on the internet, and without a finding to that effect the Act’s immunity does not apply. These concerns are further amplified by the fact that Karra Crowley notified Defendants that she did not author the emails, and they did not come from her email address within 24 hours after the last email attributed to her was posted. Defendants nonetheless refused to take down the offending posts from its Facebook page, causing the hateful and threatening messages received by Plaintiffs to continue.

As set forth above, one of the most disgusting of those messages, in which the sender graphically described how he or she was going to kill Karra Crowley and her daughter, was sent nearly a month later. In addition, while the Act does provide immunity for materials posted on the internet which the publisher had no role in creating, here Defendants did not simply post the emails. They went on to suggest that Karra Crowley “needs to be famous” and represented that her “information has been verified”, including business and home addresses. Compl., ¶¶ 13-14. It is those representations that Plaintiffs claim are libelous, particularly after Defendants persisted in allowing the postings to remain even after they had been denounced as false, a decision which caused further harassment and threats to be directed towards Plaintiffs.

As the California Supreme Court noted in Barrett, Plaintiffs remain “free under section 230 to pursue the originator of a defamatory Internet publication.” 40 Cal. 4th at 6>>

Given the wording of the provision, it is hard to fault the California judge.

Note that the party invoking the safe harbour is not a digital platform, as is usually the case, but one of its users: perfectly legitimate, however, given the statutory text.

(News of and link to the decision from Prof. Eric Goldman's blog)

Safe harbour under § 230 CDA for failing to flag and failing to remove sensitive material? Yes

The mother of a child, whose sexually suggestive images she had noticed uploaded to TikTok, sues the platform for the following wrongs: it <<did not put any warning on any of the videos claiming they might contain sensitive material; did not remove any of the videos from its platform; did not report the videos to any child abuse hotline; did not sanction, prevent, or discourage the videos in any way from being viewed, shared, downloaded or disbursed in any other way; and “failed to act on their own policies and procedures along with State and Federal Statutes and Regulations”>>.

The Northern District of Illinois, Western Division, February 28, 2022, Case No. 21 C 50129, Day v. TikTok, grants the § 230 CDA safe harbour defense raised by the platform (citing the well-known 2008 Craigslist precedent):

<<“What § 230(c)(1) says is that an online information system must not ‘be treated as the publisher or speaker of any information provided by’ someone else.” Chicago Lawyers’ Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 671 (7th Cir. 2008).
In Chicago Lawyers’, plaintiff sought to hold Craigslist liable for postings made by others on its platform that violated the anti-discrimination in advertising provision of the Fair Housing Act (42 U.S.C. § 3604(c)). The court held 47 U.S.C. § 230(c)(1) precluded Craigslist from being liable for the offending postings because “[i]t is not the author of the ads and could not be treated as the ‘speaker’ of the posters’ words, given § 230(c)(1).” Id. The court rejected plaintiff’s argument that Craigslist could be liable as one who caused the offending post to be made stating “[a]n interactive computer service ‘causes’ postings only in the sense of providing a place where people can post.” Id. “Nothing in the service craigslist offers induces anyone to post any particular listing or express a preference for discrimination.” Id. “If craigslist ‘causes’ the discriminatory notices, then, so do phone companies and courier services (and, for that matter, the firms that make the computers and software that owners use to post their notices online), yet no one could think that Microsoft and Dell are liable for ‘causing’ discriminatory advertisements.” Id. at 672. The court concluded the opinion by stating that plaintiff could use the postings on Craigslist to identify targets to investigate and “assemble a list of names to send to the Attorney General for prosecution. But given § 230(c)(1) it cannot sue the messenger just because the message reveals a third party’s plan to engage in unlawful discrimination.”>>

And so the plaintiff's claim in this specific case <<does not allege defendant created or posted the videos. It only alleges defendant allowed and did not timely remove the videos posted by someone else. This is clearly a complaint about “information provided by another information content provider” for which defendant cannot be held liable by the terms of Section 230(c)(1).>>

Hard to fault the court, in light of the wording of the provision invoked by TikTok.

(News of and link to the decision from Prof. Eric Goldman's blog)

Google is not liable for the presence of illicitly used apps in its Play Store, given the safe harbour under § 230 CDA

A former U.S. ambassador, of Jewish faith, seeks a finding of Google's liability because it allows the presence on the Play Store of a social network (Telegram) notoriously used, among others, by extremists spreading antisemitic propaganda.

In particular, he claims that G. fails to enforce its own policy binding the creators of apps on the Store.

The California court, the U.S. District Court for the Northern District of California, San Jose Division, Case No. 21-cv-00570-BLF, Ginsberg v. Google, February 18, 2022, however, grants the § 230 CDA safe harbour defense raised by Google.

Of the three requirements for the purpose (that the defendant be a service provider; that it be sued as a publisher; that the information be third-party information), the second is usually the most litigated.

But the court rightly finds it met in this case too: <<In the present case, Plaintiffs’ claims are akin to the negligence claim that the Barnes court found to be barred by Section 230. Plaintiffs’ theory is that by creating and publishing guidelines for app developers, Google undertook to enforce those guidelines with due care, and can be liable for failing to do so with respect to Telegram. As in Barnes, however, the undertaking that Google allegedly failed to perform with due care was removing offending content from the Play Store.
But removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove. Barnes, 570 F.3d at 1103. Plaintiffs in the present case do not allege the existence of a contract or indeed any interaction between themselves and Google. Plaintiffs do not allege that Ambassador Ginsberg purchased his smartphone from Google or that he downloaded Telegram or any other app from the Play Store. Thus, the Barnes court’s rationale for finding that Section 230 did not bar Barnes’ promissory estoppel claim is not applicable here.
>>

(News of and link to the decision from Prof. Eric Goldman's blog)

Safe harbour under § 230 CDA for Armslist, an online gun-sales platform? A doubtful question

Two U.S. courts deny the § 230 CDA safe harbour to the gun-sales platform Armslist, since these were not actions treating it as editor/publisher/speaker.

The cases concerned liability following killings committed with firearms purchased on Armslist: the platform was allegedly negligent in permitting such uncontrolled commerce, having implemented inadequate software underlying its marketplace.

The decisions come from two courts of the Eastern District of Wisconsin:

1) Bauer and Estate of Paul Bauer v. Armslist, November 19, 2021, Case 20-cv-215-pp, under V.B: <<The court does not mean to imply that §230(c) never can provide protection from liability for entities like Armslist. But that protection is not, as Armslist has argued, a broad grant of immunity. It is a fact-based inquiry. For example, the Seventh Circuit affirmed the district court’s grant of Craigslist’s motion for judgment on the pleadings in Chi. Lawyers’ Comm. The court recounted that “[a]lmost in passing,” the plaintiff had alleged that Craiglist was liable for violations of the Fair Housing Act because although it had not created the discriminatory posts, it had “caused” the discriminatory third-party posts to be made. Chi. Lawyers’ Comm., 519 F.3d at 671. Emphasizing that Craigslist was not the author of the discriminatory posts, the Seventh Circuit found that the only causal connection between Craigslist and the discriminatory posts was the fact that “no one could post a discriminatory ad if craiglist did not offer a forum.” Id. The court stated that “[n]othing in the service craigslist offers induces anyone to post any particular listing or express a preference for discrimination; for example, craigslist does not offer a lower price to people who include discriminatory statements in their postings.” Id. at 671-72. For that reason, the court concluded that “given § 230(c)(1) [the plaintiff] cannot sue the messenger just because the message reveals a third party’s plan to engage in unlawful discrimination.” Id. at 672.

The plaintiffs in this case have not raised claims of defamation or obscenity or copyright infringement—the types of claims that would require the court to determine whether Armslist is a “publisher” or “speaker” of content, rather than a provider of an interactive computer service that hosts content created by third parties. None of the nine claims in the second amended complaint challenge the content of ads posted on the Armslist.com website—not even Caldwell’s ad. The plaintiffs have alleged that Armslist should have structured the website differently—should have included safeguards and screening/monitoring provisions, should have been aware of the activity of individuals like Caldwell, should have implemented measures that would prevent illegal firearms dealers from using the website to sell guns without a license.

In declining to dismiss the complaint on §230(c) grounds, the court in Webber v. Armslist recently stated that because the plaintiff in that case had alleged “negligence and public nuisance based on Defendants’ affirmative conduct,” it appeared that “§ 230 is not even relevant to this case.” Webber v. Armslist, No. 20-cv-1526, 2021 WL 5206580, at *6 (E.D. Wis. Nov. 9, 2021). This court agrees. Section 230 does not immunize Armslist from suit and the court will not dismiss the complaint on that basis.>>

2) Webber v. Armslist, November 9, 2021, Case 20-C-1526, more detailed on this point: <<But even if § 230 applies to this type of case, Plaintiff’s claims do not seek to treat Defendants as the “publisher or speaker” of the post in question. Here, Plaintiff seeks to hold Defendants liable for their “role in developing or co-developing [their] own content.” Dkt. No. 13 at 18. Specifically, Plaintiff faults Defendants for failing to prohibit criminals from accessing or buying firearms through Armslist.com; actively encouraging, assisting, and facilitating illegal firearms transactions through their various design decisions; failing to require greater details from users, such as providing credit-card verified evidence of users’ identities; failing to require that sellers certify under oath that they are legal purchasers; and failing to provide regularly updated information regarding applicable firearms laws to its users, among many other things. Compl. at ¶ 165. In essence, the complaint “focuses primarily on Armslist’s own conduct in creating the high-risk gun market and its dangerous features,” not on the post in question. Dkt. No. 13 at 23. This type of claim, then, does not seek to treat Defendants as the “publisher or speaker” of the post that led to Schmidt’s killer obtaining a firearm; rather, it seeks to hold Defendants liable for their own misconduct in negligently and recklessly creating a service that facilitates the illegal sale of firearms. 47 U.S.C. § 230(c)(1). For these reasons, the Court concludes that § 230 does not immunize Defendants from liability in this case>>.

Prof. Eric Goldman (from whose blog I took the news and the links to the decisions) observes, however, that the Wisconsin Supreme Court in 2019, in Daniel v. Armslist, had instead granted the safe harbour.