Trib. Roma on provider liability for materials uploaded by users

On IPKat, Eleonora Rosati reports on (and links to) two 2023 decisions of the Trib. Roma sez. spec. impr. on the subject, both between RTI (plaintiff) and a file-hosting platform (Vimeo and VKontakte respectively).

The claims were dismissed, in light of the Court of Justice's 2021 Cyando precedent.

The decisions are:

Trib. Roma 07.04.2023 no. 5700/2023, RG 59780/2017, reporting judge Picaro, RTI v. Vimeo;

Trib. Roma 12.10.2023 no. 14531/2023, RG 4341/2027, reporting judge Cavaliere, RTI v. VKontakte.

In Rosati's view, the courts' reading of the European precedent is wrong.

Here I will only point out that (i) as a matter of Italian civil law, the distinction between primary and secondary/indirect liability has no legal standing in cases of unlawful materials uploaded by users, and (ii) the safe harbour covers any liability arising therefrom.

The most important point is that, to lose the safe harbour, the provider must have had knowledge of the specific infringements sued upon, not merely of their generic possibility.

A further question is the level of detail required of the rightholder's notice to the provider. For the Tribunal it must be high: rightly so, given the principle onus probandi incumbit ei qui dicit, a procedural rule that must also be applied to the notice at issue (nor is there any reason to burden the provider with laborious and uncertain activities, unless, for instance through technological progress, they are no longer such).

The difference between inapplicability of the safe harbour and a finding of liability

The U.S. District Court for the Western District of Wisconsin, 31.03.2023, case No. 21-cv-320-wmc, Hopson + Bluetype v. Google + Does 1 and 2, has the difference between the two concepts quite clear: that the safe harbour cannot be invoked does not mean that liability is affirmatively established (even if, in practice, it will be likely).

Some of our commentators (in scholarship and case law) do not have it equally clear.

The case concerned the copyright safe harbour in a notice-and-takedown procedure, and in particular an alleged violation of the procedure that should have led Google to "put back up" materials previously "taken down" (§ 512(g) of the DMCA).

<<Here, plaintiffs allege that defendant Google failed to comply with § 512(g)’s strictures by: (1) redacting contact information from the original takedown notices; (2) failing to restore the disputed content within 10 to 14 business days of receiving plaintiffs’ counter notices; and (3) failing to forward plaintiffs’ counter notices to the senders of the takedown notices. As Google points out, however, its alleged failure to comply with § 512(g) does not create direct liability for any violation of plaintiffs’ rights. It merely denies Google a safe harbor defense should plaintiffs bring some other claim against the ISP for removing allegedly infringing material, such as a state contract or tort law claim. Martin, 2017 WL 11665339, at *3-4 (§ 512(g) does not create any affirmative cause of action; it creates a defense to liability); see also Alexander v. Sandoval, 532 U.S. 275, 286-87 (2001) (holding plaintiffs may sue under a federal statute only where there is an express or implied private right of action). So, even if Google did not follow the procedure entitling it to a safe harbor defense in this case, the effect is disqualifying it from that defense, not creating liability under § 512(g) of the DMCA for violating plaintiffs’ rights.>>

Nothing on such a procedure in the EU yet: Articles 16-17 of the DSA (Reg. EU 2022/2065) do not address it (they appear to leave it to contractual autonomy), nor does the copyright-specific provision, Article 17 of Directive (EU) 2019/790.
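For readers unfamiliar with the mechanics, the § 512(g) put-back flow at issue can be modeled as a small state machine: the ISP forwards the counter-notice to the original notifier, then restores the content between 10 and 14 business days later unless the notifier files suit. Here is a minimal Python sketch; the names and structure are my own illustration of the statutory steps, not any real implementation.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Status(Enum):
        TAKEN_DOWN = auto()
        RESTORED = auto()

    @dataclass
    class HostedItem:
        url: str
        status: Status = Status.TAKEN_DOWN  # removed after a takedown notice
        days_since_counter_notice: Optional[int] = None

    def receive_counter_notice(item: HostedItem) -> None:
        # Sec. 512(g)(2)(B): promptly forward the counter-notice to the original
        # notifier (one of the steps Google allegedly skipped) and start the clock.
        item.days_since_counter_notice = 0

    def next_business_day(item: HostedItem, notifier_filed_suit: bool) -> None:
        # Sec. 512(g)(2)(C): restore the material in 10 to 14 business days unless
        # the notifier has sued. Missing the window forfeits the defense but,
        # per the court, creates no freestanding liability under § 512(g).
        if item.days_since_counter_notice is None:
            return
        if notifier_filed_suit:
            item.days_since_counter_notice = None  # put-back obligation ends
            return
        item.days_since_counter_notice += 1
        if 10 <= item.days_since_counter_notice <= 14:
            item.status = Status.RESTORED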

(news and link from Prof. Eric Goldman's blog)

YouTube is not jointly liable for copyright infringements consisting of repeated uploads to its platform

YouTube is not jointly liable for copyright infringements arising from repeated uploads to its platform. So holds the U.S. District Court for the Southern District of Florida, 16 May 2023, Case No. 21-21698-Civ-GAYLES/TORRES, Athos Overseas Ltd. v. YouTube-Google.

The plaintiff's claim:

<<According to Plaintiff, Defendants are liable under direct and secondary infringement theories for YouTube’s failure to prevent the systematic re-posting of Plaintiff’s copyrighted movies to its platform. Plaintiff contends that YouTube has turned a blind eye to rampant infringement of Athos’ copyrights by refusing to employ proprietary video-detection software to block or remove from its website potentially infringing clips, and not just clips specifically identified by URL in Plaintiff’s DMCA takedown notices. In essence, Plaintiff argues that evidence of YouTube’s advanced video detection software, in conjunction with the thousands of takedown notices Athos has tendered upon YouTube, give rise to genuine issues of fact as to whether Defendants have forfeited the DMCA’s safe harbor protections.>>

The claim is rejected: the provider does not lose its safe harbour under 17 U.S. Code § 512, for lack of the knowledge element:

<<Indeed, in Viacom the Second Circuit rejected identical arguments to the ones asserted here by Athos, which were presented in a lawsuit brought by various television networks against YouTube for the unauthorized display of approximately 79,000 video clips that appeared on the website between 2005 and 2008. Viacom, 676 F.3d at 26. Among other things, the Viacom plaintiffs argued that the manner in which YouTube employed its automated video identification tools—including limiting its access to certain users—removed the ISP from the safe harbor. Id. at 40–41. Yet, the court unequivocally rejected plaintiffs’ arguments, holding that the invocation of YouTube’s technology as a source of disqualifying knowledge must be assessed in conjunction with the express mandate of § 512(m) that “provides that safe harbor protection cannot be conditioned on ‘a service provider monitoring its service or affirmatively seeking facts indicating infringing activity[.]’” Viacom, 676 F.3d at 41 (quoting 17 U.S.C. § 512(m)(1))>>

Then:

<<Plaintiff conflates two concepts that are separate and distinct in the context of YouTube’s copyright protection software: automated video matches and actual infringements. As explained by YouTube’s copyright management tools representative, software-identified video matches are not necessarily tantamount to copyright infringements. [D.E. 137-7, 74:21–25]. Rather, the software detects code, audio, or visual cues that may match those of a copyrighted work, and presents those matches to the owner for inspection. Thus, while YouTube systems may be well equipped to detect video matches, the software does not necessarily have the capacity to detect copyright infringements. See id. Further, the accuracy of these automatically identified matches depends on a wide range of factors and variables. [Id. at 75:1–10, 108:2–110:17, 113:3–114:25]. That is why users, not YouTube, are required to make all determinations as to the infringing nature of software selected matches. [Id.].
Second, Plaintiff does not point to any evidence showing that YouTube, through its employees, ever came into contact, reviewed, or interacted in any way with any of the purportedly identified video matches for which Athos was allegedly required to send subsequent DMCA takedown notices (i.e., the clips-in-suit). As explained by YouTube’s product manager, the processes of uploading, fingerprinting, scanning, and identifying video matches is fully automated, involving minimal to no human interaction on the part of YouTube. [Id. at 68:22–69:18, 118:17–119]. The record shows that upon upload of a video to YouTube, a chain of algorithmic processes is triggered, including the automated scanning and matching of potentially overlapping content. If the software detects potential matches, that list of matches is automatically directed towards the copyright owner, by being displayed inside the user’s YouTube interface. [Id. at 68:22–70:25]. Therefore, the record only reflects that YouTube does not rely on human involvement during this specific phase of the scanning and matching detection process, and Plaintiff does not proffer any evidence showing otherwise>>.
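The fully automated flow the deposition describes (upload triggers fingerprinting and scanning; candidate matches are surfaced to the rightholder; only the rightholder decides whether a match is an infringement) might be sketched as follows. This is a toy model under my own assumptions, with exact-hash matching standing in for perceptual fingerprinting; it is not YouTube's actual system.

    import hashlib
    from dataclasses import dataclass
    from typing import Dict, List

    @dataclass
    class PotentialMatch:
        uploaded_video_id: str
        reference_work: str

    def fingerprint(content: bytes) -> str:
        # Stand-in for real perceptual audio/video fingerprinting.
        return hashlib.sha256(content).hexdigest()

    def on_upload(video_id: str, content: bytes,
                  reference_index: Dict[str, str]) -> List[PotentialMatch]:
        # Automated step: scan the new upload against rightholders'
        # reference fingerprints and collect candidate matches.
        fp = fingerprint(content)
        return [PotentialMatch(video_id, work)
                for ref_fp, work in reference_index.items() if ref_fp == fp]

    def route_to_rightholder(matches: List[PotentialMatch]) -> None:
        # Matches are only displayed in the owner's dashboard: no platform
        # employee reviews them, and a match is not yet a finding of
        # infringement; that determination is left to the owner.
        for m in matches:
            print(f"{m.reference_work}: review potential match {m.uploaded_video_id}")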

In summary:

<<As the relevant case law makes clear, evidence of the technologies that ISPs independently employ to enhance copyright enforcement within their system cannot form the basis for ascribing disqualifying knowledge of unreported infringing items to the ISP. Such a conception of knowledge would contradict the plain mandate of § 512(m), “would eviscerate the required specificity of notice[,] . . . and would put the provider to the factual search forbidden by § 512(m).” Viacom, 718 F. Supp. 2d at 528. Thus, we find that Athos’ theory that specific knowledge of non-noticed infringing clips can be ascribed to Defendants by virtue of YouTube’s copyright management tools fails as a matter of law>>.

(news and link to the decision from Prof. Eric Goldman's blog)

Contract claim against Facebook partially shielded by the safe harbour under § 230 CDA

The Northern District of California, 21.09.2022, Case No. 22-cv-02366-RS, Shared.com v. Meta, addresses whether the safe harbour under § 230 CDA can be invoked when the provider is sued in contract, in its editorial capacity, over materials not its own.

The case also involved an advertising contract between the user and Facebook, a very widespread type of contract at the center of today's digital sales.

Facts: <<Shared is a partnership based out of Ontario, Canada that “creates and publishes original, timely, and entertaining [online] content.” Dkt. 21 ¶ 9. In addition to its own website, Plaintiff also operated a series of Facebook pages from 2006 to 2020. During this period, Shared avers that its pages amassed approximately 25 million Facebook followers, helped in part by its substantial engagement with Facebook’s “advertising ecosystem.” This engagement occurred in two ways. First, Shared directly purchased “self-serve ads,” which helped drive traffic to Shared.com and Shared’s Facebook pages.

Second, Shared participated in a monetization program called “Instant Articles,” in which articles from Shared.com would be embedded into and operate within the Facebook news feed; Facebook would then embed ads from other businesses into those articles and give Shared a portion of the ad revenue. Shared “invested heavily in content creation” and retained personnel and software specifically to help it maximize its impact on the social media platform. Id. ¶ 19.

Friction between Shared and Facebook began in 2018. Shared states that it lost access to Instant Articles on at least three occasions between April and November of that year. Importantly, Shared received no advance notice that it would lose access. This was contrary to Shared’s averred understanding of the Facebook Audience Network Terms (“the FAN Terms”), which provide that “[Facebook] may change, withdraw, or discontinue [access to Instant Articles] in its sole discretion and [Facebook] will use good faith efforts to provide Publisher with notice of the same.” Id. ¶ 22; accord Dkt. 21-5. Shared asserts that “notice,” as provided in the FAN Terms, obliges Facebook to provide advance notice of a forthcoming loss of access, rather than after-the-fact notice. (…)>>

Facebook (F.) then suspended the account and blocked the operation of the advertising program.

Faced with the lawsuit, F. (or rather Meta) raises the safe harbour as a preliminary defense, framing its conduct as an editorial and therefore free decision:

THE COURT: <<Defendant is only partially correct. Plaintiff raises three claims involving Defendant’s decision to suspend Plaintiff’s access to its Facebook accounts and thus “terminate [its] ability to reach its followers”: one for conversion, one for breach of contract, and one for breach of the implied covenant of good faith and fair dealing. See Dkt. 21, ¶¶ 54–63, 110–12, 119. Shared claims that, contrary to the Facebook Terms of Service, Defendant suspended Shared’s access to its Facebook pages without first determining whether it had “clearly, seriously or repeatedly breached [Facebook’s] Terms or Policies”>>.

And then: <<At bottom, these claims seek to hold Defendant liable for its decision to remove third-party content from Facebook. This is a quintessential editorial decision of the type that is “perforce immune under section 230.” Barnes, 570 F.3d at 1102 (quoting Fair Housing Council of San Fernando Valley v. Roommates.com, 521 F.3d 1157, 1170–71 (9th Cir. 2008) (en banc)). Ninth Circuit courts have reached this conclusion on numerous occasions. See, e.g., King v. Facebook, Inc., 572 F. Supp. 3d 776, 795 (N.D. Cal. 2021); Atkinson v. Facebook Inc., 20-cv-05546-RS (N.D. Cal. Dec. 7, 2020); Fed. Agency of News LLC v. Facebook, Inc., 395 F. Supp. 3d 1295, 1306–07 (N.D. Cal. 2019). To the extent Facebook’s Terms of Service outline a set of criteria for suspending accounts (i.e., when accounts have “clearly, seriously, or repeatedly” breached Facebook’s policies), this simply restates Meta’s ability to exercise editorial discretion. Such a restatement does not, thereby, waive Defendant’s section 230(c)(1) immunity. See King, 572 F. Supp. 3d at 795. Allowing Plaintiff to reframe the harm as one of lost data, rather than suspended access, would simply authorize a convenient shortcut through section 230’s robust liability limitations by way of clever pleading. Surely this cannot be what Congress would have intended. As such, these claims must be dismissed.>>

In short, the fact that the materials whose removal is challenged belong to the plaintiff/contracting party (rather than to a third-party user, as in the more frequent defamation cases) changes nothing: the safe harbour still applies whenever the requirements of § 230 CDA are met.

Is a breach of contract covered by the editorial safe harbour under § 230 CDA?

The question is touched upon by the New York Appellate Division, 22.03.2022, 2022 NY Slip Op 01978, Word of God Fellowship, Inc. v Vimeo, Inc., where the plaintiff sued Vimeo after its videos were removed as misleading on vaccine safety.

In my view, the answer to this important question is no: the platform cannot invoke the safe harbour if it breaches a contractual rule it freely undertook.

It is different if, as in the case at hand, the hosting contract provides for a power of removal: but then the right to remove rests on the contract, not on the safe harbour defense.

(news of the decision and link from Prof. Eric Goldman's blog)

Safe harbour under § 230 CDA for failure to warn of and failure to remove sensitive material? Yes.

The mother of a child, having noticed sexually suggestive images of him uploaded to TikTok, sued the platform for the following wrongs: it <<did not put any warning on any of the videos claiming they might contain sensitive material; did not remove any of the videos from its platform; did not report the videos to any child abuse hotline; did not sanction, prevent, or discourage the videos in any way from being viewed, shared, downloaded or disbursed in any other way; and “failed to act on their own policies and procedures along with State and Federal Statutes and Regulations”>>.

The Northern District of Illinois, Western Division, 28.02.2022, Case No. 21 C 50129, Day v. TikTok, upholds the safe harbour defense under § 230 CDA raised by the platform (citing the well-known 2008 Craigslist precedent):

<<“What § 230(c)(1) says is that an online information system must not ‘be treated as the publisher or speaker of any information provided by’ someone else.” Chicago Lawyers’ Committee for Civil Rights Under Law, Inc. v. Craigslist, Inc., 519 F.3d 666, 671 (7th Cir. 2008).
In Chicago Lawyers’, plaintiff sought to hold Craigslist liable for postings made by others on its platform that violated the anti-discrimination in advertising provision of the Fair Housing Act (42 U.S.C. § 3604(c)). The court held 47 U.S.C. § 230(c)(1) precluded Craigslist from being liable for the offending postings because “[i]t is not the author of the ads and could not be treated as the ‘speaker’ of the posters’ words, given § 230(c)(1).” Id. The court rejected plaintiff’s argument that Craigslist could be liable as one who caused the offending post to be made stating “[a]n interactive computer service ‘causes’ postings only in the sense of providing a place where people can post.” Id. “Nothing in the service craigslist offers induces anyone to post any particular listing or express a preference for discrimination.” Id. “If craigslist ‘causes’ the discriminatory notices, then, so do phone companies and courier services (and, for that matter, the firms that make the computers and software that owners use to post their notices online), yet no one could think that Microsoft and Dell are liable for ‘causing’ discriminatory advertisements.” Id. at 672. The court concluded the opinion by stating that plaintiff could use the postings on Craigslist to identify targets to investigate and “assemble a list of names to send to the Attorney General for prosecution. But given § 230(c)(1) it cannot sue the messenger just because the message reveals a third party’s plan to engage in unlawful discrimination.”>>

And indeed the plaintiff's complaint in this specific case <<does not allege defendant created or posted the videos. It only alleges defendant allowed and did not timely remove the videos posted by someone else. This is clearly a complaint about “information provided by another information content provider” for which defendant cannot be held liable by the terms of Section 230(c)(1).>>

It is hard to fault the court, given the wording of the provision invoked by TikTok.

(news and link to the decision from Prof. Eric Goldman's blog)

Google is not liable for the presence of apps put to unlawful use in its Play Store, given the safe harbour under § 230 CDA

A former U.S. ambassador, of Jewish faith, seeks a finding that Google is liable for allowing the presence on the Play Store of a social network (Telegram) notoriously used, among others, by extremists spreading antisemitic propaganda.

In particular, he claims that G. fails to enforce its own policy binding app developers on the Store.

The U.S. District Court for the Northern District of California, San Jose Division, Case No. 21-cv-00570-BLF, Ginsberg v. Google, 18.02.2022, however, upholds the § 230 CDA safe harbour defense raised by Google.

Of the three requirements for that purpose (that the defendant be a service provider; that it be sued as a publisher; that the information come from third parties), the second is usually the most litigated.

But the court rightly finds it met in this case too: <<In the present case, Plaintiffs’ claims are akin to the negligence claim that the Barnes court found to be barred by Section 230. Plaintiffs’ theory is that by creating and publishing guidelines for app developers, Google undertook to enforce those guidelines with due care, and can be liable for failing to do so with respect to Telegram. As in Barnes, however, the undertaking that Google allegedly failed to perform with due care was removing offending content from the Play Store. But removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove. Barnes, 570 F.3d at 1103. Plaintiffs in the present case do not allege the existence of a contract or indeed any interaction between themselves and Google. Plaintiffs do not allege that Ambassador Ginsberg purchased his smartphone from Google or that he downloaded Telegram or any other app from the Play Store. Thus, the Barnes court’s rationale for finding that Section 230 did not bar Barnes’ promissory estoppel claim is not applicable here.>>

(news and link to the decision from Prof. Eric Goldman's blog)

Safe harbour for YouTube over the dissemination of images of a natural person

The Dallas court, 17.05.2021, Kandance A. Wells v. YouTube, civil action No. 3:20-CV-2849-S-BH, decides a damages claim (for $504,000.00) based on the unlawful dissemination (by third-party users) of the plaintiff's image, aimed at personal intimidation.

Several statutes were invoked as violated.

Inevitably, Y. raises the safe harbour defense under § 230 CDA, the only aspect examined here.

The court upholds the defense, and rightly so.

It examines the usual three requirements and, as usual, the most interesting is the third (that the claim treat the defendant as publisher or speaker): <<Plaintiff is suing Defendant for “violations to [her] personal safety as a general consumer” under the CPSA, the FTCA, and the “statutes preventing unfair competition, deceptive acts under tort law, and/or the deregulation of trade/trade practices” based on the allegedly derogatory image of her that is posted on Defendant’s website. (See doc. 3 at 1.) All her claims against Defendant treat it as the publisher of that image. See, e.g., Hinton, 72 F. Supp. 3d at 690 (quoting MySpace, 528 F.3d at 418) (“[T]he Court finds that all of the Plaintiff’s claims against eBay arise or ‘stem[ ] from the [ ] publication of information [on www.ebay.com] created by third parties….’”); Klayman, 753 F.3d at 1359 (“[I]ndeed, the very essence of publishing is making the decision whether to print or retract a given piece of content—the very actions for which Klayman seeks to hold Facebook liable.”). Accordingly, the third and final element is satisfied>>.

(news and link to the decision from Eric Goldman's blog)

More on the safe harbour under § 230 CDA and Twitter

A model (M.) discovers some intimate photos of her published on Twitter (T.) by a publisher (E.) active in that sector.

She therefore asks T. to remove the photos and the tweets and to suspend the account.

T. obliges her only on the first point.

M. then sues T. and E., asserting: <<(1) copyright infringement; (2) a violation of FOSTA-SESTA, 18 U.S.C. 1598 (named for the Allow States and Victims to Fight Online Sex Trafficking Act and Stop Online Sex Trafficking Act bills); (3) a violation of the right of publicity under Cal. Civ. Code § 3344; (4) false advertising under the Lanham Act; (5) false light invasion of privacy; (6) defamation, a violation under Cal. Civ. Code § 44, et seq.; (7) fraud in violation of California’s Unfair Competition Law, Cal. Bus. & Prof. Code § 17200 et seq.; (8) negligent and intentional infliction of emotional distress; and (9) unjust enrichment>>

The decision is U.S. District Court for the Central District of California, 19.02.2021, case CV 20-10434-GW-JEMx, Morton v. Twitter + 1.

Needless to say, T. raises the § 230 CDA defense against all claims except the copyright one.

The requirement of whether the plaintiff treats the defendant as publisher or speaker is always problematic: substance matters, not the label used by the plaintiff. That is, the claim must be characterized by the court, p. 5.

M. tries to argue that E. is not a third party but a T. affiliate. The court rejects the argument, though in effect without giving reasons, pp. 5-6. All the more so since it would have been more appropriate to address the point under the requirement that the material come from a "third party", not under whether the defendant is treated as a publisher.

The most interesting point is the coverage of the contract claim by § 230, pp. 7 ff.

M. argues it is not covered: but in vain, because the court dismisses on safe harbour grounds, for two reasons, pp. 7-8:

First, because M. did not identify a contractual clause obliging T. to suspend offending accounts: there is such a clause, but it is merely aspirational, not binding.

Second, because the request to suspend the account implies an editorial decision, so the defense applies: <<“But removing content is something publishers do, and to impose liability on the basis of such conduct necessarily involves treating the liable party as a publisher of the content it failed to remove.” Barnes, 570 F.3d at 1103 (holding that Section 230 barred a negligent-undertaking claim because “the duty that Barnes claims Yahoo violated derives from Yahoo’s conduct as a publisher – the steps it allegedly took, but later supposedly abandoned, to de-publish the offensive profiles”)>>, p. 8.

This is the theoretically most interesting point: the challenged conduct is at once both contractual (non-)performance and an editorial decision. The two characterizations overlap.

By contrast, the claim based on reliance (promissory estoppel) is not precluded by the defense, so only here does M. prevail: <<This is because liability for promissory estoppel is not necessarily for behavior that is identical to publishing or speaking (e.g., publishing defamatory material in the form of SpyIRL’s tweets or failing to remove those tweets and suspend the account). “[P]romising . . . is not synonymous with the performance of the action promised. . . . one can, and often does, promise to do something without actually doing it at the same time.” Barnes, 570 F.3d at 1107. On this theory, “contract liability would come not from [Twitter]’s publishing conduct, but from [Twitter]’s manifest intention to be legally obligated to do something, which happens to be removal of material from publication.” Id. That manifested intention “generates a legal duty distinct from the conduct at hand, be it the conduct of a publisher, of a doctor, or of an overzealous uncle.” Id>>

(decisions and links from Eric Goldman's blog)

Classified-ads website and the exemption under § 230 CDA

Can a website hosting unlawful ads (here: exploitation of minors) rely on the liability exemption laid down by § 230 of the U.S. CDA (Communications Decency Act)?

Yes, according to the U.S. District Court for the Northern District of California, 20 August 2020, J.B. v. Craigslist and others, Case No. 19-cv-07848-HSG.

The point is addressed in part III.A, pp. 5-11.

The three requirements of the exemption are indeed met: <<‘(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.’>>, p. 5.
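Purely as a didactic device, the three-prong test the courts apply throughout these cases can be compressed into a checklist: the exemption applies only if all three prongs hold, and losing it still does not by itself establish liability. A minimal sketch, with predicate names of my own invention:

    def section_230_c1_applies(is_interactive_computer_service: bool,
                               claim_treats_defendant_as_publisher: bool,
                               info_from_another_content_provider: bool) -> bool:
        # All three prongs must be satisfied; failing any one removes the
        # exemption without, by itself, establishing liability.
        return (is_interactive_computer_service
                and claim_treats_defendant_as_publisher
                and info_from_another_content_provider)

    # In J.B. v. Craigslist the court found all three prongs met:
    assert section_230_c1_applies(True, True, True)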

The most interesting point (though not for us, since it is purely U.S. law) is the coordination of § 230 CDA with the 2018 amendment known as SESTA-FOSTA, or simply FOSTA: the Fight Online Sex Trafficking Act ("FOSTA") and the Stop Enabling Sex Traffickers Act ("SESTA"), ibid., pp. 6-7. The amendment adds paragraph (5) to subsection (e) of § 230.

The plaintiff argues that this amendment strips Craigslist of the safe harbour in relation to the alleged violations. The judge, however, takes the opposite view, pp. 8-10.

The plaintiff also fails with her last defense, namely the argument that Craigslist is a "content provider". The Court easily rejects it, since Craigslist merely hosted content produced entirely by third parties. It cites for the purpose Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008), an important decision because it is cited by more or less everyone dealing with platform liability.

A different question is the possible presence of warning signs of illegality that should have prompted caution (if not affirmative countermeasures) on Craigslist's part (red flags of various kinds, cease-and-desist letters, etc.). That scenario, however, is not regulated by § 230 CDA, unlike the copyright safe harbour (§ 512 DMCA) and unlike our national-European rules.

(news and link to the decision taken from Eric Goldman's blog)