Algorithmic discrimination by Facebook's marketplace and the safe harbour under § 230 CDA

Prof. Eric Goldman reports the Ninth Circuit's appellate decision of 20 June 2023, No. 21-16499, Vargas et al. v. Facebook, a case of alleged discrimination in the presentation of commercial offers on its marketplace.

The allegation: <<The operative complaint alleges that Facebook’s “targeting methods provide tools to exclude women of color, single parents, persons with disabilities and other protected attributes,” so that Plaintiffs were “prevented from having the same opportunity to view ads for housing” that Facebook users who are not in a protected class received>>.

Well, the safe harbour does not apply, because Facebook is not a stranger to the unlawful conduct but its co-author, having created the algorithm used in the discriminatory practice:

<<2. The district court also erred by holding that Facebook is immune from liability pursuant to 47 U.S.C. § 230(c)(1). “Immunity from liability exists for ‘(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a [federal or] state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.’” Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1097 (9th Cir. 2019) (quoting Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100 (9th Cir. 2009)). We agree with Plaintiffs that, taking the allegations in the complaint as true, Plaintiffs’ claims challenge Facebook’s conduct as a co-developer of content and not merely as a publisher of information provided by another information content provider.
Facebook created an Ad Platform that advertisers could use to target advertisements to categories of users. Facebook selected the categories, such as sex, number of children, and location. Facebook then determined which categories applied to each user. For example, Facebook knew that Plaintiff Vargas fell within the categories of single parent, disabled, female, and of Hispanic descent. For some attributes, such as age and gender, Facebook requires users to supply the information. For other attributes, Facebook applies its own algorithms to its vast store of data to determine which categories apply to a particular user.
The Ad Platform allowed advertisers to target specific audiences, both by including categories of persons and by excluding categories of persons, through the use of drop-down menus and toggle buttons. For example, an advertiser could choose to exclude women or persons with children, and an advertiser could draw a boundary around a geographic location and exclude persons falling within that location. Facebook permitted all paid advertisers, including housing advertisers, to use those tools. Housing advertisers allegedly used the tools to exclude protected categories of persons from seeing some advertisements.
As the website’s actions did in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc), Facebook’s own actions “contribute[d] materially to the alleged illegality of the conduct.” Id. at 1168. Facebook created the categories, used its own methodologies to assign users to the categories, and provided simple drop-down menus and toggle buttons to allow housing advertisers to exclude protected categories of persons. Facebook points to three primary aspects of this case that arguably differ from the facts in Roommates.com, but none affects our conclusion that Plaintiffs’ claims challenge Facebook’s own actions>>.

Here are Facebook's three objections and the court's reasons for rejecting them:

<<First, in Roommates.com, the website required users who created profiles to self-identify in several protected categories, such as sex and sexual orientation. Id. at 1161. The facts here are identical with respect to two protected categories because Facebook requires users to specify their gender and age. With respect to other categories, it is true that Facebook does not require users to select directly from a list of options, such as whether they have children. But Facebook uses its own algorithms to categorize the user. Whether by the user’s direct selection or by sophisticated inference, Facebook determines the user’s membership in a wide range of categories, and Facebook permits housing advertisers to exclude persons in those categories. We see little meaningful difference between this case and Roommates.com in this regard. Facebook was “much more than a passive transmitter of information provided by others; it [was] the developer, at least in part, of that information.” Id. at 1166. Indeed, Facebook is more of a developer than the website in Roommates.com in one respect because, even if a user did not intend to reveal a particular characteristic, Facebook’s algorithms nevertheless ascertained that information from the user’s online activities and allowed advertisers to target ads depending on the characteristic.
Second, Facebook emphasizes that its tools do not require an advertiser to discriminate with respect to a protected ground. An advertiser may opt to exclude only unprotected categories of persons or may opt not to exclude any categories of persons. This distinction is, at most, a weak one. The website in Roommates.com likewise did not require advertisers to discriminate, because users could select the option that corresponded to all persons of a particular category, such as “straight or gay.” See, e.g., id. at 1165 (“Subscribers who are seeking housing must make a selection from a drop-down menu, again provided by Roommate[s.com], to indicate whether they are willing to live with ‘Straight or gay’ males, only with ‘Straight’ males, only with ‘Gay’ males or with ‘No males.’”). The manner of discrimination offered by Facebook may be less direct in some respects, but as in Roommates.com, Facebook identified persons in protected categories and offered tools that directly and easily allowed advertisers to exclude all persons of a protected category (or several protected categories).
Finally, Facebook urges us to conclude that the tools at issue here are “neutral” because they are offered to all advertisers, not just housing advertisers, and the use of the tools in some contexts is legal. We agree that the broad availability of the tools distinguishes this case to some extent from the website in Roommates.com, which pertained solely to housing. But we are unpersuaded that the distinction leads to a different ultimate result here. According to the complaint, Facebook promotes the effectiveness of its advertising tools specifically to housing advertisers. “For example, Facebook promotes its Ad Platform with ‘success stories,’ including stories from a housing developer, a real estate agency, a mortgage lender, a real estate-focused marketing agency, and a search tool for rental housing.” A patently discriminatory tool offered specifically and knowingly to housing advertisers does not become “neutral” within the meaning of this doctrine simply because the tool is also offered to others>>.

No liability for Amazon over the sale of sodium nitrite later used for suicide

An interesting ruling (on a tragic set of facts) by the Western District of Washington at Seattle, 27 June 2023, Case No. C23-0263JLR, McCarthy v. Amazon:

<<the Sodium Nitrite was not defective, and that Amazon thus did not owe a duty to warn…the Sodium Nitrite’s warnings were sufficient because the label identified the product’s general dangers and uses, and the dangers of ingesting Sodium Nitrite were both known and obvious. The allegations in the amended complaint establish that Kristine and Ethan deliberately sought out Sodium Nitrite for its fatal properties, intentionally mixed large doses of it with water, and swallowed it to commit suicide….the risk associated with intentionally ingesting a large dose of an industrial grade chemical is also obvious…In this case, the danger was particularly obvious because the Sodium Nitrite “was not marketed as safe for human consumption or ingestion,” and appears to have been categorized as “Business, Industrial, and Scientific Supplies”…
given Kristine and Ethan’s knowledge regarding the dangers of ingesting Sodium Nitrite as well as the general warnings provided on the bottle and the obvious dangers associated with ingesting industrial-grade chemicals, the court concludes that the Sodium Nitrite’s warnings were not defective. Amazon therefore had no duty to provide additional warnings regarding the dangers of ingesting Sodium Nitrite…
even if Amazon owed a duty to provide additional warnings as to the dangers of ingesting sodium nitrite, its failure to do so was not the proximate cause of Kristine and Ethan’s deaths…Kristine and Ethan sought the Sodium Nitrite out for the purpose of committing suicide and intentionally subjected themselves to the Sodium Nitrite’s obvious and known dangers and those described in the warnings on the label. Plaintiffs do not plausibly allege that better warnings from Amazon would have discouraged Ethan and Kristine from ingesting sodium nitrite>>.

The fact that the reviews were taken down does not help the plaintiffs, against whom the § 230 CDA safe harbour is successfully invoked (pp. 19 ff.).

(quoted passage taken from Prof. Eric Goldman's post on his blog)

Section 230 CDA saves Amazon from the charge of co-liability for defamatory reviews against a seller on its marketplace

The defamatory review (mildly so, in truth: a Burberry scarf allegedly not authentic) cannot make Amazon co-liable, because the cited safe harbour applies.

This is indeed precisely the publisher/speaker role contemplated by the statute. Nor can an active contribution by Amazon be found in its having set the rules of its platform, as the defamed party would have it: the well-known Roommates case is invoked to no avail.

A rather easy case.

So held the Eleventh Circuit on appeal, 12 June 2023, No. 22-11725, McCall et al. v. Zotos and Amazon:

<<In that case, Roommates.com published a profile page for each subscriber seeking housing on its website. See id. at 1165. Each profile had a drop-down menu on which subscribers seeking housing had to specify whether there are currently straight males, gay males, straight females, or lesbians living at the dwelling. This information was then displayed on the website, and Roommates.com used this information to channel subscribers away from the listings that were not compatible with the subscriber’s preferences. See id. The Ninth Circuit determined that Roommates.com was an information content provider (along with the subscribers seeking housing on the website) because it helped develop the information at least in part. Id. (“By requiring subscribers to provide the information as a condition of accessing its service, and by providing a limited set of prepopulated answers, Roommate[s.com] . . . becomes the developer, at least in part, of that information.”).
Roommates.com is not applicable, as the complaint here alleges that Ms. Zotos wrote the review in its entirety. See generally D.E. 1. Amazon did not create or develop the defamatory review even in part—unlike Roommates.com, which curated the allegedly discriminatory dropdown options and required the subscribers to choose one. There are no allegations that suggest Amazon helped develop the allegedly defamatory review.
The plaintiffs seek to hold Amazon liable for failing to take down Ms. Zotos’ review, which is exactly the kind of claim that is immunized by the CDA—one that treats Amazon as the publisher of that information. See 47 U.S.C. § 230(c)(1). See also D.E. 1 at 5 (“Amazon . . . refused to remove the libelous statements posted by Defendant Zotos”). “Lawsuits seeking to hold a service provider [like Amazon] liable for its exercise of a publisher’s traditional editorial functions—such as deciding whether to publish, withdraw, postpone, or alter content—are barred.” Zeran, 129 F.3d at 330. We therefore affirm the dismissal of the claims against Amazon>>.

(news and link from Prof. Eric Goldman's site)

The difference between inapplicability of the safe harbour and a finding of liability

The District Court for the Western District of Wisconsin, 31 March 2023, Case No. 21-cv-320-wmc, Hopson and Bluetype v. Google and Does 1 and 2, has the difference between the two concepts quite clear: that the safe harbour cannot be invoked does not mean that liability is affirmatively established (even though in practice it will be likely).

Some of our commentators (in scholarship and case law) do not have it equally clear.

The case concerned the copyright safe harbour in the context of the notice-and-takedown procedure, and in particular an alleged violation of the procedure that should have led Google to "put back up" the materials previously "taken down" (§ 512(g) of the DMCA).

<<Here, plaintiffs allege that defendant Google failed to comply with § 512(g)’s strictures by: (1) redacting contact information from the original takedown notices; (2) failing to restore the disputed content within 10 to 14 business days of receiving plaintiffs’ counter notices; and (3) failing to forward plaintiffs’ counter notices to the senders of the takedown notices. As Google points out, however, its alleged failure to comply with § 512(g) does not create direct liability for any violation of plaintiffs’ rights. It merely denies Google a safe harbor defense should plaintiffs bring some other claim against the ISP for removing allegedly infringing material, such as a state contract or tort law claim. Martin, 2017 WL 11665339, at *3-4 (§ 512(g) does not create any affirmative cause of action; it creates a defense to liability); see also Alexander v. Sandoval, 532 U.S. 275, 286-87 (2001) (holding plaintiffs may sue under a federal statute only where there is an express or implied private right of action). So, even if Google did not follow the procedure entitling it to a safe harbor defense in this case, the effect is disqualifying it from that defense, not creating liability under § 512(g) of the DMCA for violating plaintiffs’ rights.>>

Still nothing on this procedure in the EU: Articles 16-17 of the DSA (Regulation (EU) 2022/2065) do not address it (they appear to leave it to contractual autonomy), and neither does the copyright-specific provision, Article 17 of Directive (EU) 2019/790.

(news and link from Prof. Eric Goldman's site)

The student who lets teachers be defamed by giving his social media credentials to friends, the authors of the posts, is not protected by the § 230 CDA safe harbor

The Sixth Circuit on appeal, No. 22-1748, JASON KUTCHINSKI v. FREELAND COMMUNITY SCHOOL DISTRICT; MATTHEW A. CAIRY and TRACI L. SMITH, decides a suit brought by the student challenging the disciplinary sanction imposed on him for having given his Instagram credentials to friends, the authors of posts defaming teachers at the school.

The student cannot in fact be characterized as a publisher or speaker, being instead a co-author of the harmful conduct:

<<Like the First, Fourth, and Ninth Circuits, we hold that when a student causes, contributes to, or affirmatively participates in harmful speech, the student bears responsibility for the harmful speech. And because H.K. contributed to the harmful speech by creating the Instagram account, granting K.L. and L.F. access to the account, joking with K.L. and L.F. about their posts, and accepting followers, he bears responsibility for the speech related to the Instagram account.
Kutchinski disagrees and makes two arguments. First, Kutchinski argues that Section 230 of the Communications Decency Act, 47 U.S.C. § 230, bars Defendants from disciplining H.K. for the posts made by K.L. and L.F.     This is incorrect. Under § 230(c)(1), “[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” To the extent § 230 applies, we do not treat H.K. as the “publisher or speaker” of the posts made by K.L. and L.F. Instead, we have found that H.K. contributed to the harmful speech through his own actions>>.

The court then adds:

<<Second, Kutchinski argues that disciplining H.K. for the posts emanating from the Instagram account violates H.K.’s First Amendment freedom-of-association rights. “The First Amendment . . . restricts the ability of the State to impose liability on an individual solely because of his association with another.” NAACP v. Claiborne Hardware Co., 458 U.S. 886, 918–19 (1982). “The right to associate does not lose all constitutional protection merely because some members of the group may have participated in conduct or advocated doctrine that itself is not protected.” Id. at 908. But Defendants did not discipline H.K. because he associated with K.L. and L.F. They determined that H.K. jointly participated in the wrongful behavior. Thus, Defendants did not impinge on H.K.’s freedom-of-association rights>>.

(news and link to the decision from Prof. Eric Goldman's blog)

Google is protected by the § 230 CDA safe harbour for a scam by a fake advertiser (a fake eBay)

The US District Court for the Southern District of New York, Case 1:22-cv-06831-JGK, Ynfante v. Google, on an easy § 230 CDA safe harbour case:

<<In this case, it is plain that Section 230 protects Google from liability in the negligence and false advertising action brought by Mr. Ynfante. First, Google is the provider of an interactive computer service. The Court of Appeals for the Second Circuit has explained that “search engines fall within this definition,” LeadClick Media, 838 F.3d at 174, and Google is one such search engine. See, e.g., Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (holding that the definition of “interactive computer service” applies to Google specifically).
Second, there is no doubt that the complaint treats Google as the publisher or speaker of information. See, e.g., Compl. ¶¶ 27, 34. Section 230 “specifically proscribes liability” for “decisions relating to the monitoring, screening, and deletion of content from [a platform] — actions quintessentially related to a publisher’s role.” Green v. Am. Online (AOL), 318 F.3d 465, 471 (3d Cir. 2003). In other words, Section 230 bars any claim that “can be boiled down to the failure of an interactive computer service to edit or block user-generated content that it believes was tendered for posting online, as that is the very activity Congress sought to immunize by passing the section.” Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1172 n.32 (9th Cir. 2008). In this case, the plaintiff’s causes of action against Google rest solely on the theory that Google did not block a third-party advertisement for publication on its search pages. But for Google’s publication of the advertisement, the plaintiff would not have been harmed. See, e.g., Compl. ¶¶ 38-39, 61. The plaintiff therefore seeks to hold Google liable for its actions related to the screening, monitoring, and posting of content, which fall squarely within the exercise of a publisher’s role and are therefore subject to Section 230’s broad immunity.
Third, the scam advertisement came from an information content provider distinct from the defendant. As the complaint acknowledges, the advertisement was produced by a third party who then submitted the advertisement to Google for publication. See id. ¶ 26. It is therefore plain that the complaint is seeking to hold the defendant liable for information provided by a party other than the defendant and published on Google’s platform, which Section 230 forecloses>>

Nothing new.

(news and link to the decision from Prof. Eric Goldman's blog)

Embedding does not amount to communication to the public, but it does not permit the safe harbour defence under § 512 DMCA

Judge Barlow of the Utah District Court, 2 May 2023, Case 2:21-cv-00567-DBB-JCB, decides an interesting dispute over embedding.

The plaintiff is the manager of the rights in several photographs taken by Annie Leibovitz. The defendants are the operators of a site that had "reproduced" them using the embedding technique (that is, without a stable reproduction on their own server).

The judge applies the so-called server test from the well-known 2006 Perfect 10 Inc. v. Google case, summarized as follows: <<[In] Perfect 10, the Ninth Circuit addressed whether Google’s unauthorized display of thumbnail and full-sized images violated the copyright holder’s rights. The court first defined an image as a work “that is fixed in a tangible medium of expression . . . when embodied (i.e., stored) in a computer’s server (or hard disk, or other storage device).” The court defined “display” as an individual’s action “to show a copy . . ., either directly or by means of a film, slide, television image, or any other device or process ….”>>.

It therefore rejects the claim with respect to the embedding at issue:

<<The court finds Trunk Archive’s policy arguments insufficient to put aside the “server” test. Contrary to Trunk Archive’s claims, “practically every court outside the Ninth Circuit” has not “expressed doubt that the use of embedding is a defense to infringement.” Perfect 10 supplies a broad test. The court did not limit its holding to search engines or the specific way that Google utilized inline links. Indeed, Trunk Archive does not elucidate an appreciable difference between embedding technology and inline linking. “While appearances can slightly vary, the technology is still an HTML code directing content outside of a webpage to appear seamlessly on the webpage itself.” The court in Perfect 10 did not find infringement even though Google had integrated full-size images on its search results. Here, CBM Defendants also integrated (embedded) the images onto their website. (…) Besides, embedding redirects a user to the source of the content-in this case, an image hosted by a third-party server. The copyright holder could still seek relief from that server. In no way has the holder “surrender[ed] control over how, when, and by whom their work is subsequently shown.” To guard against infringement, the holder could take down the image or employ restrictions such as paywalls. Similarly, the holder could utilize “metadata tagging or visible digital watermarks to provide better protection.” (…) In sum, Trunk Archive has not persuaded the court to ignore the “server” test. Without more, the court cannot find that CBM Defendants are barred from asserting the “embedding” defense. The court denies in part Trunk Archive’s motion for partial judgment on the pleadings.>>

Moreover, the safe harbour at issue is denied, because this is not the statutory scenario of mere storage, on one's own server, of third-party materials. The embedding had in fact been created by the defendants themselves, taking the materials from third-party servers: in short, activity rather than passivity.
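
For readers less familiar with the technique, here is a minimal sketch, in TypeScript and for illustration only, of the distinction that the server test and the safe-harbour analysis turn on; the URLs and file paths are hypothetical and are not taken from the case record. Embedding merely emits markup that sends the visitor's browser to the third party's server, whereas hosting fixes a copy on the site's own server.

```typescript
// Minimal illustration of embedding vs. hosting (hypothetical URLs and paths).
import { writeFile } from "node:fs/promises";

// Embedding / inline linking: the page only points the visitor's browser at
// the third party's server; no copy of the image is stored by the embedding site.
function renderEmbeddedImage(thirdPartyUrl: string): string {
  return `<img src="${thirdPartyUrl}" alt="embedded photo">`;
}

// Hosting: the site downloads the file and serves it from its own storage,
// i.e. a copy is now fixed on the site's own server.
async function hostLocalCopy(thirdPartyUrl: string, localPath: string): Promise<string> {
  const response = await fetch(thirdPartyUrl);
  await writeFile(localPath, Buffer.from(await response.arrayBuffer()));
  return `<img src="/${localPath}" alt="locally hosted photo">`;
}
```

Only the second pattern stores the work on the defendant's own server; and, as the court reasons, it was the defendants' active creation of the embedding markup, rather than mere storage of user-supplied material, that took them outside the safe harbour.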

(news and link to the decision from Prof. Eric Goldman's blog)

Is a search engine co-liable for unwanted, and erroneous, associations in a case of identical names?

The answer is no under US law, since Microsoft is covered by the § 230 CDA safe harbour:

So held, affirming the first-instance ruling, Florida's First District Court of Appeal, Nos. 1D21-3629 and 1D22-1321 (consolidated for disposition), 10 May 2023, White v. DISCOVERY COMMUNICATIONS and others.

The facts:

Mr. White sued various nonresident defendants for damages in tort resulting from an episode of a reality/crime television show entitled “Evil Lives Here.” Mr. White alleged that beginning with the first broadcast of the episode “I Invited Him In” in August 2018, he was injured by the broadcasting of the episode about a serial killer in New York also named Nathaniel White. According to the allegations in the amended complaint, the defamatory episode used Mr. White’s photograph from a decades-old incarceration by the Florida Department of Corrections. Mr. White alleged that this misuse of his photo during the program gave viewers the impression that he and the New York serial killer with the same name were the same person thereby damaging Mr. White.

The law:

The persons who posted the information on the eight URLs provided by Mr. White were the “information content providers” and Microsoft was the “interactive service provider” as defined by 47 U.S.C. § 230(f)(2) and (3). See Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (noting that a search engine falls within the definition of interactive computer service); see also In re Facebook, Inc., 625 S.W. 3d 80, 90 (Tex. 2021) (internal citations omitted) (“The ‘national consensus’ . . . is that ‘all claims’ against internet companies ‘stemming from their publication of information created by third parties’ effectively treat the defendants as publishers and are barred.”). “By presenting Internet search results to users in a relevant manner, Google, Yahoo, and Microsoft facilitate the operations of every website on the internet. The CDA was enacted precisely to prevent these types of interactions from creating civil liability for the Providers.” Baldino’s Lock & Key Serv., Inc. v. Google LLC, 285 F. Supp. 3d 276, 283 (D.D.C. 2018), aff’d sub nom. Marshall’s Locksmith Serv., 925 F.3d at 1265.
In Dowbenko v. Google Inc., 582 Fed. App’x 801, 805 (11th Cir. 2014), the state law defamation claim was “properly dismissed” as “preempted under § 230(c)(1)” since Google, like Microsoft here, merely hosted the content created by other providers through search services. Here, as to Microsoft’s search engine service, the trial court was correct to grant summary judgment finding Microsoft immune from Mr. White’s defamation claim by operation of Section 230 since Microsoft did not publish any defamatory statement.
Mr. White argues that even if Microsoft is immune for any defamation occurring by way of its internet search engine, Microsoft is still liable as a service that streamed the subject episode. Mr. White points to the two letters from Microsoft in support of his argument. For two reasons, we do not reach whether an internet streaming service is an “interactive service provider” immunized from suit for defamation by Section 230.
First, the trial court could not consider the letters in opposition to the motion for summary judgment. The letters were not referenced in Mr. White’s written response to Microsoft’s motion. They were only in the record in response to a different defendant’s motion for a protective order. So the trial court could disregard the letters in ruling on Microsoft’s motion. See Fla. R. Civ. P. 1.510(c)(5); Lloyd S. Meisels, P.A. v. Dobrofsky, 341 So. 3d 1131, 1136 (Fla. 4th DCA 2022). Without the two letters, Mr. White has no argument that Microsoft was a publisher of the episode.
Second, even considering the two letters referenced by Mr. White, they do not show that Microsoft acted as anything but an interactive computer service. That the subject episode was possibly accessible for streaming via a Microsoft search platform does not mean that Microsoft participated in streaming or publishing the episode

(news and link to the decision from Prof. Eric Goldman's blog)

The damages action under § 512(f) of the Copyright Act covers only abuse of copyright, not of trademark

The United States District Court for the Central District of California, CV 22-4355-JFW(JEMx), 21 April 2023, Yuga Labs, Inc. v. Ripps, et al., decides a trademark dispute in an action brought by the owner of the "Bored Ape" NFT against a visual artist (Ripps) who criticizes it: <<Ripps is a visual artist and creative designer who purports to create artwork that comments in the boundaries between art, the internet, and commerce. According to Defendants, Yuga has deliberately embedded racist, neo-Nazi, and alt-right dog whistles in the BAYC NFTs and associated projects. Beginning in approximately November 2021, Ripps began criticizing Yuga’s use of these purported racist, neo-Nazi, and alt-right dog whistles through his Twitter and Instagram profiles, podcasts, cooperation with investigative journalists, and by creating the website gordongoner.com>>.

Yuga then sends notice-and-takedown (NATD) requests, mostly based on trademark but also on copyright.

Ripps reacts by invoking the provision referred to in the title.
But of the 25 NATD requests, the court examines only those (four) that led to a takedown, and of these only the copyright one (one), not those based on trademark (three). The wording of the provision is, after all, unambiguous.

And it rejects the defence (or counterclaim?): <<With respect to the only DMCA notice that resulted in the takedown of Defendants’ content, Defendants have failed to demonstrate that the notice contains a material misrepresentation that resulted in the takedown of Defendants’ content or that Yuga acted in bad faith in submitting the takedown notice. Although Defendants argue that Yuga does not have a copyright registration for the Ape Skull logo that was the subject of the DMCA takedown notice, a registration is not required to own a copyright. Instead, a copyright exists at the moment copyrightable material is fixed in any tangible medium of expression. Fourth Estate Public Benefit Corp. v. Wall-Street.com LLC, 139 S.Ct. 881, 887 (2019); see also Feist Publ’ns, Inc. v. Rural Tel. Serv. Co., 499 U.S. 340, 345 (1991) (holding that for a work to be copyrightable, it only needs to possess “some minimal degree of creativity”). Moreover, courts in the Ninth Circuit have held that a logo can receive both trademark and copyright protection. See, e.g., Vigil v. Walt Disney Co., 1995 WL 621832 (N.D. Cal. Oct. 16, 1995).>>

The decision is also interesting, indeed above all, for its trademark and unfair-competition analysis of the use of the NFT.

(news and link to the decision from Prof. Eric Goldman's blog)

A defamatory review of a lawyer posted on Google Maps and Google's liability: does the § 230 CDA safe harbor apply?

The answer is yes, of course.

The case concerns a lawyer practising near Portland, defamed by a harsh review posted on Google Maps.

An easy case, then, for the Oregon District Court: Daniloff v. Google et al., 30 January 2023, Case No. 3:22-cv-01271-IM.

Prof. Eric Goldman also provides the link to the defamatory review.

The plaintiff lawyer had sought $300,000 in damages from Google and the reviewer.

<<In evaluating Defendant Google’s immunity under the CDA, this Court applies the three-factor Ninth Circuit test. See Kimzey, 836 F.3d at 1268. First, to determine whether Defendant Google qualifies as an interactive computer service provider, this Court notes that Google is an operator who passively provides website access to multiple users. Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1162 (9th Cir. 2008) (en banc) (“A website operator . . . [who] passively displays content that is created entirely by third parties . . . is only a service provider with respect to that content.”). Accordingly, as Defendant Google argues and Plaintiff concedes, Google qualifies as an interactive computer service provider. ECF 8 at 5; ECF 9 at 3; see also 47 U.S.C. § 230(f)(3); Lewis v. Google LLC, 461 F. Supp. 3d 938, 954 (N.D. Cal. 2020) (collecting cases), aff’d, 851 F. App’x 723 (9th Cir. 2021); Gaston v. Facebook, Inc., No. 3:12-CV-0063-ST, 2012 WL 629868, at *7 (D. Or. Feb. 2, 2012), report and recommendation adopted, No. 3:12-CV-00063-ST, 2012 WL 610005 (D. Or. Feb. 24, 2012).
Second, because Plaintiff premises his defamation claim on Defendant Google’s publication of Defendant Keown’s review, ECF 1-1, Ex. A, at ¶ 22, this Court finds that Plaintiff seeks to treat Google as a publisher or speaker. See Kimzey, 836 F.3d at 1268 (holding that defamation claim based on Yelp review was “directed against Yelp in its capacity as a publisher or speaker” (citing Barnes, 570 F.3d at 1102)).
Third, as the allegedly defamatory review was posted by Defendant Keown, ECF 1-1, Ex. A, at ¶ 5–7, this Court finds the relevant information was provided by another information content provider. Rather than allege that Defendant Google created the review, Plaintiff alleges that Defendant Google “hosted” it via Plaintiff’s Google Business profile, id. at ¶ 30, thereby “material[ly] contribut[ing]” to the defamatory review. ECF 9 at 3. An entity who “contributes materially to the alleged illegality of the conduct” at issue is not entitled to protection under Section 230. Roommates.com, 521 F.3d at 1168.
The Ninth Circuit addressed a similar argument in Kimzey, a case arising out of a negative review on Yelp’s website. Kimzey, 836 F.3d at 1265. While the plaintiff in that case claimed that Yelp had “authored” the review at issue through its star-rating system, id. at 1268, the Ninth Circuit found that “Yelp’s rating system . . . is based on rating inputs from third parties and . . . [is] user-generated data,” id. at 1270. As such, the Ninth Circuit held that Yelp’s actions did not qualify as “creation” or “development” of information and that “the rating system [did] ‘absolutely nothing to enhance the defamatory sting of the message’ beyond the words offered by the user.” Id. at 1270–71 (quoting Roommates.com, 521 F.3d at 1172).
Defendant Keown’s review similarly qualifies as user-generated data and Defendant Google’s hosting of that review through its Google Business profile system does not qualify as a material contribution. This Court finds that Plaintiff bases his defamation claim on a review provided by an information content provider other than Defendant Google—thus fulfilling the third factor required under Kimzey. See also id. at 1265 (observing that a claim “asserting that [an interactive computer service provider is] liable in its well-known capacity as the passive host of a forum for user reviews [is] a claim without any hope under [Ninth Circuit] precedent[]”). Accordingly, Plaintiff’s defamation claim against Defendant Google satisfies the Ninth Circuit’s three-factor test and Defendant Google is immune under Section 230 of the CDA.
To the extent that Plaintiff relies on Defendant Google’s refusal to remove Defendant Keown’s review in pursuing his defamation claim, ECF 1-1 at ¶ 11–17; ECF 9 at 4, this Court also holds that Defendant Google is immunized under the CDA for this decision. Roommates.com, 521 F.3d at 1170–71 (“[A]ny activity that can be boiled down to deciding whether to exclude material that third parties seek to post online is perforce immune under section 230.”); see also Barnes, 570 F.3d at 1105. Accordingly, Defendant Google’s Motion to Dismiss, ECF 8, is GRANTED.>>