Is Apple liable for damages caused by a “malicious app” on its App Store, or is it protected by the safe harbor under § 230 CDA?

Eric Goldman reports on, and links to, the Ninth Circuit's appellate decision of March 27, 2024, No. 22-16514, Hadona Diep v. Apple.

The court dismissed the claims in “counts I (violation of the Computer Fraud and Abuse Act), II (violation of the Electronic Communications Privacy Act), III (violation of California’s Consumer Privacy Act), VI (violation of Maryland’s Wiretapping and Electronic Surveillance Act), VII (additional violation of Maryland’s Wiretapping and Electronic Surveillance Act), VIII (violation of Maryland’s Personal Information Protection Act), and X (negligence) of the complaint”.

By contrast, § 230 CDA does not shield against claims based on state consumer-protection statutes, nor against other claims such as unfair competition:

<<The claims asserted in counts IV (violation of California’s Unfair Competition Law (“UCL”)), V (violation of California’s Legal Remedies Act (“CLRA”)), and IX (liability under Maryland’s Consumer Protection Act (“MCPA”)) are not barred by the CDA. These state law consumer protection claims do not arise from Apple’s publication decisions as to whether to authorize Toast Plus. Rather, these claims seek to hold Apple liable for its own representations concerning the App Store and Apple’s process for reviewing the applications available there. Because Apple is the primary “information content provider” with respect to those statements, section 230(c)(1) does not apply. See Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124–25 (9th Cir. 2003) (examining which party “provide[d] the essential published content”)>>.

Nor are these claims barred by the parties' limitation-of-liability clauses (an issue, however, not of interest here).

The § 230 CDA safe harbor for YouTube in a case of fraudulent, unauthorized access to other people's accounts

An interesting dispute over the applicability of the U.S. § 230 CDA to YouTube for alleged computer intrusions aimed at perpetrating fraud through other people's accounts (including Steve Wozniak's).

The decision of the Sixth Appellate District of March 15, 2024, H050042 (Santa Clara County Super. Ct. No. 20CV370338), Wozniak et al. v. YouTube, has now arrived; it substantially affirms the dismissal at first instance (leaving the plaintiffs only a small window).

The case is interesting for practitioners. The plaintiffs had diligently tried to get around the safe harbor, arguing in various ways that the charges against YouTube concerned its own conduct, rather than mere editorial handling of third-party information (which is what the safe harbor covers). The court (or rather, the plaintiffs) had grouped these theories into six categories:

a. Negligent security claim

b. Negligent design claim

c. Negligent failure to warn claim

d. Claims based on knowingly selling and delivering scam ads and scam video recommendations to vulnerable users

e. Claims based on wrongful disclosure and misuse of plaintiffs’ personal information

f. Claims based on defendants’ creation or development of information materially contributing to scam ads and videos

But for the court these theories cannot be characterized as YouTube's own conduct for purposes of § 230 CDA; the conduct remains that of the third-party fraudsters (sub ii, Material contributions, pp. 34 ff.).

The court does, however, grant partial leave to amend (p. 36).

The theories advanced by the plaintiffs are useful in our system as well, because the problem is substantially the same: understanding whether harmful published information can be attributed only to the third party or also to the platform (Art. 6 of EU Reg. 2022/2065, the DSA).

(News of, and link to, the decision from Eric Goldman's blog.)

Gmail's (filtered?) service can benefit from the § 230 CDA safe harbor

Eric Goldman reports on (and links to the text of) U.S. District Court for the Eastern District of California, August 24, 2023, No. 2:22-cv-01904-DJC-JBP, Republican National Committee v. Google.

The right-wing political group accuses Google of unlawfully filtering its emails.

Google successfully defends by invoking the safe harbor under § 230(c)(2)(A) (“No provider or user of an interactive computer service shall be held liable on account of—(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”).

The critical point is the assessment of the “objectionable” requirement and of good faith.

As Eric Goldman observes, the court's policy consideration is also interesting:

<<Section 230 also addresses Congress’s concern with the growth of unsolicited commercial electronic mail, and the fact that the volume of such mail can make email in general less usable as articulated in the CAN-SPAM Act. See 15 U.S.C. § 7701(a)(4), (6).   Permitting suits to go forward against a service provider  based on the over-filtering of mass marketing emails would discourage providers from offering spam filters or significantly decrease the number of emails segregated. It would also place courts in the business of micromanaging content providers’ filtering systems in contravention of Congress’s directive that it be the provider or user that determines what is objectionable (subject to a provider acting in bad faith). See 47 U.S.C. § 230(c)(2)(A) (providing no civil liability for “any action voluntarily taken in good faith to restrict access to . . . material that the provide or user considers to be . . . objectionable” (emphasis added)). This concern is exemplified by the fact that the study on which the RNC relies to show bad faith states that each of the three email systems had some sort of right- or left- leaning bias. (ECF No. 30-10 at 9 (“all [spam filtering algorithms] exhibited political biases in the months leading up to the 2020 US elections”).) While Google’s bias was greater than that of Yahoo or Outlook, the RNC offers no limiting principle as to how much “bias” is permissible, if any. Moreover, the study authors note that reducing the filters’ political biases “is not an easy problem to solve. Attempts to reduce the biases of [spam filtering algorithms] may inadvertently affect their efficacy.” (Id.) This is precisely the impact Congress desired to avoid in enacting the Communications Decency Act, and reinforces the conclusion that section 230 bars this suit>>.

Overcoming Facebook's § 230 CDA safe harbor by alleging that its algorithm helped radicalize the killer

Prof. Eric Goldman recalls a decision of the federal district court in Charleston, South Carolina, of July 24, 2023, which, on safe harbor grounds, dismissed a damages claim against Meta brought by relatives of a victim of the massacre carried out by Dylann Roof in 2015 at the Charleston church.

Unfortunately there is no link to the text of the decision, but there is one to the initial complaint, in which the argument for overcoming Facebook's passive role is well developed.

It may be useful in our system too, although overcoming the specific requirement of foreseeability on the platform's part is not easy there (though framing it as advertent negligence, “colpa con previsione”, perhaps is).

Algorithmic discrimination by Facebook's marketplace and the § 230 CDA safe harbor

Prof. Eric Goldman flags the Ninth Circuit's appellate decision of June 20, 2023, No. 21-16499, Vargas et al. v. Facebook, in a case of alleged discrimination in presenting commercial offers on its marketplace.

The claim: <<The operative complaint alleges that Facebook’s “targeting methods provide tools to exclude women of color, single parents, persons with disabilities and other protected attributes,” so that Plaintiffs were “prevented from having the same opportunity to view ads for housing” that Facebook users who are not in a protected class received>>.

Here, the safe harbor does not apply because Facebook is not a bystander but a co-author of the unlawful conduct, as creator of the algorithm used in the discriminatory practice:

<<2. The district court also erred by holding that Facebook is immune from liability pursuant to 47 U.S.C. § 230(c)(1). “Immunity from liability exists for ‘(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a [federal or] state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.’” Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1097 (9th Cir. 2019) (quoting Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100 (9th Cir. 2009)). We agree with Plaintiffs that, taking the allegations in the complaint as true, Plaintiffs’ claims challenge Facebook’s conduct as a co-developer of content and not merely as a publisher of information provided by another information content provider.
Facebook created an Ad Platform that advertisers could use to target advertisements to categories of users. Facebook selected the categories, such as sex, number of children, and location. Facebook then determined which categories applied to each user. For example, Facebook knew that Plaintiff Vargas fell within the categories of single parent, disabled, female, and of Hispanic descent. For some attributes, such as age and gender, Facebook requires users to supply the information. For other attributes, Facebook applies its own algorithms to its vast store of data to determine which categories apply to a particular user.
The Ad Platform allowed advertisers to target specific audiences, both by including categories of persons and by excluding categories of persons, through the use of drop-down menus and toggle buttons. For example, an advertiser could choose to exclude women or persons with children, and an advertiser could draw a boundary around a geographic location and exclude persons falling within that location. Facebook permitted all paid advertisers, including housing advertisers, to use those tools. Housing advertisers allegedly used the tools to exclude protected categories of persons from seeing some advertisements.
As the website’s actions did in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc), Facebook’s own actions “contribute[d] materially to the alleged illegality of the conduct.” Id. at 1168. Facebook created the categories, used its own methodologies to assign users to the categories, and provided simple drop-down menus and toggle buttons to allow housing advertisers to exclude protected categories of persons. Facebook points to three primary aspects of this case that arguably differ from the facts in Roommates.com, but none affects our conclusion that Plaintiffs’ claims challenge Facebook’s own actions>>.

And here are Facebook's three objections and the court's reasons for rejecting them:

<<First, in Roommates.com, the website required users who created profiles to self-identify in several protected categories, such as sex and sexual orientation. Id. at 1161. The facts here are identical with respect to two protected categories because Facebook requires users to specify their gender and age. With respect to other categories, it is true that Facebook does not require users to select directly from a list of options, such as whether they have children. But Facebook uses its own algorithms to categorize the user. Whether by the user’s direct selection or by sophisticated inference, Facebook determines the user’s membership in a wide range of categories, and Facebook permits housing advertisers to exclude persons in those categories. We see little meaningful difference between this case and Roommates.com in this regard. Facebook was “much more than a passive transmitter of information provided by others; it [was] the developer, at least in part, of that information.” Id. at 1166. Indeed, Facebook is more of a developer than the website in Roommates.com in one respect because, even if a user did not intend to reveal a particular characteristic, Facebook’s algorithms nevertheless ascertained that information from the user’s online activities and allowed advertisers to target ads depending on the characteristic.
Second, Facebook emphasizes that its tools do not require an advertiser to discriminate with respect to a protected ground. An advertiser may opt to exclude only unprotected categories of persons or may opt not to exclude any categories of persons. This distinction is, at most, a weak one. The website in Roommates.com likewise did not require advertisers to discriminate, because users could select the option that corresponded to all persons of a particular category, such as “straight or gay.” See, e.g., id. at 1165 (“Subscribers who are seeking housing must make a selection from a drop-down menu, again provided by Roommate[s.com], to indicate whether they are willing to live with ‘Straight or gay’ males, only with ‘Straight’ males, only with ‘Gay’ males or with ‘No males.’”). The manner of discrimination offered by Facebook may be less direct in some respects, but as in Roommates.com, Facebook identified persons in protected categories and offered tools that directly and easily allowed advertisers to exclude all persons of a protected category (or several protected categories).
Finally, Facebook urges us to conclude that the tools at issue here are “neutral” because they are offered to all advertisers, not just housing advertisers, and the use of the tools in some contexts is legal. We agree that the broad availability of the tools distinguishes this case to some extent from the website in Roommates.com, which pertained solely to housing. But we are unpersuaded that the distinction leads to a different ultimate result here. According to the complaint, Facebook promotes the effectiveness of its advertising tools specifically to housing advertisers. “For example, Facebook promotes its Ad Platform with ‘success stories,’ including stories from a housing developer, a real estate agency, a mortgage lender, a real estate-focused marketing agency, and a search tool for rental housing.” A patently discriminatory tool offered specifically and knowingly to housing advertisers does not become “neutral” within the meaning of this doctrine simply because the tool is also offered to others>>.

Google is protected by the § 230 CDA safe harbor for a scam by a fake advertiser (a fake eBay)

The U.S. District Court for the Southern District of New York, Case 1:22-cv-06831-JGK, Ynfante v. Google, addresses a straightforward § 230 CDA safe harbor case:

<<In this case, it is plain that Section 230 protects Google from liability in the negligence and false advertising action brought by Mr. Ynfante. First, Google is the provider of an interactive computer service. The Court of Appeals for the Second Circuit has explained that “search engines fall within this definition,” LeadClick Media, 838 F.3d at 174, and Google is one such search engine. See, e.g., Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (holding that the definition of “interactive computer service” applies to Google specifically).
Second, there is no doubt that the complaint treats Google as the publisher or speaker of information. See, e.g., Compl. ¶¶ 27, 34. Section 230 “specifically proscribes liability” for “decisions relating to the monitoring, screening, and deletion of content from [a platform] — actions quintessentially related to a publisher’s role.” Green v. Am. Online (AOL), 318 F.3d 465, 471 (3d Cir. 2003). In other words, Section 230 bars any claim that “can be boiled down to the failure of an interactive computer service to edit or block user-generated content that it believes was tendered for posting online, as that is the very activity Congress sought to immunize by passing the section.” Fair Hous. Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157, 1172 n.32 (9th Cir. 2008). In this case, the plaintiff’s causes of action against Google rest solely on the theory that Google did not block a third-party advertisement for publication on its search pages. But for Google’s publication of the advertisement, the plaintiff would not have been harmed. See, e.g., Compl. ¶¶ 38-39, 61. The plaintiff therefore seeks to hold Google liable for its actions related to the screening, monitoring, and posting of content, which fall squarely within the exercise of a publisher’s role and are therefore subject to Section 230’s broad immunity.
Third, the scam advertisement came from an information content provider distinct from the defendant. As the complaint acknowledges, the advertisement was produced by a third party who then submitted the advertisement to Google for publication. See id. ¶ 26. It is therefore plain that the complaint is seeking to hold the defendant liable for information provided by a party other than the defendant and published on Google’s platform, which Section 230 forecloses>>

Nothing new here.

(News and link to the decision from Prof. Eric Goldman's blog.)

Is a search engine jointly liable for unwanted and erroneous associations in a case of identical names?

The answer is no under U.S. law, since Microsoft is covered by the § 230 CDA safe harbor:

So held, affirming the trial court, Florida's First District Court of Appeal, Nos. 1D21-3629 and 1D22-1321 (consolidated for disposition), May 10, 2023, White v. Discovery Communications et al.

The facts:

Mr. White sued various nonresident defendants for damages in tort resulting from an episode of a reality/crime television show entitled “Evil Lives Here.” Mr. White alleged that beginning with the first broadcast of the episode “I Invited Him In” in August 2018, he was injured by the broadcasting of the episode about a serial killer in New York also named Nathaniel White. According to the allegations in the amended complaint, the defamatory episode used Mr. White’s photograph from a decades-old incarceration by the Florida Department of Corrections. Mr. White alleged that this misuse of his photo during the program gave viewers the impression that he and the New York serial killer with the same name were the same person thereby damaging Mr. White.

The law:

The persons who posted the information on the eight URLs provided by Mr. White were the “information content providers” and Microsoft was the “interactive service provider” as defined by 47 U.S.C. § 230(f)(2) and (3). See Marshall’s Locksmith Serv. Inc. v. Google, LLC, 925 F.3d 1263, 1268 (D.C. Cir. 2019) (noting that a search engine falls within the definition of interactive computer service); see also In re Facebook, Inc., 625 S.W. 3d 80, 90 (Tex. 2021) (internal citations omitted) (“The ‘national consensus’ . . . is that ‘all claims’ against internet companies ‘stemming from their publication of information created by third parties’ effectively treat the defendants as publishers and are barred.”). “By presenting Internet search results to users in a relevant manner, Google, Yahoo, and Microsoft facilitate the operations of every website on the internet. The CDA was enacted precisely to prevent these types of interactions from creating civil liability for the Providers.” Baldino’s Lock & Key Serv., Inc. v. Google LLC, 285 F. Supp. 3d 276, 283 (D.D.C. 2018), aff’d sub nom. Marshall’s Locksmith Serv., 925 F.3d at 1265.
In Dowbenko v. Google Inc., 582 Fed. App’x 801, 805 (11th Cir. 2014), the state law defamation claim was “properly dismissed” as “preempted under § 230(c)(1)” since Google, like Microsoft here, merely hosted the content created by other providers through search services. Here, as to Microsoft’s search engine service, the trial court was correct to grant summary judgment finding Microsoft immune from Mr. White’s defamation claim by operation of Section 230 since Microsoft did not publish any defamatory statement.
Mr. White argues that even if Microsoft is immune for any defamation occurring by way of its internet search engine, Microsoft is still liable as a service that streamed the subject episode. Mr. White points to the two letters from Microsoft in support of his argument. For two reasons, we do not reach whether an internet streaming service is an “interactive service provider” immunized from suit for defamation by Section 230.
First, the trial court could not consider the letters in opposition to the motion for summary judgment. The letters were not referenced in Mr. White’s written response to Microsoft’s motion. They were only in the record in response to a different defendant’s motion for a protective order. So the trial court could disregard the letters in ruling on Microsoft’s motion. See Fla. R. Civ. P. 1.510(c)(5); Lloyd S. Meisels, P.A. v. Dobrofsky, 341 So. 3d 1131, 1136 (Fla. 4th DCA 2022). Without the two letters, Mr. White has no argument that Microsoft was a publisher of the episode.
Second, even considering the two letters referenced by Mr. White, they do not show that Microsoft acted as anything but an interactive computer service. That the subject episode was possibly accessible for streaming via a Microsoft search platform does not mean that Microsoft participated in streaming or publishing the episode

(News and link to the decision from Prof. Eric Goldman's blog.)