Algorithmic discrimination by Facebook's marketplace and the safe harbor under § 230 CDA

Prof. Eric Goldman reports on the Ninth Circuit's appellate decision of June 20, 2023, No. 21-16499, Vargas et al. v. Facebook, a case of alleged discrimination in the presentation of commercial offers on its marketplace.

The complaint: <<The operative complaint alleges that Facebook’s “targeting methods provide tools to exclude women of color, single parents, persons with disabilities and other protected attributes,” so that Plaintiffs were “prevented from having the same opportunity to view ads for housing” that Facebook users who are not in a protected class received>>.

The court holds that the safe harbor does not apply, because Facebook is not an outsider to the unlawful conduct but its co-author, being the creator of the algorithm used in the discriminatory practice:

<<2. The district court also erred by holding that Facebook is immune from liability pursuant to 47 U.S.C. § 230(c)(1). “Immunity from liability exists for ‘(1) a provider or user of an interactive computer service (2) whom a plaintiff seeks to treat, under a [federal or] state law cause of action, as a publisher or speaker (3) of information provided by another information content provider.’” Dyroff v. Ultimate Software Grp., 934 F.3d 1093, 1097 (9th Cir. 2019) (quoting Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100 (9th Cir. 2009)). We agree with Plaintiffs that, taking the allegations in the complaint as true, Plaintiffs’ claims challenge Facebook’s conduct as a co-developer of content and not merely as a publisher of information provided by another information content provider.
Facebook created an Ad Platform that advertisers could use to target advertisements to categories of users. Facebook selected the categories, such as sex, number of children, and location. Facebook then determined which categories applied to each user. For example, Facebook knew that Plaintiff Vargas fell within the categories of single parent, disabled, female, and of Hispanic descent. For some attributes, such as age and gender, Facebook requires users to supply the information. For other attributes, Facebook applies its own algorithms to its vast store of data to determine which categories apply to a particular user.
The Ad Platform allowed advertisers to target specific audiences, both by including categories of persons and by excluding categories of persons, through the use of drop-down menus and toggle buttons. For example, an advertiser could choose to exclude women or persons with children, and an advertiser could draw a boundary around a geographic location and exclude persons falling within that location. Facebook permitted all paid advertisers, including housing advertisers, to use those tools. Housing advertisers allegedly used the tools to exclude protected categories of persons from seeing some advertisements.
As the website’s actions did in Fair Housing Council of San Fernando Valley v. Roommates.com, LLC, 521 F.3d 1157 (9th Cir. 2008) (en banc), Facebook’s own actions “contribute[d] materially to the alleged illegality of the conduct.” Id. at 1168. Facebook created the categories, used its own methodologies to assign users to the categories, and provided simple drop-down menus and toggle buttons to allow housing advertisers to exclude protected categories of persons. Facebook points to three primary aspects of this case that arguably differ from the facts in Roommates.com, but none affects our conclusion that Plaintiffs’ claims challenge Facebook’s own actions>>.

Here are Facebook's three objections and the court's reasons for rejecting each of them:

<<First, in Roommates.com, the website required users who created profiles to self-identify in several protected categories, such as sex and sexual orientation. Id. at 1161. The facts here are identical with respect to two protected categories because Facebook requires users to specify their gender and age. With respect to other categories, it is true that Facebook does not require users to select directly from a list of options, such as whether they have children. But Facebook uses its own algorithms to categorize the user. Whether by the user’s direct selection or by sophisticated inference, Facebook determines the user’s membership in a wide range of categories, and Facebook permits housing advertisers to exclude persons in those categories. We see little meaningful difference between this case and Roommates.com in this regard. Facebook was “much more than a passive transmitter of information provided by others; it [was] the developer, at least in part, of that information.” Id. at 1166. Indeed, Facebook is more of a developer than the website in Roommates.com in one respect because, even if a user did not intend to reveal a particular characteristic, Facebook’s algorithms nevertheless ascertained that information from the user’s online activities and allowed advertisers to target ads depending on the characteristic.
Second, Facebook emphasizes that its tools do not require an advertiser to discriminate with respect to a protected ground. An advertiser may opt to exclude only unprotected categories of persons or may opt not to exclude any categories of persons. This distinction is, at most, a weak one. The website in Roommates.com likewise did not require advertisers to discriminate, because users could select the option that corresponded to all persons of a particular category, such as “straight or gay.” See, e.g., id. at 1165 (“Subscribers who are seeking housing must make a selection from a drop-down menu, again provided by Roommate[s.com], to indicate whether they are willing to live with ‘Straight or gay’ males, only with ‘Straight’ males, only with ‘Gay’ males or with ‘No males.’”). The manner of discrimination offered by Facebook may be less direct in some respects, but as in Roommates.com, Facebook identified persons in protected categories and offered tools that directly and easily allowed advertisers to exclude all persons of a protected category (or several protected categories).
Finally, Facebook urges us to conclude that the tools at issue here are “neutral” because they are offered to all advertisers, not just housing advertisers, and the use of the tools in some contexts is legal. We agree that the broad availability of the tools distinguishes this case to some extent from the website in Roommates.com, which pertained solely to housing. But we are unpersuaded that the distinction leads to a different ultimate result here. According to the complaint, Facebook promotes the effectiveness of its advertising tools specifically to housing advertisers. “For example, Facebook promotes its Ad Platform with ‘success stories,’ including stories from a housing developer, a real estate agency, a mortgage lender, a real estate-focused marketing agency, and a search tool for rental housing.” A patently discriminatory tool offered specifically and knowingly to housing advertisers does not become “neutral” within the meaning of this doctrine simply because the tool is also offered to others>>.

Discrimination and the safe harbor under § 230 CDA on Facebook

The Eastern District of Pennsylvania, September 30, 2022, Case 2:21-cv-05325-JHS, Amro Elansari v. Meta, rejects a claim alleging discrimination by Facebook against Islamic materials uploaded to the platform.

The claim fails both on the merits, the plaintiff having proven neither discrimination nor that Facebook is a public accommodation (under the Civil Rights Act), and as a threshold matter under the § 230 CDA immunity.

Nothing particularly interesting or innovative here.

(news of, and link to, the decision from Prof. Eric Goldman's blog)

Discrimination in housing searches via Facebook: proof is lacking

A claim for violation of the Fair Housing Act and analogous state statutes (no results, or an unjustified difference in results compared with another person of a different ethnicity, from searches allegedly because they were run from accounts of so-called Latino ethnicity) is rejected for lack of proof.

In Italy, see especially Legislative Decree No. 216 of 9 July 2003 and Legislative Decree No. 215 of the same date (the reference author on the subject is Prof. Daniele Maffeis, in many writings including this one).

In the Anglo-American world, especially in the United States, there is an enormous literature on the subject: see, e.g., Rebecca Kelly Slaughter, Janice Kopec & Mohamad Batal, Algorithms and Economic Justice: A Taxonomy of Harms and a Path Forward for the Federal Trade Commission, Yale Journal of Law & Technology.

The court writes:

<<In sum, what the plaintiffs have alleged is that they each used Facebook to search for housing based on identified criteria and that no results were returned that met their criteria. They assume (but plead no facts to support) that no results were returned because unidentified advertisers theoretically used Facebook’s Targeting Ad tools to exclude them based on their protected class statuses from seeing paid Ads for housing that they assume (again, with no facts alleged in support) were available and would have otherwise met their criteria. Plaintiffs’ claim that Facebook denied them access to unidentified Ads is the sort of generalized grievance that is insufficient to support standing. See, e.g., Carroll v. Nakatani, 342 F.3d 934, 940 (9th Cir. 2003) (“The Supreme Court has repeatedly refused to recognize a generalized grievance against allegedly illegal government conduct as sufficient to confer standing” and when “a government actor discriminates on the basis of race, the resulting injury ‘accords a basis for standing only to those persons who are personally denied equal treatment.’” (quoting Allen v. Wright, 468 U.S. 737, 755 (1984)). Having failed to plead facts supporting a plausible injury in fact sufficient to confer standing on any plaintiff, the TAC is DISMISSED with prejudice>>.

So held the Northern District of California, August 20, 2021, Case 3:19-cv-05081-WHO, Vargas v. Facebook.

The court then adds that, even setting the above aside, Facebook would be protected by the safe harbor under § 230 CDA, notwithstanding the well-known 2008 Roommates precedent, from which the case at hand differs:

<<Roommates is materially distinguishable from this case based on plaintiffs’ allegations in the TAC that the now-defunct Ad Targeting process was made available by Facebook for optional use by advertisers placing a host of different types of paid advertisements. Unlike in Roommates where use of the discriminatory criteria was mandated, here use of the tools was neither mandated nor inherently discriminatory given the design of the tools for use by a wide variety of advertisers.

In Dyroff, the Ninth Circuit concluded that tools created by the website creator there, “recommendations and notifications” the website sent to users based on the users’ inquiries that ultimately connected a drug dealer and a drug purchaser, did not turn the defendant who controlled the website into a content creator unshielded by CDA immunity. The panel confirmed that the tools were “meant to facilitate the communication and content of others. They are not content in and of themselves.” Dyroff, 934 F.3d 1093, 1098 (9th Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020); see also Carafano v. Metrosplash.com, Inc., 339 F.3d 1119, 1124 (9th Cir. 2003) (where website “questionnaire facilitated the expression of information by individual users” including proposing sexually suggestive phrases that could facilitate the development of libelous profiles, but left “selection of the content [] exclusively to the user,” and defendant was not “responsible, even in part, for associating certain multiple choice responses with a set of physical characteristics, a group of essay answers, and a photograph,” website operator was not information content provider falling outside Section 230’s immunity); Goddard v. Google, Inc., 640 F. Supp. 2d 1193, 1197 (N.D. Cal. 2009) (no liability based on Google’s use of “Keyword Tool,” that employs “an algorithm to suggest specific keywords to advertisers”).

Here, the Ad Tools are neutral. It is the users “that ultimately determine what content to post, such that the tool merely provides ‘a framework that could be utilized for proper or improper purposes, . . . .’” Roommates, 521 F.3d at 1172 (analyzing Carafano). Therefore, even if the plaintiffs could allege facts supporting a plausible injury, their claims are barred by Section 230.>>

(news of, and link to, the decision from Eric Goldman's blog)

Contract action against YouTube over ethnic/racial discrimination rejected by a California court

The Northern District of California, San Jose Division, June 25, 2021, Kimberly Carleste Newman et al. v. Google et al., Case No. 20-CV-04011-LHK, rejects various contractual claims brought by users against YouTube and based on alleged racial discrimination.

The plaintiffs, who run channels on YouTube, claim to have been discriminated against in various ways: unjustified filtering, solely on account of their racial background, under Restricted Mode; reduction or elimination of monetization opportunities by not being matched with advertisements; shadow banning and other practices, e.g. classifying their videos as subject to Restricted Mode (details at pp. 2-4).

The claim for violation of 42 U.S.C. § 1981 (Equal rights under the law: antidiscrimination provision) is rejected for lack of proof of intent, pp. 9 ff.

Of particular interest here, however, is the First Amendment point, pp. 15 ff.: YouTube's conduct is not state action, nor does it become state action by virtue of the statutory protection offered by the safe harbor under § 230 CDA (a rather far-fetched theory, indeed).

(news of, and link to, the decision from Eric Goldman's blog)