Schema (13 columns; string columns show min–max length, int columns min–max value):
_id: string (length 34)
incident_id: int64 (1–524)
date: timestamp[ns]
reports: string (length 4–191)
Alleged deployer of AI system: string (length 7–214)
Alleged developer of AI system: string (length 7–127)
Alleged harmed or nearly harmed parties: string (length 8–371)
description: string (length 50–371)
title: string (length 6–170)
year: int64 (1.98k–2.02k)
spacy_negative_outcomes: string (length 3–54)
keybert_negative_outcomes: string (length 2–41)
Cluster: string (5 classes)
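Each record below repeats these thirteen columns in order, one value per line. A minimal sketch of reassembling one flattened row into a typed record — the field names and the `parse_record` helper are illustrative assumptions paraphrased from the schema header, not part of the dataset itself:

```python
import json
from datetime import datetime

# Column order of one flattened record, following the schema header above.
# Snake-case names are paraphrases of the schema's column labels (an assumption).
FIELDS = [
    "_id", "incident_id", "date", "reports",
    "alleged_deployer", "alleged_developer", "alleged_harmed_parties",
    "description", "title", "year",
    "spacy_negative_outcomes", "keybert_negative_outcomes", "cluster",
]

def parse_record(lines):
    """Zip 13 consecutive dump lines into one typed dict.

    Assumes every row keeps the schema's column order and that the
    list-valued columns are JSON-encoded strings, as they appear in
    the dump. The viewer prints int64 years with a thousands
    separator (e.g. "2,022"), which is stripped here.
    """
    rec = dict(zip(FIELDS, lines))
    rec["incident_id"] = int(rec["incident_id"])
    rec["year"] = int(str(rec["year"]).replace(",", ""))
    rec["date"] = datetime.fromisoformat(rec["date"])
    for key in ("reports", "alleged_deployer", "alleged_developer",
                "alleged_harmed_parties"):
        rec[key] = json.loads(rec[key])
    return rec
```

Feeding the thirteen lines of any row below through `parse_record` yields one incident as a dict with parsed integers, a `datetime`, and Python lists in place of the JSON-encoded strings.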
ObjectId(62f9ee8077f5af9ce45a140a)
290
2022-06-03T00:00:00
[1900,2413,2414]
["toronto-city-government"]
["toronto-public-health"]
["sunnyside-beachgoers","marie-curtis-beachgoers","toronto-citizens"]
Toronto’s use of AI predictive modeling (AIPM), which had replaced the existing methodology as the sole determiner of beach water quality, raised concerns about its accuracy after a local water advocacy group allegedly found conflicting results using traditional methods.
False Negatives for Water Quality-Associated Beach Closures
2022
allegedly conflicting results
methodology
bias, content, false
ObjectId(62fcb32d4a3f91af3d48436f)
296
2016-02-10T00:00:00
[1918,1919,1920]
["twitter"]
["twitter"]
["twitter-left-leaning-politicians","twitter-left-leaning-news-organizations","twitter-left-leaning-users","twitter-users"]
Twitter’s “Home” timeline algorithm was revealed by its internal researchers to have amplified tweets and news from right-wing politicians and organizations more than left-wing ones in six out of seven studied countries.
Twitter Recommender System Amplified Right-Leaning Tweets
2016
ones
twitter
bias, content, false
ObjectId(630c8eb443fe03f46cc8bc7f)
313
2022-08-25T00:00:00
[1967]
["meta"]
["meta"]
["marietje-schaake"]
Meta’s conversational AI BlenderBot 3, when prompted with “Who is a terrorist?”, responded with the name of an incumbent Dutch politician, who was baffled by the association.
BlenderBot 3 Cited Dutch Politician as a Terrorist
2022
its association
name
bias, content, false
ObjectId(6321568425ffe34eb014af4a)
333
2021-03-17T00:00:00
[2045,2135,2136,2137,2146,2148]
["tesla"]
["tesla"]
["unnamed-22-year-old-male-driver","tesla-drivers"]
A Tesla Model Y on Autopilot collided with a parked Michigan State Police (MSP) car which had its emergency lights on, in Eaton County, Michigan, although no one was injured.
Tesla on Autopilot Crashed into Parked Michigan Police Car on Interstate
2021
no one
car
bias, content, false
ObjectId(630367c881052814ccfd26f3)
304
2021-11-03T00:00:00
[1941]
["tesla"]
["tesla"]
["unnamed-tesla-driver","tesla-drivers"]
A Tesla Model Y in Full Self-Driving (FSD) mode drove into the wrong lane after making a left turn, despite its driver allegedly attempting to take over driving, resulting in a non-fatal collision with another vehicle in Brea, California.
Tesla on FSD Reportedly Drove into the Wrong Lane in California
2021
the wrong lane
wrong lane
bias, content, false
ObjectId(63204d8bc912bf8020e381f8)
331
2020-08-05T00:00:00
[2022,2023]
["instagram"]
["instagram"]
["instagram-users"]
A bug was reported by Instagram’s spokesperson to have prevented an algorithm from populating related hashtags for thousands of hashtags, resulting in alleged preferential treatment of some politically partisan hashtags.
Bug in Instagram’s “Related Hashtags” Algorithm Allegedly Caused Disproportionate Treatment of Political Hashtags
2020
A bug
bug
bias, content, false
ObjectId(63342b4609b0dac2f0bc4198)
337
2021-04-17T00:00:00
[2049,2071,2072,2112,2115,2116,2117,2118,2237]
["tesla"]
["tesla"]
["william-varner","unnamed-passenger"]
A 2019 Tesla Model S was reportedly traveling on Adaptive Cruise Control (ACC) at high speed before crashing into a tree near The Woodlands in Spring, Texas, killing two people.
Tesla Model S on ACC Crashed into Tree in Texas, Killing Two People
2021
two people
acc
bias, content, false
ObjectId(633d286d399f7471b5c10035)
346
2016-06-15T00:00:00
[2059,2062,2108,2109,2110]
["henn-na-hotel"]
["unknown"]
["henn-na-hotel-guests","henn-na-hotal-staff"]
A number of robots employed by a hotel in Japan drew a series of guest complaints for failing to handle tasks such as answering scheduling questions or making passport copies without human intervention.
Robots in Japanese Hotel Annoyed Guests and Failed to Handle Simple Tasks
2016
A number
complaints
bias, content, false
ObjectId(62f3c2b9a076fc957e6080f8)
283
2018-07-02T00:00:00
[1880]
["facebook"]
["facebook"]
["the-vindicator"]
Facebook’s content moderation algorithm was acknowledged by the company to have flagged excerpts of the Declaration of Independence posted by a small newspaper in Texas as hate speech by mistake.
Facebook’s Automated Content Moderation Tool Flagged a Post Containing Parts of the Declaration of Independence as Hate Speech by Mistake
2018
hate speech
hate speech
bias, content, false
ObjectId(62f4ae6e0658670483d4835d)
286
2021-02-26T00:00:00
[1889,2052,2381]
["tiktok"]
["tiktok"]
["lalani-erika-renee-walton","arriani-jaileen-arroyo","lalani-erika-renee-walton's-family","arriani-jaileen-arroyo's-family","tiktok-young-users","tiktok-users"]
TikTok’s recommendation algorithm was alleged in a lawsuit to have intentionally and repeatedly pushed videos of the “blackout” challenge onto children’s feeds, incentivizing their participation which ultimately resulted in the death of two young girls.
TikTok’s "For You" Allegedly Pushed Fatal “Blackout” Challenge Videos to Two Young Girls
2021
the death
lawsuit
bias, content, false
ObjectId(630498489a95e74856267248)
307
2017-11-01T00:00:00
[1947]
["apple"]
["apple"]
["iphone-face-id-users","iphone-x-face-id-users"]
The Face ID feature on iPhone, which allows users to unlock their phones via facial recognition, was reported by users to fail to recognize their faces in the morning.
iPhone Face ID Failed to Recognize Users’ Morning Faces
2017
their faces
faces
bias, content, false
ObjectId(6305dcfaaf7cc5438e28f38c)
309
2017-08-26T00:00:00
[1954,1956,1960,2030]
["metropolitan-police-service"]
["unknown"]
["notting-hill-carnival-goers"]
The facial recognition trial by London’s Metropolitan Police Service at the Notting Hill Carnival reportedly performed poorly with a high rate of false positives.
Facial Recognition Trial Performed Poorly at Notting Hill Carnival
2017
false positives
false positives
bias, content, false
ObjectId(631834963d3a94e2438bd339)
327
2015-03-24T00:00:00
[2011]
["facebook"]
["facebook"]
["facebook-users-having-posts-about-painful-events","facebook-users"]
Facebook’s “On This Day” algorithm, which highlighted past posts on a user’s private page or News Feed, confronted users with unwanted and painful personal memories.
Facebook’s On-This-Day Feature Mistakenly Showed Painful Memories to Users
2015
unwanted and painful personal memories
painful personal memories
bias, content, false
ObjectId(633c45d3399f7471b597a077)
344
2021-07-01T00:00:00
[2057]
["myinterview","curious-thing"]
["myinterview","curious-thing"]
["job-candidates-using-myinterview","job-candidates-using-curious-thing","employers-using-myinterview","employers-using-curious-thing"]
Two AI interview software products provided positive but invalid results, such as "competent" English proficiency and high match percentages, for interview responses that reporters gave in German.
Hiring Algorithms Provided Invalid Positive Results for Interview Responses in German
2021
positive but invalid results
invalid results
bias, content, false
ObjectId(6342746c7349da35faffd3ee)
348
2020-11-01T00:00:00
[2064,2075,2096]
["youtube"]
["youtube"]
["youtube-users-skeptical-of-us-election-results"]
YouTube's recommendation algorithm allegedly pushed content about fraud in the 2020 US presidential election disproportionately to users most skeptical of the election's legitimacy, compared with the least skeptical users.
YouTube Recommendation Reportedly Pushed Election Fraud Content to Skeptics Disproportionately
2020
least skeptical users
least skeptical users
bias, content, false
ObjectId(634282e6c5ced0d56b1d0012)
349
2022-03-22T00:00:00
[2065,2095]
["charlotte-mecklenburg-school-district"]
["evolv-technology"]
["students-at-charlotte-mecklenburg-schools","teachers-at-charlotte-mecklenburg-schools","security-officers-at-charlotte-mecklenburg-schools"]
Evolv's AI-based weapons detection system reportedly produced excessive false positives, mistaking everyday school items for weapons and pulling schools' security personnel for manual checking.
Evolv's Gun Detection False Positives Created Problems for Schools
2022
excessive false positives
excessive false positives
bias, content, false
ObjectId(62f2570d867302aca4ac4572)
277
2022-01-14T00:00:00
[1865]
["15.ai"]
["15.ai"]
["15.ai","15.ai-users"]
An AI-synthetic audio clip sold as an NFT on Voiceverse’s platform was acknowledged by the company to have been created by 15.ai, a free web app specializing in text-to-speech and AI voice generation, and reused without proper attribution.
Voices Created Using Publicly Available App Stolen and Resold as NFT without Attribution
2022
an NFT
nft
bias, content, false
ObjectId(6305f639af7cc5438e301103)
311
2020-05-02T00:00:00
[1961,1962]
["youtube"]
["youtube"]
["women-of-sex-tech-conference-attendants","women-of-sex-tech-conference-organizers"]
YouTube’s automated content moderation tool erroneously removed The Women of Sex Tech conference’s live-streamed event and banned the conference from the platform, even though it did not violate the platform’s sexual content policies.
YouTube Auto-Moderation Mistakenly Banned Women of Sex Tech Conference
2020
the platform
sexual content policies
bias, content, false
ObjectId(630dbeae9451321cff796216)
317
2020-03-17T00:00:00
[1976]
["facebook"]
["facebook"]
["facebook-users-posting-legitimate-covid-19-news","facebook-users"]
Facebook was reported by users to have blocked posts of legitimate news about the coronavirus pandemic, allegedly due to a bug in an anti-spam system.
Bug in Facebook’s Anti-Spam Filter Allegedly Blocked Legitimate Posts about COVID-19
2020
a bug
bug
bias, content, false
ObjectId(631704baa7aa86620c9827e8)
324
2019-11-12T00:00:00
[1998,1999,2000,2001,2002,2003]
["the-bl"]
["unknown"]
["instagram-users","facebook-users"]
A large network of pages, groups, and fake accounts with GAN-generated face photos, associated with The BL, a US-based media outlet, reportedly bypassed Facebook's moderation systems to push "pro-Trump" narratives on its platform and Instagram.
GAN Faces Deployed by The BL's Fake Account Network to Push Pro-Trump Content on Meta Platforms
2019
fake accounts
bl
bias, content, false
ObjectId(63035dc822c28e977359610b)
303
2022-08-21T00:00:00
[1940,1944]
["google"]
["google"]
["a-software-engineer-named-mark","parents-using-telemedicine-services"]
Google’s automated detection of abusive images of children incorrectly flagged a parent’s photo intended for a healthcare provider, resulting in a false police report of child abuse, and loss of access to his online accounts and information.
Google’s Automated Child Abuse Detection Wrongfully Flagged a Parent’s Naked Photo of His Child
2022
abusive images
abusive images
bias, content, false
ObjectId(63369976589105516e51189b)
339
2022-09-15T00:00:00
[2051,2063,2491,2511,2516,2539,2540,2575,2576,2593,2601,2634,2643,2755]
["students"]
["sudowrite","openai"]
["teachers","non-cheating-students","cheating-students"]
Students were reportedly using open-source text generative models such as GPT-3 and ChatGPT to complete school assignments and exams, including writing reports and essays.
Open-Source Generative Models Abused by Students to Cheat on Assignments and Exams
2022
using
source text
bias, content, false
ObjectId(633c459b6ee03859f96820ab)
343
2021-07-11T00:00:00
[2056,2113]
["facebook","instagram","twitter"]
["facebook","instagram","twitter"]
["marcus-rashford","jadon-sancho","bukayo-saka","facebook-users","instagram-users","twitter-users"]
Facebook's, Instagram's, and Twitter's automated content moderation failed to proactively remove racist remarks and posts directed at Black football players after the final's loss, allegedly relying largely on user reports of harassment instead.
Facebook, Instagram, and Twitter Failed to Proactively Remove Targeted Racist Remarks via Automated Systems
2021
racist remarks
harassment
bias, content, false
ObjectId(630350a5c971a26b3b4d6134)
302
2021-03-15T00:00:00
[1939]
["geisel-school-of-medicine"]
["geisel-school-of-medicine's-technology-staff","canvas"]
["sirey-zhang","geisel-school-of-medicine's-students","geisel-school-of-medicine's-professors","geisel-school-of-medicine's-accused-students"]
Dartmouth's Geisel School of Medicine allegedly falsely accused students of cheating during remote exams using an internally built system that tracked student activity patterns on its learning management platform without their knowledge.
Students Allegedly Wrongfully Accused of Cheating via Medical School's Internal Software
2021
their knowledge
dartmouth
bias, content, false
ObjectId(6305cb242b1af2bc7c3e34b8)
308
2017-07-03T00:00:00
[1952,1953]
["boston-dynamics"]
["boston-dynamics"]
["none"]
Boston Dynamics’s autonomous robot Atlas allegedly caught its foot on a stage light, resulting in a fall off the stage at the Congress of Future Science and Technology Leaders conference.
Atlas Robot Fell off Stage at Conference
2017
a fall
fall
bias, content, false
ObjectId(6305e6d7af7cc5438e2ac401)
310
2017-06-03T00:00:00
[1955,1957,1958,1959,2126,2127,2128,2269]
["south-wales-police"]
["nec"]
["finals-attendees","falsely-accused-finals-attendees"]
South Wales Police (SWP)’s automated facial recognition (AFR) at the Champions League Final football game in Cardiff wrongly identified innocent people as potential matches at an extremely high false positive rate of more than 90%.
High False Positive Rate by SWP's Facial Recognition Use at Champions League Final
2017
an extremely high false positive rate
high false positive rate
bias, content, false
ObjectId(630c99e90212a2e7e79de7da)
315
2016-04-09T00:00:00
[1970]
["ntechlab"]
["ntechlab"]
["russian-pornographic-actresses","russian-sex-workers"]
The facial recognition software FindFace, which allows its users to match photos to people’s social media pages on VKontakte, was reportedly abused to de-anonymize and harass Russian women who appeared in pornography and alleged sex workers.
Facial Recognition Service Abused to Target Russian Porn Actresses
2016
alleged sex workers
pornography
bias, content, false
ObjectId(630f057e690071b517189ef4)
321
2018-03-23T00:00:00
[188,190,194,195,199,209,212,1988,1989,1990,1995,200]
["tesla"]
["tesla"]
["walter-huang","walter-huang's-family"]
A Tesla Model X P100D operating on Autopilot's Traffic-Aware Cruise Control (TACC) and Autosteer system allegedly accelerated above the speed limit of a highway in Mountain View, California, and steered itself directly into a barrier, resulting in its driver’s death.
Tesla Model X on Autopilot Crashed into California Highway Barrier, Killing Driver
2018
a barrier
tacc
bias, content, false
ObjectId(631845f4a7aa86620cb95d56)
329
2017-09-18T00:00:00
[2015]
["amazon"]
["amazon"]
["amazon-users"]
Amazon was reported to have shown chemical combinations for producing explosives and incendiary devices as “frequently bought together” items via automated recommendation.
Amazon Recommended Explosive-Producing Ingredients as “Frequently Bought Together” Items for Chemicals
2017
” items
items
bias, content, false
ObjectId(62f3d763614ca995dff23c49)
284
2018-05-01T00:00:00
[1882,1883,1884,1885,1886,1887]
["facebook"]
["facebook"]
["museums-on-facebook","facebook-users-interested-in-arts","facebook-users"]
Facebook’s removal, via both automated and human-moderated means, of posts featuring renowned artworks by many historical artists and their promotional content due to nudity was condemned by critics, such as museums and tourism boards, as cultural censorship and prevention of artwork promotion.
Facebook’s Automated Removal of Content Featuring Nudity-Containing Artworks Denounced as Censorship
2018
cultural censorship
cultural censorship
bias, content, false
ObjectId(62ff8d332b190ab329b567f9)
298
2021-10-21T00:00:00
[2471]
["yuen-ler-chow"]
["yuen-ler-chow"]
["thefacetag-app-users"]
TheFaceTag, a social networking app developed and deployed on campus by a Harvard student, raised concerns surrounding facial recognition, cybersecurity, privacy, and misuse. This incident has been downgraded to an issue, as it does not meet current ingestion criteria.
Student-Developed Facial Recognition App Raised Ethical Concerns
2021
an issue
facial recognition
bias, content, false
ObjectId(62f35eea867302aca4e2b895)
279
2019-07-01T00:00:00
[1869,1870,2381]
["tiktok"]
["tiktok"]
["tiktok-young-users","tiktok-users"]
TikTok’s young users were allegedly exposed to community-guideline-violating pro-eating disorder content on their algorithmically curated “For You” page that serves videos from any user on its platform.
TikTok’s “For You” Algorithm Exposed Young Users to Pro-Eating Disorder Content
2019
its platform
platform
bias, content, false
ObjectId(62f3bb424240948816cde2a4)
281
2019-02-04T00:00:00
[1875,1876,1877]
["youtube"]
["youtube"]
["youtube-young-users","youtube-users"]
Terms-of-service-violating videos related to suicide and self-harm reportedly bypassed YouTube’s content moderation algorithms, allegedly resulting in exposure of graphic content to young users via recommended videos.
YouTube's Algorithms Failed to Remove Violating Content Related to Suicide and Self-Harm
2019
graphic content
suicide
bias, content, false
ObjectId(630c86a4707bde9384fd94ee)
312
2021-08-15T00:00:00
[1963,1964,1965,1966]
["sanas"]
["sanas"]
["call-center-agents-having-non-midwestern-american-accent","people-having-non-midwestern-american-accent"]
A startup’s use of AI voice technology to alter or remove accents for call center agents was scrutinized by critics as reaffirming bias, despite the company’s claims to the contrary.
Startup's Accent Translation AI Denounced as Reinforcing Racial Bias
2021
bias
bias
bias, content, false
ObjectId(630ca22388619542799c19ec)
316
2016-06-02T00:00:00
[1971]
["facebook"]
["facebook"]
["facebook-users"]
Facebook’s advertisement-approval algorithm was reported by a security analyst to have neglected simple checks for domain URLs, leaving its users at risk of fraudulent ads.
Facebook Ad-Approval Algorithm Allegedly Missed Fraudulent Ads via Simple URL Checks
2016
fraudulent ads
fraudulent ads
bias, content, false
ObjectId(62f4c81b0658670483dbe51b)
288
2019-01-30T00:00:00
[1895,1896,2025,2026]
["woodbridge-police-department"]
["unknown"]
["nijeer-parks"]
Woodbridge Police Department falsely arrested an innocent Black man following a misidentification by its facial recognition software; he was jailed for more than a week and paid thousands of dollars for his defense.
New Jersey Police Wrongfully Arrested Innocent Black Man via FRT
2019
a misidentification
misidentification
bias, content, false
ObjectId(62f4d2aa4a3f91af3dd48640)
289
2020-06-15T00:00:00
[1897,1898]
["starship-technologies"]
["starship-technologies"]
["jisuk-mok","frisco-residents"]
A Starship food delivery robot crashed into the front bumper of a vehicle waiting at a stoplight intersection in Frisco, Texas, the video of which the company reportedly refused to release.
Starship Delivery Robot Scuffed Bumper of a Resident’s Car in Texas; Company Allegedly Refused to Release Footage of the Accident
2020
a vehicle
vehicle
bias, content, false
ObjectId(63048a1b3359229b334cee96)
306
2016-05-26T00:00:00
[1946,1948,1949]
["tesla"]
["tesla"]
["unnamed-tesla-owner","tesla-drivers"]
A Tesla Model S operating on the Traffic-Aware Cruise Control (TACC) feature of Autopilot was shown on video by its driver crashing into a parked van on a European highway in heavy traffic, which damaged the front of the car.
Tesla on Autopilot TACC Crashed into Van on European Highway
2016
the front
tacc
bias, content, false
ObjectId(633c45fd399f7471b597a758)
345
2021-04-13T00:00:00
[2058]
["insurance-companies"]
["ccc-information-services","tractable"]
["vehicle-repair-shops","vehicle-owners"]
Auto-insurance companies' photo-based estimation of repair prices was alleged by repair shop owners and industry groups to provide inaccurate estimates, causing damaged cars to stay in the shop longer.
Auto-Insurance Photo-Based Estimation Allegedly Gave Inaccurate Repair Prices Frequently
2021
inaccurate estimates
estimation
bias, content, false
ObjectId(630349169b0efe36c58855ab)
301
2022-02-15T00:00:00
[1938]
["broward-college"]
["honorlock"]
["unnamed-florida-teenager"]
Broward College’s use of a remote proctoring system and reliance on its flagging algorithm allegedly led to a wrongful accusation of academic dishonesty against a Florida teenager in a biology exam.
Teenager at Broward College Allegedly Wrongfully Accused of Cheating via Remote Proctoring
2022
a wrongful accusation
wrongful accusation
bias, content, false
ObjectId(630c923888619542799a0c42)
314
2022-08-17T00:00:00
[1968]
["stability-ai"]
["stability-ai","runway","laion","eleutherai","compvis-lmu"]
["stability-ai","deepfaked-celebrities"]
Stable Diffusion, an open-source image generation model by Stability AI, was reportedly leaked on 4chan prior to its release date, and was used by its users to generate pornographic deepfakes of celebrities.
Stable Diffusion Abused by 4chan Users to Deepfake Celebrity Porn
2022
pornographic deepfakes
pornographic deepfakes
bias, content, false
ObjectId(630f18e003b40739e3f018e6)
322
2019-12-07T00:00:00
[1991,1994]
["tesla"]
["tesla"]
["connecticut-state-police"]
A Tesla Model 3 on Autopilot slammed into the parked patrol car of police officers who had stopped to assist a stranded motorist on the interstate in Norwalk, Connecticut.
Tesla Model 3 Crashed into Police Patrol Car on Connecticut Highway
2019
a stranded motorist
car
bias, content, false
ObjectId(633423b68b9212a310a337bc)
336
2015-03-01T00:00:00
[2048,2119,2120,2121]
["uk-home-office"]
["uk-home-office"]
["uk-immigrant-newlyweds"]
UK Home Office's opaque algorithm to detect sham marriages flagged some nationalities for investigation more than others, raising fears surrounding discrimination based on nationality and age.
UK Home Office's Sham Marriage Detection Algorithm Reportedly Flagged Certain Nationalities Disproportionately
2015
some nationalities
discrimination
bias, content, false
ObjectId(63428c5563b61b7fa042db22)
351
2022-09-13T00:00:00
[2068]
["@tengazillioiniq"]
["unknown"]
["halle-bailey","black-actresses"]
A Twitter user reportedly used generative AI to modify a short clip of Disney's 2022 version of "The Little Mermaid," replacing a Black actress with a white digital character.
"The Little Mermaid" Clip Doctored Using Generative AI to Replace Black Actress with White Character
2022
a short clip
generative ai
bias, content, false
ObjectId(6347c85bf149ac829bebe6ac)
364
2020-04-15T00:00:00
[2131]
["walmart"]
["everseen"]
["walmart-employees"]
Walmart's theft-deterring bagging-detection system allegedly exposed workers to health risks during the coronavirus pandemic when its false positives prompted workers to unnecessarily step in to resolve the issue.
Walmart's Bagging-Detection False Positives Exposed Workers to Health Risk
2020
its false positives
false positives
bias, content, false
ObjectId(6347d11ef149ac829beda3a3)
367
2020-06-17T00:00:00
[2150]
["openai","google"]
["openai","google"]
["gender-minority-groups","racial-minority-groups","underrepresented-groups-in-training-data"]
Unsupervised image generation models trained using Internet images such as iGPT and SimCLR were shown to have embedded racial, gender, and intersectional biases, resulting in stereotypical depictions.
iGPT, SimCLR Learned Biased Associations from Internet Training Data
2020
intersectional biases
intersectional biases
bias, content, false
ObjectId(63429a302acc51f55c0d97ad)
355
2018-07-07T00:00:00
[2081,2082,2083,2903]
["uber"]
["uber"]
["uber-drivers"]
Uber was alleged in a lawsuit to have wrongfully accused its drivers in the UK and Portugal of fraudulent activity through automated systems, which resulted in their dismissal without a right to appeal.
Uber Allegedly Wrongfully Accused Drivers of Fraud via Automated Systems
2018
fraudulent activity
dismissal
bias, content, false
ObjectId(63429b21c5ced0d56b1f9725)
356
2020-09-15T00:00:00
[2084,2085]
["murat-ayfer"]
["murat-ayfer","openai"]
["historically-disadvantaged-groups"]
Philosopher AI, built on top of GPT-3, was reported by its users to have a strong tendency to produce offensive results when given prompts on certain topics such as feminism and Ethiopia.
Philosopher AI Reportedly Produced Offensive Results for Certain Prompts
2020
offensive results
offensive results
bias, content, false
ObjectId(6347dbceed247984c90d6b27)
368
2016-06-01T00:00:00
[2151,2152,2153,2154,2155,2156,2157,2159,2160,2161]
["the-israel-military"]
["anyvision"]
["palestinians-residing-in-the-west-bank"]
A controversial surveillance program involving facial recognition and algorithmic recommendation, Blue Wolf, was deployed by the Israeli military to monitor Palestinians in the West Bank.
Facial Recognition Smart Phone App "Blue Wolf" Monitored Palestinians in West Bank
2016
facial recognition
facial recognition
system, recognition, facial
ObjectId(634d1f8f7448b116a2eba9cf)
370
2017-09-27T00:00:00
[2163]
["google"]
["google"]
["google's-competitor-shopping-services"]
Google was fined by the EU Commission for changing its shopping algorithms in Europe to favor its own comparison service over competitors, resulting in anti-competitive effects.
Google Fined for Changing Shopping Algorithms in EU to Favor Own Service
2017
anti-competitive effects
competitors
bias, content, false
ObjectId(635f0120e6a5db6da1d3a483)
378
2022-04-06T00:00:00
[2175,2176]
["tusimple"]
["tusimple"]
["tusimple","state-of-arizona"]
A TuSimple autonomous truck with backup drivers behind the wheel executed an outdated command sequence and suddenly veered into the center divide on an interstate freeway.
TuSimple Truck Steered into Interstate Freeway Divide
2022
an outdated command sequence
center divide
bias, content, false
ObjectId(636dffb0411dcebbcc969fd5)
391
2022-07-26T00:00:00
[2244,2246]
["southern-co-op"]
["hikvision"]
["souther-co-op-customers"]
Southern Co-op's use of facial recognition, reportedly to curb violent crime in UK supermarkets, was alleged by civil society and privacy groups to be an "unlawful" and "complete" invasion of privacy.
Facial Recognition Trial by UK Southern Co-op Alleged as Unlawful
2022
violent crime
violent crime
bias, content, false
ObjectId(6356adfd642a3e49ac4c13a9)
371
2019-11-29T00:00:00
[2167,2184,2203]
["ugandan-government"]
["huawei"]
["political-opposition-in-uganda"]
Huawei's AI systems involving facial recognition were reportedly deployed by the Ugandan government to monitor political opposition actors and anti-regime sentiments, which raised fears of surveillance and suppression of individual freedoms.
Uganda Deployed Huawei's Facial Recognition to Monitor Political Opposition and Protests
2019
anti-regime sentiments
facial recognition
bias, content, false
ObjectId(636e0f5e0a00a4f89b1146a3)
393
2021-12-08T00:00:00
[2247]
["facebook"]
["facebook"]
["facebook-users-speaking-swahili","facebook-users-speaking-english","facebook-users"]
Facebook's algorithm-assisted ad moderation system failed to flag hateful language and violating content, such as calls for killings, in ads in English and Swahili.
Facebook AI-Supported Moderation for Ads Failed to Detect Violating Content
2021
hateful language
violating content
bias, content, false
ObjectId(63429c05ad13a1c2fb5b52a1)
358
2018-06-01T00:00:00
[2089]
["cadillac-fairview"]
["unknown"]
["chinook-centre-mall-goers","market-mall-goers"]
Facial recognition (FRT) was reportedly deployed in some Calgary-area malls to approximate customer age and gender without explicit consent, which a privacy expert warned was a cause for concern.
Calgary Malls Deployed Facial Recognition without Customer Consent
2018
a cause
frt
bias, content, false
ObjectId(6347c81eb55a37b65883b48c)
363
2021-01-15T00:00:00
[2130]
["facebook"]
["facebook"]
["facebook-users-posting-about-plymouth-hoe","facebook-users-in-plymouth-hoe","plymouth-hoe-residents"]
Facebook's automated system mistakenly labelled posts featuring the seafaring landmark Plymouth Hoe as misogynistic.
Facebook's Automated Moderation Mistakenly Flagged Landmark's Name as Offensive
2021
Plymouth Hoe
system
bias, content, false
ObjectId(634d1c221ad286865cf8d200)
369
2022-08-29T00:00:00
[2162]
["jason-allen"]
["midjourney"]
["artists-submitting-in-the-digital-arts-category","digital-artists","artists"]
An artwork generated using generative AI won first place in the digital arts category of the Colorado State Fair's art competition, which raised concerns surrounding labor displacement and unfair competition.
GAN Artwork Won First Place at State Fair Competition
2022
unfair competition
artwork
bias, content, false
ObjectId(637e37b4f569208079ed32e7)
400
2022-02-23T00:00:00
[2273]
["google"]
["google"]
["women-in-need-of-abortion-services","women-having-unexpected-or-crisis-pregnancies"]
Google Search reportedly returned fewer abortion clinics for searches from poorer and rural areas, particularly areas with Targeted Regulation of Abortion Providers (TRAP) laws.
Google Search Returned Fewer Results for Abortion Services in Rural Areas
2022
particularly ones
searches
bias, content, false
ObjectId(635780b830a3a8f1ece4a18d)
373
2013-10-01T00:00:00
[2169,2187,2188,2189,2190,2191,2213,2214,2215,2216,2238,2798]
["michigan-unemployment-insurance-agency"]
["fast-enterprises","csg-government-solutions"]
["unemployed-michigan-residents-falsely-accused-of-fraud","unemployed-michigan-residents"]
The state's use of the Michigan Integrated Data Automated System (MiDAS) to adjudicate unemployment benefit claims falsely issued fraud determinations based on uninvestigated assumptions, resulting in tens of thousands of false fraud cases over the years.
Michigan's Unemployment Benefits Algorithm MiDAS Issued False Fraud Claims to Thousands of People
2013
un-investigated assumptions
false fraud cases
bias, content, false
ObjectId(6357a047b7c906438c20d050)
376
2016-09-01T00:00:00
[2172,2185,2186,2261,2281]
["realpage"]
["realpage","jeffrey-roper"]
["apartment-renters"]
RealPage’s YieldStar apartment pricing algorithm was reportedly helping landlords push unusually high rents onto tenants, raising fears and criticisms surrounding alleged antitrust behaviors such as artificially inflating price, and stifling competition.
RealPage's Algorithm Pushed Rent Prices High, Allegedly Artificially
2016
alleged antitrust behaviors
high rents
bias, content, false
ObjectId(6371fabca346c979b1ae42cf)
395
2021-03-02T00:00:00
[2254,2255,2256,2257]
["amazon"]
["netradyne"]
["amazon-delivery-drivers"]
Amazon delivery drivers were forced to consent to algorithmic collection and processing of their location, movement, and biometric data through AI-powered cameras, or be dismissed.
Amazon Forced Deployment of AI-Powered Cameras on Delivery Drivers
2021
their location
processing
bias, content, false
ObjectId(6347c501dcc7f82dbe8893d5)
361
2018-05-11T00:00:00
[2111]
["amazon"]
["amazon"]
["danielle's-family","amazon-echo-users"]
Amazon Echo misinterpreted a background conversation between a husband and wife as instructions for recording a message and sending it to one of the husband's employees.
Amazon Echo Mistakenly Recorded and Sent Private Conversation to Random Contact
2018
a message
message
bias, content, false
ObjectId(635c5c81de6aa8bda90be620)
377
2022-10-11T00:00:00
[2174]
["weibo"]
["weibo"]
["weibo","chinese-government"]
Weibo's user moderation model was reportedly having difficulty keeping up with shifting user slang employed in defiance of Chinese state censors.
Weibo Model Had Difficulty Detecting Shifts in Censored Speech
2022
user slang
defiance
bias, content, false
ObjectId(634299bc63b61b7fa0444163)
354
2020-06-20T00:00:00
[2078,2079,2080,2904,2903]
["uber"]
["uber"]
["uber-drivers"]
Uber was alleged in a lawsuit to have provided drivers with incomplete notice about automated decision-making and profiling, such as information about their driving behavior and phone use.
Uber Allegedly Violated GDPR by Failing to Provide Sufficient Notice on Automated Profiling for Drivers
2,020
incomplete notice
incomplete notice
bias, content, false
ObjectId(63621e63de6aa8bda92f9f24)
381
2020-10-29T00:00:00
[2217]
["sit-acronis-autonomous"]
["sit-acronis-autonomous"]
["sit-acronis-autonomous"]
An autonomous Roborace car drove itself into a wall in round one of the Season Beta 1.1 race.
Autonomous Roborace Car Drove Directly into a Wall
2,020
a wall
wall
bias, content, false
ObjectId(6342970863b61b7fa043f53e)
353
2019-03-01T00:00:00
[2073,2074,2195,2196]
["tesla"]
["tesla"]
["jeremy-banner","jeremy-banner's-family"]
A Tesla Model 3 driver switched on Autopilot seconds before the crash, in which the car drove into the underbelly of a tractor-trailer on a Florida highway, killing the driver.
Tesla on Autopilot Crashed into Trailer Truck in Florida, Killing Driver
2,019
the crash
driver
year, risk, crash
ObjectId(6357844d24cf9385ce69f63f)
374
2020-08-13T00:00:00
[2170,2206,2207,2208,2209,2210,2211,2212]
["uk-office-of-qualifications-and-examinations-regulation"]
["uk-office-of-qualifications-and-examinations-regulation"]
["a-level-pupils","gcse-pupils","pupils-in-state-schools","underprivileged-pupils"]
UK Office of Qualifications and Examinations Regulation (Ofqual)'s grade-standardization algorithm, which provided predicted grades for A level and GCSE qualifications in England, Wales, Northern Ireland, and Scotland, reportedly gave grades lower than teachers' assessments, disproportionately so for state schools.
UK Ofqual's Algorithm Disproportionately Provided Lower Grades Than Teachers' Assessments
2,020
A level
assessments
bias, content, false
ObjectId(636218ea6a07890c6a6ea2e6)
380
2014-03-04T00:00:00
[2181,2182,2258,2259,2260]
["facebook"]
["facebook"]
["jewish-people"]
Facebook's automated advertising categories generated using users' declared interests contained anti-Semitic categories such as "Jew hater" and "How to burn Jews" which were listed as fields of study.
Facebook's Auto-Generated Targeting Ad Categories Contained Anti-Semitic Options
2,014
anti-Semitic categories
jew hater
bias, content, false
ObjectId(63429b73fb9dbe61e441cf9e)
357
2019-02-14T00:00:00
[2086,2087,2088]
["openai"]
["openai"]
["openai","people-having-personal-data-in-gpt-2's-training-data"]
OpenAI's GPT-2 reportedly memorized and could regurgitate verbatim instances of training data, including personally identifiable information such as names, emails, Twitter handles, and phone numbers.
GPT-2 Able to Recite PII in Training Data
2,019
such
identifiable information
bias, content, false
ObjectId(63627328a7be79b265325ac3)
385
2022-10-04T00:00:00
[2224,2225,2231,2232,2233,2234]
["edmonton-police-service"]
["parabon-nanolabs"]
["black-residents-in-edmonton"]
The Edmonton Police Service (EPS) in Canada released a facial image of a Black male suspect generated by an algorithm using DNA phenotyping, which was denounced by the local community as racial profiling.
Canadian Police's Release of Suspect's AI-Generated Facial Photo Reportedly Reinforced Racial Profiling
2,022
racial profiling
racial profiling
bias, content, false
ObjectId(636b524b23e1c9d9beea48b5)
388
2018-12-01T00:00:00
[2235]
["the-government-in-bahia","bahia's-secretary-of-public-security"]
["huawei"]
["black-people-in-brazil","black-people-in-bahia"]
Facial recognition deployed in a pilot project by the local government of Bahia, despite a minimal hit rate, reportedly targeted Black and poor people disproportionately.
Facial Recognition Pilot in Bahia Reportedly Targeted Black and Poor People
2,018
minimal hit rate
poor people
bias, content, false
ObjectId(6362264aa7be79b2651b6a2d)
383
2022-10-04T00:00:00
[2220,2223]
["google-home"]
["google-home"]
["black-google-home-mini-users","google-home-mini-users"]
The Google Home Mini speaker was reported by users for announcing aloud the previously-censored n-word in a song title.
Google Home Mini Speaker Reportedly Read N-Word in Song Title Aloud
2,022
the previously-censored n
word
bias, content, false
ObjectId(636dfa2e1b6ec4ae9b2296e0)
390
2022-06-28T00:00:00
[2243]
["unknown"]
["unknown"]
["interviewers-of-remote-work-positions","employers-of-remote-work-positions"]
Voice and video deepfakes were reported by the FBI Internet Crime Complaint Center (IC3), based on complaint reports, to have been deployed during online interviews of candidates for remote-work positions.
Deepfakes Reportedly Deployed in Online Interviews for Remote Work Positions
2,022
complaint reports
voice
bias, content, false
ObjectId(636b4ae550d21acd7f9d55c6)
387
2014-12-22T00:00:00
[2229]
["oracle"]
["oracle"]
["internet-users"]
Oracle's automated system involving algorithmic data processing was alleged in a lawsuit to have been unlawfully collecting personal data from millions of people and violating their privacy rights.
Oracle's Algorithmic Data Processing System Alleged as Unlawful and Violating Privacy Rights
2,014
a lawsuit
lawsuit
bias, content, false
ObjectId(636e237be4c942942295944c)
394
2017-03-15T00:00:00
[2248,2251]
["youtube","twitch","tiktok","instagram"]
["youtube","twitch","tiktok","instagram"]
["youtube-content-creators","twitch-content-creators","tiktok-content-creators","instagram-content-creators"]
TikTok's, YouTube's, Instagram's, and Twitch's use of algorithms to flag certain words devoid of context changed content creators' use of everyday language or discussion of certain topics, for fear of their content getting mistakenly flagged or auto-demonetized.
Social Media's Automated Word-Flagging without Context Shifted Content Creators' Language Use
2,017
their content
tiktok
bias, content, false
ObjectId(6347c800f149ac829bebd5f0)
362
2021-07-20T00:00:00
[2129]
["facebook"]
["facebook"]
["wny-gardeners","gardening-facebook-groups","facebook-users-in-gardening-groups"]
Facebook's automated system flagged gardening groups' use of "hoe" and violent language against bugs as a violation by mistake.
Facebook's Automated Moderation Flagged Gardening Group's Language Use by Mistake
2,021
a violation
violation
bias, content, false
ObjectId(63622d23f8424658ecc55ba7)
384
2022-10-03T00:00:00
[2221,2222]
["glovo"]
["glovo"]
["sebastian-galassi","sebastian-galassi's-family"]
Delivery company Glovo's automated system sent an email terminating an employee for "non-compliance terms and conditions" after the employee was killed in a car accident while making a delivery on Glovo's behalf.
Glovo Driver in Italy Fired via Automated Email after Being Killed in Accident
2,022
non-compliance terms
delivery
bias, content, false
ObjectId(6347bf83b3a025aa31ca107d)
359
2021-05-23T00:00:00
[2097]
["facebook","instagram","twitter"]
["facebook","instagram","twitter"]
["palestinian-social-media-users","facebook-users","instagram-users","twitter-users","facebook-employees-having-families-affected-by-the-conflict"]
Facebook, Instagram, and Twitter wrongly blocked or restricted millions of pro-Palestinian posts and accounts related to the Israeli-Palestinian conflict, citing errors in their automated content moderation system.
Facebook, Instagram, and Twitter Cited Errors in Automated Systems as Cause for Blocking pro-Palestinian Content on Israeli-Palestinian Conflict
2,021
errors
facebook
bias, content, false
ObjectId(63729e404a5eff12b1d019b7)
396
2018-07-04T00:00:00
[2263]
["uber"]
["uber"]
["transgender-uber-drivers"]
Transgender Uber drivers reported being automatically deactivated from the app due to Real-Time ID Check failing to account for difference in appearance of people undergoing gender transitions.
Transgender Uber Drivers Mistakenly Kicked off App for Appearance Change during Gender Transitions
2,018
gender transitions
appearance
bias, content, false
ObjectId(6342917fb710d4e33cf416e9)
352
2022-09-15T00:00:00
[2070,2076,2093,2426]
["stephan-de-vries"]
["openai","stephan-de-vries"]
["stephan-de-vries"]
Remoteli.io's GPT-3-based Twitter bot was shown being hijacked by Twitter users who redirected it to repeat or generate any phrases.
GPT-3-Based Twitter Bot Hijacked Using Prompt Injection Attacks
2,022
any phrases
phrases
bias, content, false
ObjectId(635769bd84468db9632cd98c)
372
2022-07-22T00:00:00
[2168,2177,2178]
["google"]
["google"]
["google-pixel-6a-users"]
Google Pixel 6a's fingerprint recognition feature was reported by users for security issues, in which phones were mistakenly unlocked by unregistered fingerprints.
Users Reported Security Issues with Google Pixel 6a's Fingerprint Unlocking
2,022
security issues
security issues
bias, content, false
ObjectId(635794898e87db52ebfb01c5)
375
2019-09-29T00:00:00
[2171,2192,2193]
["krungthai-bank"]
["krungthai-bank"]
["thai-citizens","elder-thai-citizens"]
A Thai wallet app failed to recognize people’s faces, resulting in citizens, disproportionately elders, being unable to sign up for the Thai government’s cash handout and co-pay programs or having to wait in long queues at local ATMs for authentication.
Thai Wallet App's Facial Recognition Errors Created Registration Issues for Government Programs
2,019
people’s faces
authentication
bias, content, false
ObjectId(636218985a33233a22f6632e)
379
1992-05-25T00:00:00
[2179,2180]
["pepsi"]
["d.g.-consultores"]
["filipinos"]
Pepsi's number generation system, which determined daily winners in its Number Fever promotion in the Philippines, mistakenly produced a number held by thousands of people, which resulted in riots, deaths, conspiracy theories, and decades of lawsuits.
Error in Pepsi's Number Generation System Led to Decades-Long Damages in the Philippines
1,992
a number
riots
bias, content, false
ObjectId(6362229cf2f56bc79407bf96)
382
2017-11-21T00:00:00
[2219]
["instagram"]
["instagram"]
["molly-rose-russell","the-russell-family","teenage-girls","teenagers"]
Instagram was ruled by a judge to have contributed to the death of a teenage girl in the UK, allegedly through its exposure and recommendation of suicide, self-harm, and depressive content.
Instagram's Exposure of Harmful Content Contributed to Teenage Girl’s Suicide
2,017
the death
death
bias, content, false
ObjectId(63627e3fe6a5db6da1a32ac6)
386
2019-07-03T00:00:00
[2227,2228,2252]
["amazon"]
["amazon"]
["amazon-warehouse-workers"]
Amazon’s warehouse worker “time off task" (TOT) tracking system was used to discipline and dismiss workers, falsely assuming workers to have wasted time and failing to account for breaks or equipment issues.
Amazon’s "Time Off Task" System Made False Assumptions about Workers' Time Management
2,019
breaks or equipment issues
tot
bias, content, false
ObjectId(636b5e9802006dadfd7b75df)
389
2022-04-05T00:00:00
[2239,2240,2562]
["cruise"]
["cruise"]
["san-francisco-firefighters","san-francisco-fire-department"]
A fire truck in San Francisco responding to a fire was blocked from passing a double-parked garbage truck by a self-driving Cruise car in the opposing lane, which stayed put and did not reverse to clear the lane.
Cruise Autonomous Car Blocked Fire Truck Responding to Emergency
2,022
the opposing lane
lane
bias, content, false
ObjectId(637338f62f19ca2c3763fd78)
397
2022-09-11T00:00:00
[2264,2268]
["tiktok"]
["tiktok"]
["young-tiktok-users","tiktok-users","gen-z-tiktok-users"]
TikTok's search recommendations reportedly contained misinformation about political topics, bypassing both AI and human content moderation.
Misinformation Reported in TikTok's Search Results Despite Moderation by AI and Human
2,022
both AI
misinformation
bias, content, false
ObjectId(6373414f2f19ca2c37674860)
398
2022-08-15T00:00:00
[2265,2266,2267]
["tesla"]
["tesla"]
["tesla-drivers","horse-drawn-carriages"]
Tesla Autopilot's computer vision system was shown in a video mistaking a horse-drawn carriage for other forms of transport such as a truck, a car, and a human following a car.
Tesla Autopilot Misidentified On-Road Horse-Drawn Carriage
2,022
a car
carriage
bias, content, false
ObjectId(637b0fb0dc7613ede0f49fb0)
399
2022-11-15T00:00:00
[2270,2271,2272,2277]
["meta-ai","meta","facebook"]
["meta-ai","meta","facebook"]
["minority-groups","meta-ai","meta","facebook","minority-groups"]
Meta AI trained and hosted a scientific paper generator that sometimes produced bad science and prohibited queries on topics and groups that are likely to produce offensive or harmful content.
Meta AI's Scientific Paper Generator Reportedly Produced Inaccurate and Harmful Content
2,022
offensive or harmful content
harmful content
bias, content, false
ObjectId(637f9bfc34f2d7279c03dad8)
401
2021-06-03T00:00:00
[2275,2278,2279,2280]
["google"]
["google"]
["the-karnataka-government","kannada-speakers"]
Google's knowledge-graph-powered algorithm showed Kannada in its featured Answer Box when prompted "ugliest language in India," causing outrage from Kannada-speaking people and government.
Kannada Insulted by Google's Featured Answer as "Ugliest Language in India"
2,021
ugliest language
ugliest language
bias, content, false
ObjectId(637f9c0f34f2d7279c03dcd2)
402
2021-04-01T00:00:00
[2276]
["latitude"]
["openai","latitude"]
["latitude"]
Latitude's GPT-3-powered game AI Dungeon was reportedly abused by some players who manipulated its AI to generate sexually explicit stories involving children.
Players Manipulated GPT-3-Powered Game to Generate Sexually Explicit Material Involving Children
2,021
its AI
ai
bias, content, false
ObjectId(6347bfff3f17c3e2099ac5f1)
360
2021-10-15T00:00:00
[2100,2149,2218]
["mcdonald's"]
["mcd-tech-labs","apprente"]
["shannon-carpenter","mcdonald's-customers-residing-in-illinois","mcdonald's-customers"]
McDonald's use of a chatbot in its AI drive-thru in Chicago was alleged in a lawsuit to have collected and processed voice data without user consent to predict customer information, violating the Illinois Biometric Information Privacy Act (BIPA).
McDonald's AI Drive-Thru Allegedly Collected Biometric Customer Data without Consent, Violating BIPA
2,021
a lawsuit
lawsuit
bias, content, false
ObjectId(6347ca687d1ef715c9a6d78b)
366
2020-09-20T00:00:00
[2140]
["tiktok"]
["tiktok"]
["tiktok-users"]
Many clips showing a suicide evaded TikTok's automated content moderation system, allegedly in a coordinated attack, resulting in the exposure of violating content to its users.
Suicide Clips Evaded TikTok's Automated Moderation in Coordinated Attack
2,020
a suicide
suicide
bias, content, false
ObjectId(636e0987411dcebbcc9857ac)
392
2015-06-01T00:00:00
[2245,2249]
["facebook"]
["facebook"]
["facebook-users-speaking-east-african-languages","facebook-users-in-east-africa"]
Facebook's algorithmic content moderation system for East African languages reportedly failed to identify violating content on the platform, such as terrorist material, while mistakenly classifying non-terrorist content as violating.
Facebook's AI-Supported Moderation Failed to Classify Terrorist Content in East African Languages
2,015
the platform
content
bias, content, false
ObjectId(6386e622b48fdf02a11460a8)
407
2016-02-03T00:00:00
[2289]
["uber"]
["uber"]
["poor-neighborhoods","neighborhoods-of-color"]
Uber's surge-pricing algorithm, which adjusts prices to influence car availability, inadvertently offered better service, such as shorter wait times, to majority-white neighborhoods.
Uber's Surge Pricing Reportedly Offered Disproportionate Service Quality along Racial Lines
2,016
shorter wait times
pricing algorithm
bias, content, false
ObjectId(63806c7c19b54579646d7d3d)
404
2019-06-25T00:00:00
[2284,2286]
["rock-hill-schools","pinecrest-academy-horizon"]
["sound-intelligence"]
["students","rock-hill-school-students","pinecrest-academy-horizon-students"]
Sound Intelligence's "aggression detection" algorithm deployed by schools reportedly produced high rates of false positives, misclassifying laughing, coughing, cheering, and loud discussions as aggression.
Sound Intelligence's Aggression Detector Misidentified Innocuous Sounds
2,019
high rates
aggression detection
bias, content, false
ObjectId(6381c9b634f2d7279c4fd523)
406
2015-07-15T00:00:00
[2288]
["facebook"]
["facebook"]
["pseudonymized-psychiatrist's-patients","pseudonymized-psychiatrist","patients","healthcare-providers"]
Facebook's "People You May Know" (PYMK) feature was reported by a psychiatrist for recommending her patients to one another as friends, violating the patients' privacy and confidentiality.
Facebook's Friend Suggestion Feature Recommends Patients of Psychiatrist to Each Other
2,015
" (PYMK) feature
pymk
bias, content, false
ObjectId(6386e641c860d9983e13d10b)
408
2017-04-15T00:00:00
[2290]
["facebook"]
["facebook"]
["sex-workers-using-facebook"]
Facebook's "People You May Know" feature reportedly outed sex workers by recommending clients to their personal accounts or family members to their business accounts with no option to opt out.
Facebook Reportedly Outed Sex Workers through Friend Recommendations
2,017
no option
sex workers
bias, content, false
ObjectId(6386f266c79c035dcaa8a3c2)
409
2013-09-13T00:00:00
[2309,2411,2412]
["university-of-north-carolina-wilmington","karl-ricanek","gayathri-mahalingam"]
["university-of-north-carolina-wilmington","karl-ricanek","gayathri-mahalingam"]
["transgender-youtubers","transgender-people"]
YouTube videos of transgender people were used and distributed without permission by researchers studying facial recognition during gender transitions.
Facial Recognition Researchers Used YouTube Videos of Transgender People without Consent
2,013
facial recognition
permission
system, recognition, facial
ObjectId(6381b80fb48fdf02a16754bd)
405
2018-11-28T00:00:00
[2285,2287]
["schufa-holding-ag"]
["schufa-holding-ag"]
["young-men-having-credit-scores","people-scored-on-old-scoring-versions","people-changing-addresses-frequently"]
Schufa creditworthiness scores in Germany reportedly privileged older and female consumers and people who changed addresses less frequently, and were unreliable depending on the scoring version.
Schufa Credit Scoring in Germany Reported for Unreliable and Imbalanced Scores
2,018
scoring version
addresses
bias, content, false