Schema (one entry per column: name, type, observed min/max):
_id: string (length 34)
incident_id: int64 (1 to 524)
date: timestamp[ns]
reports: string (length 4 to 191)
Alleged deployer of AI system: string (length 7 to 214)
Alleged developer of AI system: string (length 7 to 127)
Alleged harmed or nearly harmed parties: string (length 8 to 371)
description: string (length 50 to 371)
title: string (length 6 to 170)
year: int64 (1.98k to 2.02k)
spacy_negative_outcomes: string (length 3 to 54)
keybert_negative_outcomes: string (length 2 to 41)
Cluster: string (5 classes)
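Each record below carries one value per column in the order of the schema above. As a minimal sketch of how a single record could be represented in code, the dataclass below mirrors the schema; the class name, field names, and list types are assumptions drawn from the column listing (not an official API), and the sample values are copied from incident 106, the first record in the dump.

```python
# A minimal sketch of one incident record, mirroring the schema above.
# The dataclass and its field names are assumptions, not an official API;
# the sample values are copied from incident 106 in the dump below.
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class IncidentRecord:
    _id: str                        # ObjectId string
    incident_id: int                # 1 to 524 in this dump
    date: datetime                  # stored as timestamp[ns] in the source
    reports: List[int]              # report IDs, serialized as a JSON-style list
    deployers: List[str]            # "Alleged deployer of AI system" slugs
    developers: List[str]           # "Alleged developer of AI system" slugs
    harmed_parties: List[str]       # "Alleged harmed or nearly harmed parties" slugs
    description: str
    title: str
    year: int
    spacy_negative_outcomes: str    # negative-outcome phrase (column name suggests spaCy extraction)
    keybert_negative_outcomes: str  # negative-outcome phrase (column name suggests KeyBERT extraction)
    cluster: str                    # one of 5 cluster labels

sample = IncidentRecord(
    _id="625763e7343edc875fe63a67",
    incident_id=106,
    date=datetime(2020, 12, 23),
    reports=[1409, 1416, 1417, 1418, 1419, 1420, 1421,
             1422, 1423, 1424, 1425, 2034, 2356],
    deployers=["facebook-messenger"],
    developers=["scatter-lab"],
    harmed_parties=["korean-facebook-messenger-users",
                    "korean-people-of-gender-minorities",
                    "korean-people-with-disabilities"],
    description="A Korean interactive chatbot was shown in screenshots to have "
                "used derogatory and bigoted language when asked about lesbians, "
                "Black people, and people with disabilities.",
    title="Korean Chatbot Luda Made Offensive Remarks towards Minority Groups",
    year=2020,
    spacy_negative_outcomes="derogatory and bigoted language",
    keybert_negative_outcomes="bigoted language",
    cluster="bias, content, false",
)
print(sample.incident_id, sample.title)
```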
ObjectId(625763e7343edc875fe63a67)
106
2020-12-23T00:00:00
[1409,1416,1417,1418,1419,1420,1421,1422,1423,1424,1425,2034,2356]
["facebook-messenger"]
["scatter-lab"]
["korean-facebook-messenger-users","korean-people-of-gender-minorities","korean-people-with-disabilities"]
A Korean interactive chatbot was shown in screenshots to have used derogatory and bigoted language when asked about lesbians, Black people, and people with disabilities.
Korean Chatbot Luda Made Offensive Remarks towards Minority Groups
2020
derogatory and bigoted language
bigoted language
bias, content, false
ObjectId(625763e7343edc875fe63a69)
108
2021-07-10T00:00:00
[1411,1503,1537]
["riverside-arena-skating-rink"]
["unknown"]
["lamya-robinson","black-livonia-residents"]
A Black teenager living in Livonia, Michigan was incorrectly stopped from entering a roller skating rink after its facial-recognition cameras misidentified her as another person who had been previously banned for starting a skirmish with other skaters.
Skating Rink’s Facial Recognition Cameras Misidentified Black Teenager as Banned Troublemaker
2021
another person
black teenager
bias, content, false
ObjectId(625763e8343edc875fe63a70)
115
2020-07-28T00:00:00
[1440,1472,2204]
["genderify"]
["genderify"]
["genderify-customers","gender-minority-groups"]
A company's AI predicting a person's gender based on their name, email address, or username was reported by its users to show biased and inaccurate results.
Genderify’s AI to Predict a Person’s Gender Revealed by Free API Users to Exhibit Bias
2020
biased and inaccurate results
inaccurate results
bias, content, false
ObjectId(625763e8343edc875fe63a6f)
114
2018-07-26T00:00:00
[1439]
["amazon"]
["amazon"]
["rekognition-users","arrested-people"]
Rekognition's face comparison feature was shown by the ACLU to have misidentified members of Congress, and particularly members of color, as arrested people, using a mugshot database built from publicly available arrest photos.
Amazon's Rekognition Falsely Matched Members of Congress to Mugshots
2018
misidentified members
rekognition
bias, content, false
ObjectId(625763e7343edc875fe63a68)
107
2018-07-20T00:00:00
[1410,1928]
["none"]
["huawei","megvii","sensetime","alibaba","baibu"]
["uyghur-people"]
Various Chinese firms were revealed by patent applications to have developed facial recognition capable of detecting people by race, which critics feared would enable persecution and discrimination of Uyghur Muslims.
Chinese Tech Firms Allegedly Developed Facial Recognition to Identify People by Race, Targeting Uyghur Muslims
2018
facial recognition
discrimination
system, recognition, facial
ObjectId(625763e8343edc875fe63a74)
119
2021-08-03T00:00:00
[1444,1800,1801,1802]
["xsolla"]
["unknown"]
["xsolla-employees"]
Xsolla's CEO fired more than a hundred employees from his company in Perm, Russia, based on big data analysis of their remote digitized-work activity, which critics said violated employees' privacy and was outdated and extremely ineffective.
Xsolla Employees Fired by CEO Allegedly via Big Data Analytics of Work Activities
2021
ineffective
company
bias, content, false
ObjectId(625763e7343edc875fe63a66)
105
2019-08-24T00:00:00
[1408]
["tesla"]
["tesla"]
["jovani-maldonado","benjamin-maldonado","california-public"]
A Tesla Model 3 on Autopilot mode crashed into a pickup on a California freeway, where data and video from the company showed neither Autopilot nor the driver slowing the vehicle until seconds before the crash.
Tesla Model 3 on Autopilot Crashed into a Ford Explorer Pickup, Killing a Fifteen-Year-Old in California
2019
the crash
crash
year, risk, crash
ObjectId(625763e6343edc875fe63a60)
99
2012-01-01T00:00:00
[1402]
["university-of-massachusetts-amherst","university-of-wisconsin-milwaukee","university-of-houston","texas-aandm-university","georgia-state-university","more-than-500-colleges"]
["eab"]
["black-college-students","latinx-college-students","indigenous-students"]
Several major universities are using a tool that includes race as one factor in predicting student success.
Major Universities Are Using Race as a “High Impact Predictor” of Student Success
2012
one factor
factor
bias, content, false
ObjectId(625763e8343edc875fe63a76)
121
2020-03-27T00:00:00
[2106,2105,2104,1447]
["tripoli-based-government"]
["stm"]
["libyan-soldiers"]
In Libya, a Turkish-made Kargu-2 aerial drone powered by a computer vision model was allegedly used remotely by forces backed by the Tripoli-based government to track down and attack enemies as they were running from rocket attacks.
Autonomous Kargu-2 Drone Allegedly Remotely Used to Hunt down Libyan Soldiers
2020
rocket attacks
tripoli
bias, content, false
ObjectId(625763e7343edc875fe63a6a)
109
2017-01-01T00:00:00
[1412]
["pimeyes"]
["pimeyes"]
["internet-users"]
PimEyes offered its subscription-based AI service to anyone in the public to search for matching facial images across the internet, which critics said lacked public oversight and government rules to prevent misuse such as the stalking of women.
PimEyes's Facial Recognition AI Allegedly Lacked Safeguards to Prevent Itself from Being Abused
2017
facial images
pimeyes
system, recognition, facial
ObjectId(625763e6343edc875fe63a62)
101
2018-09-01T00:00:00
[1404,1575,1863,2570,2805,2845]
["dutch-tax-authority"]
["unknown"]
["dutch-tax-authority","dutch-families"]
A childcare benefits system in the Netherlands falsely accused thousands of families of fraud, in part due to an algorithm that treated having a second nationality as a risk factor.
Dutch Families Wrongfully Accused of Tax Fraud Due to Discriminatory Algorithm
2018
a risk factor
fraud
bias, content, false
ObjectId(625763e7343edc875fe63a64)
103
2020-09-18T00:00:00
[1406,1527,1528,2145,2241]
["twitter"]
["twitter"]
["twitter-users","twitter-non-white-users","twitter-non-male-users"]
Twitter's photo cropping algorithm was revealed by researchers to favor white faces and women's faces in photos containing multiple faces, prompting the company to stop using it on its mobile platforms.
Twitter’s Image Cropping Tool Allegedly Showed Gender and Racial Bias
2020
its use
multiple faces
bias, content, false
ObjectId(625763e8343edc875fe63a6e)
113
2020-06-27T00:00:00
[1438]
["facebook"]
["facebook"]
["black-people","facebook-users"]
Facebook's AI mislabeled a video featuring Black men as a video about "primates," resulting in an offensive prompt message for users who watched the video.
Facebook's AI Put "Primates" Label on Video Featuring Black Men
2020
an offensive prompt message
offensive prompt message
bias, content, false
ObjectId(625763e9343edc875fe63a77)
122
2015-06-14T00:00:00
[1448]
["facebook"]
["facebook"]
["facebook-users"]
The initial version of Facebook's Tag Suggestions feature, which offered users suggestions about the identity of people's faces in photos, allegedly stored biometric data without consent, violating the Illinois Biometric Information Privacy Act.
Facebook’s "Tag Suggestions" Allegedly Stored Biometric Data without User Consent
2015
the identity
consent
bias, content, false
ObjectId(625763e7343edc875fe63a63)
102
2020-03-23T00:00:00
[1405,1523]
["microsoft","ibm","google","apple","amazon"]
["microsoft","ibm","google","apple","amazon"]
["black-people"]
A study found that voice recognition tools from Apple, Amazon, Google, IBM, and Microsoft disproportionately made errors when transcribing Black speakers.
Personal voice assistants struggle with black voices, new study shows
2020
black speakers
errors
bias, content, false
ObjectId(625763e8343edc875fe63a6d)
112
2012-10-09T00:00:00
[1434,1436,1432,1433,1810,2495,2496,1435,1821,2250,2623,2831]
["troy-police-department","syracuse-police-department","san-francisco-police-department","san-antonio-police-department","new-york-city-police-department","fall-river-police-department","chicago-police-department"]
["shotspotter"]
["troy-residents","syracuse-residents","san-francisco-residents","san-antonio-residents","new-york-city-residents","fall-river-residents","chicago-residents","troy-police-department","syracuse-police-department","san-francisco-police-department","san-antonio-police-department","new-york-city-police-department","fall-river-police-department","chicago-police-department"]
ShotSpotter's algorithmic gunshot-locating systems were reported by police departments to have high false-positive rates and to waste police resources, prompting discontinuation.
Police Departments Reported ShotSpotter as Unreliable and Wasteful
2012
high false positive rates
gunshots
bias, content, false
ObjectId(625763e6343edc875fe63a61)
100
2021-03-17T00:00:00
[1403]
["french-welfare-offices"]
["unknown"]
["lucie-inland"]
A French welfare office using software to automatically evaluate cases incorrectly notified a woman receiving benefits that she owed €542.
How French welfare services are creating ‘robo-debt’
2021
a woman
cases
bias, content, false
ObjectId(625763e8343edc875fe63a71)
116
2021-09-20T00:00:00
[1441,1803]
["amazon"]
["netradyne"]
["amazon-delivery-drivers","amazon-workers"]
Amazon's automated performance evaluation system involving AI-powered cameras incorrectly punished delivery drivers for non-existent mistakes, impacting their chances for bonuses and rewards.
Amazon's AI Cameras Incorrectly Penalized Delivery Drivers for Mistakes They Did Not Make
2021
non-existent mistakes
ai
bias, content, false
ObjectId(625763e8343edc875fe63a73)
118
2020-08-06T00:00:00
[1443,2009,2010]
["openai"]
["openai"]
["muslims"]
Users and researchers revealed that the generative AI GPT-3 associated Muslims with violence in prompts, resulting in disturbingly racist and explicit outputs such as casting a Muslim actor as a terrorist.
OpenAI's GPT-3 Associated Muslims with Violence
2020
disturbingly racist and explicit outputs
violence
bias, content, false
ObjectId(625763e8343edc875fe63a75)
120
2020-09-01T00:00:00
[1445]
["unknown"]
["murat-ayfer","openai"]
["reddit-users"]
Philosopher AI, a controversial GPT-3-powered text generator, was allegedly used by an anonymous actor on the AskReddit subreddit, whose posts featured a mixture of harmless stories, conspiracy theories, and sensitive topic discussions.
Philosopher AI Allegedly Used To Generate Mixture of Innocent and Harmful Reddit Posts
2020
a mixture
conspiracy theories
bias, content, false
ObjectId(625763e8343edc875fe63a72)
117
2020-02-24T00:00:00
[1442,2019,2020,2021]
["tiktok"]
["tiktok"]
["tiktok-users","tiktok-content-creators"]
TikTok's "Suggested Accounts" recommendations allegedly reinforced racial bias despite not basing recommendations on race or creators' profile photo.
TikTok's "Suggested Accounts" Algorithm Allegedly Reinforced Racial Bias through Feedback Loops
2020
racial bias
racial bias
bias, content, false
ObjectId(625763e9343edc875fe63a7e)
129
2021-03-01T00:00:00
[1462]
["facebook"]
["facebook"]
["facebook-users"]
Facebook's automated moderation tools were shown by internal documents to perform far worse than human moderators, accounting for only a small fraction of hate speech, violence, and incitement content removals.
Facebook's Automated Tools Failed to Adequately Remove Hate Speech, Violence, and Incitement
2021
hate speech
violence
bias, content, false
ObjectId(625763ea343edc875fe63a89)
140
2020-06-01T00:00:00
[1478]
["university-of-toronto"]
["proctoru"]
["university-of-toronto-bipoc-students"]
An exam monitoring service used by the University of Toronto was alleged by its students to have provided discriminatory check-in experiences when its facial recognition failed to verify passport photos, disproportionately increasing stress for BIPOC students.
ProctorU’s Identity Verification and Exam Monitoring Systems Provided Allegedly Discriminatory Experiences for BIPOC Students
2020
its facial recognition's failure
discriminatory check
system, recognition, facial
ObjectId(625763eb343edc875fe63a8f)
146
2021-10-22T00:00:00
[1494,1495,1502]
["allen-institute-for-ai"]
["allen-institute-for-ai"]
["minority-groups"]
A publicly accessible research model that was trained via Reddit threads showed racially biased advice on moral dilemmas, allegedly demonstrating limitations of language-based models trained on moral judgments.
Research Prototype AI, Delphi, Reportedly Gave Racially Biased Answers on Ethics
2021
racially biased advice
moral dilemmas
bias, content, false
ObjectId(625763ea343edc875fe63a86)
137
2021-01-11T00:00:00
[1474]
["israeli-tax-authority"]
["israeli-tax-authority"]
["moshe-har-shemesh","israeli-people-having-tax-fines"]
An Israeli farmer was issued a computer-generated fine by the tax authority, which allegedly could not explain the fine's calculation and refused to disclose the program and its source code.
Israeli Tax Authority Employed Opaque Algorithm to Impose Fines, Reportedly Refusing to Provide an Explanation for Amount Calculation to a Farmer
2021
its calculation
program
bias, content, false
ObjectId(625763eb343edc875fe63a8d)
144
2020-06-28T00:00:00
[1483,1979,1980,2042,2043,2134]
["youtube"]
["youtube"]
["antonio-radic","youtube-chess-content-creators","youtube-users"]
YouTube's AI-powered hate speech detection system falsely flagged chess content and banned chess creators allegedly due to its misinterpretation of strategy language such as "black," "white," and "attack" as harmful and dangerous.
YouTube's AI Mistakenly Banned Chess Channel over Chess Language Misinterpretation
2020
its misinterpretation
hate speech detection system
bias, content, false
ObjectId(625763eb343edc875fe63a91)
148
2021-11-21T00:00:00
[1499]
["accessibe","accessus.ai","allyable","userway","maxaccess.io"]
["accessibe","accessus.ai","allyable","userway","maxaccess.io"]
["internet-users-with-disabilities","web-accessibility-vendors'-customers"]
AI-powered web accessibility vendors allegedly overstated to customers their products' utility for people with disabilities, falsely claiming to deliver automated compliance solutions.
Web Accessibility Vendors Allegedly Falsely Claimed to Provide Compliance Using AI
2021
automated compliance solutions
disabilities
bias, content, false
ObjectId(625763ec343edc875fe63a9d)
160
2021-12-26T00:00:00
[1520,1521,2381]
["amazon"]
["amazon"]
["kristin-livdahl's-daughter","amazon-echo-customers","children-using-alexa"]
Amazon’s voice assistant Alexa suggested “the penny challenge,” which involves dangerously touching a coin to the prongs of a half-exposed plug, when a ten-year-old girl asked for a challenge to do.
Alexa Recommended Dangerous TikTok Challenge to Ten-Year-Old Girl
2021
a challenge
challenge
bias, content, false
ObjectId(625763ec343edc875fe63a95)
152
2021-07-13T00:00:00
[1509,1510]
["softbank"]
["aldebaran","softbank-robotics"]
["softbank"]
SoftBank's robot allegedly kept making mechanical errors, taking unplanned breaks, failing to recognize previously-met people, and breaking down during practice runs.
SoftBank's Humanoid Robot, Pepper, Reportedly Frequently Made Errors, Prompting Dismissal
2021
mechanical errors
mechanical errors
bias, content, false
ObjectId(625763ed343edc875fe63a9e)
161
2019-04-03T00:00:00
[1530,2138,2139]
["facebook"]
["facebook"]
["female-facebook-users","black-facebook-users","male-facebook-users"]
Facebook's housing and employment ad delivery process allegedly resulted in skews in exposure for some users along demographic lines such as gender and racial identity.
Facebook's Ad Delivery Reportedly Excluded Audience along Racial and Gender Lines
2019
racial identity
skews
bias, content, false
ObjectId(625763ed343edc875fe63aa1)
164
2018-10-01T00:00:00
[1534]
["facebook"]
["facebook"]
["facebook-users","facebook-content-creators"]
After the “News Feed” algorithm had been overhauled to boost engagement between friends and family in early 2018, its heavy weighting of re-shared content was allegedly found by company researchers to have pushed content creators to reorient their posts towards outrage and sensationalism, causing a proliferation of misinformation, toxicity, and violent content.
Facebook "News Feed" Allegedly Boosted Misinformation and Violating Content Following Use of MSI Metric
2018
violent content
outrage
bias, content, false
ObjectId(625763ed343edc875fe63aa5)
168
2022-03-01T00:00:00
[1540,1541]
["facebook","linkedin","youtube","twitter","netflix"]
["facebook","linkedin","youtube","twitter","netflix"]
["facebook-users","linkedin-users","youtube-users","twitter-users","netflix-users"]
Collaborative filtering is prone to popularity bias, resulting in the overrepresentation of popular items in recommendation outputs.
Collaborative Filtering Prone to Popularity Bias, Resulting in Overrepresentation of Popular Items in the Recommendation Outputs
2022
popularity bias
popularity bias
bias, content, false
ObjectId(625763ea343edc875fe63a82)
133
2020-12-15T00:00:00
[1468]
["tiktok"]
["tiktok"]
["tiktok-content-creators-of-marginalized-groups"]
TikTok's automated content reporting system was allegedly abused by online trolls to intentionally misreport content created by users of marginalized groups.
Online Trolls Allegedly Abused TikTok’s Automated Content Reporting System to Discriminate against Marginalized Creators
2020
marginalized groups
misreport content
bias, content, false
ObjectId(625763e9343edc875fe63a80)
131
2020-12-04T00:00:00
[1465,1771]
["california-bar's-committee-of-bar-examiners"]
["examsoft"]
["california-bar-exam-takers","flagged-california-bar-exam-takers"]
The proctoring algorithm used in a California bar exam flagged roughly a third of the thousands of applicants as cheaters, and exam takers were reportedly instructed to prove their innocence without being allowed to see the incriminating video evidence.
Proctoring Algorithm in Online California Bar Exam Flagged an Unusually High Number of Alleged Cheaters
2020
a third
cheaters
bias, content, false
ObjectId(625763ec343edc875fe63a96)
153
2019-12-29T00:00:00
[1511,1729,1763,2514]
["tesla"]
["tesla"]
["gilberto-alcazar-lopez","maria-guadalupe-nieves-lopez"]
In 2019, a Tesla Model S driver on Autopilot mode reportedly went through a red light and crashed into a Honda Civic, killing two people in Gardena, Los Angeles.
Tesla Driver on Autopilot Ran a Red Light, Crashing into a Car and Killing Two People in Los Angeles
2019
two people
people
bias, content, false
ObjectId(625763eb343edc875fe63a8e)
145
2021-07-23T00:00:00
[1485,1504,1529]
["tesla"]
["tesla"]
["tesla-drivers"]
Tesla's Autopilot was shown on video by a Tesla owner to have mistaken the moon for a yellow stop light, allegedly causing the vehicle to keep slowing down.
Tesla's Autopilot Misidentified the Moon as Yellow Stop Light
2021
the vehicle
yellow stop
bias, content, false
ObjectId(625763ea343edc875fe63a8a)
141
2021-02-05T00:00:00
[1479,1480]
["instagram"]
["instagram"]
["sennett-devermont","beverly-hills-citizens"]
A police officer in Beverly Hills played copyrighted music on his phone upon realizing that his interactions were being recorded on a livestream, allegedly hoping that Instagram's automated copyright detection system would end or mute the stream.
California Police Turned on Music to Allegedly Trigger Instagram’s DMCA to Avoid Being Live-Streamed
2021
his interactions
music
bias, content, false
ObjectId(625763ea343edc875fe63a83)
134
2020-12-25T00:00:00
[1469,1951]
["fuzhou-zhongfang-marlboro-mall"]
["unknown"]
["fuzhou-zhongfang-marlboro-mall-goers"]
A shopping guide robot deployed by the Fuzhou Zhongfang Marlboro Mall was shown on video allegedly walking to the escalator by itself, falling down, and knocking over passengers, which prompted its suspension.
Robot in Chinese Shopping Mall Fell off the Escalator, Knocking down Passengers
2020
its suspension
suspension
bias, content, false
ObjectId(625763e9343edc875fe63a7a)
125
2020-09-29T00:00:00
[1451,1452,1460]
["amazon"]
["amazon"]
["amazon-fulfillment-center-workers"]
Amazon’s robotic fulfillment centers were reported to have higher serious injury rates than its non-robotic facilities.
Amazon’s Robotic Fulfillment Centers Have Higher Serious Injury Rates
2020
higher serious injury rates
higher serious injury rates
bias, content, false
ObjectId(625763ea343edc875fe63a85)
136
2020-12-06T00:00:00
[1471]
["brand-safety-tech-firms"]
["none"]
["news-sites"]
Brand safety tech firms falsely claimed use of AI, instead blocking ads on news sites using simple keyword lists.
Brand Safety Tech Firms Falsely Claimed Use of AI, Blocking Ads Using Simple Keyword Lists
2020
blocking
use
bias, content, false
ObjectId(625763ea343edc875fe63a87)
138
2020-01-21T00:00:00
[1475,1505,1555,1556,2442,2434]
["university-of-illinois"]
["proctorio"]
["university-of-illinois-students-of-color","university-of-illinois-students"]
Proctorio's remote-testing software was reported by students at the University of Illinois Urbana-Champaign for issues regarding privacy, accessibility, and differential performance for darker-skinned students.
Proctorio's Alleged Privacy, Accessibility, and Discrimination Issues Prompted Suspension by University of Illinois
2020
differential performance
proctorio
bias, content, false
ObjectId(625763ea343edc875fe63a81)
132
2020-12-27T00:00:00
[1466]
["tiktok"]
["tiktok"]
["tiktok-users","tiktok-users-under-18-years-old"]
Videos promoting eating disorders evaded TikTok's automated violation detection system without difficulty via common misspellings of search terms, bypassing its ban of violating hashtags such as "proana" and "anorexia".
TikTok’s Content Moderation Allegedly Failed to Adequately Take down Videos Promoting Eating Disorders
2020
its ban
ban
bias, content, false
ObjectId(625763ea343edc875fe63a84)
135
2012-12-01T00:00:00
[1470,1871]
["university-of-texas-at-austin's-department-of-computer-science"]
["university-of-texas-at-austin-researchers"]
["university-of-texas-at-austin-phd-applicants-of-marginalized-groups"]
"GRADE," the assistive algorithm used by the University of Texas at Austin's Department of Computer Science to assess PhD applicants, raised concerns among faculty about worsening historical inequalities for marginalized candidates, prompting its suspension.
UT Austin GRADE Algorithm Allegedly Reinforced Historical Inequalities
2012
worsening historical inequalities
suspension
bias, content, false
ObjectId(625763ec343edc875fe63a98)
155
2021-12-27T00:00:00
[1513,1514]
["google-maps"]
["google-maps"]
["google-maps-users-traveling-in-sierra-nevada","google-maps-users-traveling-in-the-mountains"]
Lake Tahoe travelers were allegedly guided by Google Maps into hazardous shortcuts in the mountains during a snowstorm.
Google Maps Allegedly Directed Sierra Nevada Travelers to Dangerous Roads amid Winter Storm
2021
hazardous shortcuts
hazardous shortcuts
bias, content, false
ObjectId(625763ed343edc875fe63a9f)
162
2014-01-01T00:00:00
[1531]
["ets"]
["ets"]
["uk-ets-past-test-takers","uk-ets-test-takers","uk-home-office"]
International testing organization ETS admitted voice recognition results as evidence of cheating for thousands of previous TOEIC test-takers, reportedly including wrongfully accused people, causing them to be deported from the UK without an appeal process or access to the incriminating evidence.
ETS Used Allegedly Flawed Voice Recognition Evidence to Accuse and Assess Scale of Cheating, Causing Thousands to be Deported from the UK
2014
an appeal process
appeal process
bias, content, false
ObjectId(625763ee343edc875fe63aa8)
171
2021-10-18T00:00:00
[1549]
["bath-government"]
["unknown"]
["paula-knight","bath-officials","uk-public"]
A Bath resident was wrongly fined by the local officials because an automated license plate recognition camera misread the text on her shirt as a license plate number.
Traffic Camera Misread Text on Pedestrian's Shirt as License Plate, Causing UK Officials to Issue Fine to an Unrelated Person
2021
the text
text
bias, content, false
ObjectId(625763ec343edc875fe63a94)
151
2021-10-28T00:00:00
[1507,1508,1703,1704,1705,1706,1707,1708,1709,1710]
["pony.ai"]
["pony.ai"]
["san-francisco-city-government"]
A Pony.ai vehicle operating in autonomous mode crashed into a center divider and a traffic sign in San Francisco, prompting a regulator to suspend the driverless testing permit for the startup.
California Regulator Suspended Pony.ai's Driverless Testing Permit Following a Non-Fatal Collision
2021
a center divider
center divider
bias, content, false
ObjectId(625763e9343edc875fe63a78)
123
2021-08-01T00:00:00
[1449,2651,2705]
["university-of-michigan-hospital"]
["epic-systems"]
["sepsis-patients"]
Epic Systems's sepsis prediction algorithm was shown by investigators at the University of Michigan Hospital to have high rates of false positives and false negatives, allegedly delivering inaccurate and irrelevant information on patients that contrasted sharply with the company's published claims.
Epic Systems’s Sepsis Prediction Algorithms Revealed to Have High Error Rates on Seriously Ill Patients
2021
inaccurate and irrelevant information
false negatives
bias, content, false
ObjectId(625763ec343edc875fe63a99)
156
2022-02-04T00:00:00
[1515,2197]
["amazon"]
["amazon"]
["people-attempting-suicides"]
Despite complaints notifying Amazon about the sale of various products that had been used to aid suicide attempts, its recommendation system reportedly continued selling them and suggesting their frequently bought-together items.
Amazon Reportedly Sold Products and Recommended Frequently Bought Together Items That Aid Suicide Attempts
2022
suicide attempts
suicide attempts
bias, content, false
ObjectId(625763eb343edc875fe63a8c)
143
2021-02-16T00:00:00
[1482]
["facebook","twitter"]
["facebook","twitter"]
["facebook-users-of-small-language-groups","twitter-users-of-small-language-groups"]
Facebook and Twitter were unable to sufficiently moderate content in small language groups such as the Balkan languages using AI, allegedly due to a lack of investment in human moderation and the difficulty of designing AI solutions for those languages.
Facebook’s and Twitter's Automated Content Moderation Reportedly Failed to Effectively Enforce Violation Rules for Small Language Groups
2021
the lack
ai
bias, content, false
ObjectId(625763ec343edc875fe63a9a)
157
2021-03-15T00:00:00
[1516]
["amazon"]
["amazon"]
["ans-rana","amazon-workers","amazon-delivery-drivers"]
A lawsuit cited Amazon as liable in a crash involving its delivery driver, alleging that Amazon’s AI-powered driver monitoring system pushed drivers to prioritize speed over safety.
Amazon's Monitoring System Allegedly Pushed Delivery Drivers to Prioritize Speed over Safety, Leading to Crash
2021
a crash
crash
year, risk, crash
ObjectId(625763ee343edc875fe63aa7)
170
2003-06-01T00:00:00
[1546,1547,1548]
["target"]
["target"]
["target-customers"]
Target recommended maternity-related items to a family in Atlanta via ads, allegedly predicting their teenage daughter’s pregnancy before her father did, although critics have called into question the predictability of the algorithm and the authenticity of its claims.
Target Suggested Maternity-Related Advertisements to a Teenage Girl's Home, Allegedly Correctly Predicting Her Pregnancy via Algorithm
2003
its claims
target
bias, content, false
ObjectId(625763ea343edc875fe63a88)
139
2021-01-21T00:00:00
[1476,1477]
["amazon"]
["amazon"]
["amazon-customers"]
Evidence of the "filter-bubble effect" was found by vaccine-misinformation researchers in Amazon's recommendations, where its algorithms presented users who interacted with misinformative products with more misinformative products.
Amazon’s Search and Recommendation Algorithms Found by Auditors to Have Boosted Products That Contained Vaccine Misinformation
2021
misinformative products
misinformative products
bias, content, false
ObjectId(625763eb343edc875fe63a8b)
142
2021-02-11T00:00:00
[1481]
["facebook","instagram"]
["facebook","instagram"]
["facebook-users-of-disabilities","adaptive-fashion-retailers"]
Facebook platforms' automated ad moderation system falsely classified adaptive fashion products as medical and health care products and services, resulting in regular bans and appeals faced by their retailers.
Facebook’s Advertisement Moderation System Routinely Misidentified Adaptive Fashion Products as Medical Equipment and Blocked Their Sellers
2021
regular bans
facebook platforms
bias, content, false
ObjectId(625763eb343edc875fe63a93)
150
2018-07-21T00:00:00
[1506]
["natural-cycles"]
["natural-cycles"]
["natural-cycles-users","women"]
Some women using the contraceptive app, Natural Cycles, reported unwanted pregnancies, revealing its algorithm's difficulties in mapping menstrual cycles.
Swedish Contraceptive App, Natural Cycles, Allegedly Failed to Correctly Map Menstrual Cycle
2018
unwanted pregnancies
unwanted pregnancies
bias, content, false
ObjectId(625763ed343edc875fe63aa6)
169
2018-08-15T00:00:00
[1544,1545]
["facebook","meta"]
["facebook","meta"]
["rohingya-people","rohingya-facebook-users","myanmar-public","facebook-users-in-myanmar","burmese-speaking-facebook-users"]
Facebook allegedly did not adequately remove anti-Rohingya hate speech, some of which was extremely violent and dehumanizing, on its platform, contributing to the violence faced by Rohingya communities in Myanmar.
Facebook Allegedly Failed to Police Anti-Rohingya Hate Speech Content That Contributed to Violence in Myanmar
2018
the violence
violence
bias, content, false
ObjectId(625763eb343edc875fe63a90)
147
2020-01-01T00:00:00
[1496,1497]
["scammers"]
["unknown"]
["hong-kong-bank-manager"]
In early 2020, fraudsters reportedly deepfaked the voice of a company director to demand that a bank manager in Hong Kong authorize a $35M transfer.
Hong Kong Bank Manager Swindled by Fraudsters Using Deepfaked Voice of Company Director
2020
the voice
voice
year, own, voice
ObjectId(625763e9343edc875fe63a7b)
126
2021-07-16T00:00:00
[1453,1454,1455,1532]
["ocado"]
["ocado"]
["ocado"]
A collision involving three robots at an Ocado warehouse in Erith, UK, resulted in a fire but no reported injuries.
Three Robots Collided, Sparking Fire in a Grocer's Warehouse in UK
2021
A collision
collision
bias, content, false
ObjectId(625763e9343edc875fe63a7d)
128
2017-08-01T00:00:00
[1459,1818]
["tesla"]
["tesla"]
["eric-horvitz","tesla-drivers"]
A Tesla Sedan operating on Autopilot mode was not able to center itself on the road and drove over a yellow dividing curb in Redmond, Washington, causing minor damage to the vehicle’s rear suspension.
Tesla Sedan on Autopilot Reportedly Drove Over Dividing Curb in Washington, Resulting in Minor Vehicle Damage
2017
minor damage
road
bias, content, false
ObjectId(625763ec343edc875fe63a97)
154
2022-01-26T00:00:00
[1512]
["us-department-of-justice"]
["us-department-of-justice"]
["inmates-of-color"]
Department of Justice’s inmate-recidivism risk assessment tool was reported to have produced racially uneven results, misclassifying risk levels for inmates of color.
Justice Department’s Recidivism Risk Algorithm PATTERN Allegedly Caused Persistent Disparities Along Racial Lines
2022
racially uneven results
uneven results
bias, content, false
ObjectId(625763ed343edc875fe63aa2)
165
2020-06-20T00:00:00
[1536,2781]
["duke-researchers"]
["duke-researchers"]
["people-having-non-caucasian-facial-features"]
Image upscaling tool PULSE powered by NVIDIA's StyleGAN reportedly generated faces with Caucasian features more often, although AI academics, engineers, and researchers were not in agreement about where the source of bias was.
Image Upscaling Algorithm PULSE Allegedly Produced Facial Images with Caucasian Features More Often
2020
the source
bias
bias, content, false
ObjectId(625763ec343edc875fe63a9b)
158
2021-02-01T00:00:00
[1517]
["unknown"]
["unknown"]
["amaya-ross","black-students","black-test-takers"]
A Black student's face was not recognized by remote-proctoring software during check-in for a lab quiz, forcing her to repeatedly change her environment to get the system to work as intended.
Facial Recognition in Remote Learning Software Reportedly Failed to Recognize a Black Student’s Face
2021
her environments
check
bias, content, false
ObjectId(625763eb343edc875fe63a92)
149
2021-11-02T00:00:00
[1500,1501,1890]
["zillow"]
["zillow-offers"]
["zillow-offers-staff","zillow"]
Zillow's AI-powered predictive pricing tool Zestimate was allegedly unable to accurately forecast housing prices three to six months in advance due to rapid market changes, prompting the shutdown of the Zillow Offers division and the layoff of a few thousand employees.
Zillow Shut Down Zillow Offers Division Allegedly Due to Predictive Pricing Tool's Insufficient Accuracy
2021
division shutdown
ai
bias, content, false
ObjectId(625763e9343edc875fe63a79)
124
2019-10-24T00:00:00
[1450,1522,2262,2652,2651,2704,2856]
["unnamed-large-academic-hospital"]
["optum"]
["black-patients"]
Optum's algorithm deployed by a large academic hospital was revealed by researchers to have under-predicted the health needs of Black patients, effectively de-prioritizing them in extra care programs relative to white patients with the same health burden.
Algorithmic Health Risk Scores Underestimated Black Patients’ Needs
2019
Optum's algorithm
optum
bias, content, false
ObjectId(625763ec343edc875fe63a9c)
159
2019-03-29T00:00:00
[2471]
["tesla"]
["tesla"]
["tesla-drivers"]
Tencent Keen Security Lab conducted security research into Tesla’s Autopilot system and identified crafted adversarial samples and remote controlling via wireless gamepad as vulnerabilities to its system, although the company called into question their real-world practicality. This incident has been downgraded to an issue as it does not meet current ingestion criteria.
Tesla Autopilot’s Lane Recognition Allegedly Vulnerable to Adversarial Attacks
2019
an issue
vulnerabilities
bias, content, false
ObjectId(625763ed343edc875fe63aa4)
167
2017-09-07T00:00:00
[1539]
["michal-kosinski","yilun-wang"]
["michal-kosinski","yilun-wang"]
["lgbtq-people","lgbtq-people-of-color","non-american-lgbtq-people"]
Researchers at Stanford Graduate School of Business developed a model that determined, on a binary scale, whether someone was homosexual using only a facial image, which advocacy groups such as GLAAD and the Human Rights Campaign denounced as flawed science and threatening to LGBTQ people.
Researchers' Homosexual-Men Detection Model Denounced as a Threat to LGBTQ People’s Safety and Privacy
2017
a binary scale
flawed science
bias, content, false
ObjectId(625763e9343edc875fe63a7c)
127
2020-06-06T00:00:00
[1456,1457,1458,1461,1486,1487,1488,1489,1490,1491,1492,1493]
["microsoft","msn.com"]
["microsoft"]
["jade-thirlwall","leigh-anne-pinnock"]
A news story published on MSN.com featured a photo of the wrong mixed-race person that was allegedly selected by an algorithm, following Microsoft’s layoff and replacement of journalists and editorial workers at its organizations with AI systems.
Microsoft’s Algorithm Allegedly Selected Photo of the Wrong Mixed-Race Person Featured in a News Story
2020
the wrong mixed-race person
race person
bias, content, false
ObjectId(625763ed343edc875fe63aa0)
163
2021-11-21T00:00:00
[1533,1535,1652]
["facebook"]
["facebook"]
["facebook-users-of-minority-groups","facebook-users"]
Facebook’s hate-speech detection algorithms were found by company researchers to have under-reported less common but more harmful content that was more often experienced by minority groups such as Black, Muslim, LGBTQ, and Jewish users.
Facebook’s Hate Speech Detection Algorithms Allegedly Disproportionately Failed to Remove Racist Content towards Minority Groups
2021
less common but more harmful content
harmful content
bias, content, false
ObjectId(625763ed343edc875fe63aa3)
166
2020-02-07T00:00:00
[1538,1563]
["giggle"]
["kairos"]
["trans-women","women-of-color"]
A social networking platform, Giggle, allegedly collected, shared with third parties, and used sensitive information and biometric data to verify via facial recognition whether a person is a woman, which critics claimed was discriminatory against women of color and harmful towards trans women.
Networking Platform Giggle Employs AI to Determine Users’ Gender, Allegedly Excluding Transgender Women
2020
sensitive information
sensitive information
bias, content, false
ObjectId(6259b3d5c2337187617c53c3)
176
2022-03-02T00:00:00
[1557]
["oregon-state-university"]
["starship-technologies"]
["oregon-state-university","freight-train-crew"]
A Starship food delivery robot deployed by Oregon State University reportedly failed to cross a railroad crossing, became stranded, and was struck by an oncoming freight train.
Starship’s Autonomous Food Delivery Robot Allegedly Stranded at Railroad Crossing in Oregon, Run over by Freight Train
2022
the railroad
railroad
bias, content, false
ObjectId(62842ee176e12cf335550ab1)
182
2018-06-11T00:00:00
[1573,1574]
["cruise"]
["cruise"]
["cruise-vehicles","cruise-driver-employee"]
In San Francisco, an autonomous Cruise Chevrolet Bolt collided with another Cruise vehicle driven by a Cruise human employee, causing minor scuffs to the cars but no human injuries.
Two Cruise Autonomous Vehicles Collided with Each Other in California
2018
minor scuffs
human injuries
bias, content, false
ObjectId(625763ee343edc875fe63aaa)
173
2021-07-30T00:00:00
[1551]
["unknown"]
["unknown"]
["doctors","covid-patients"]
AI tools developed to predict outcomes for COVID patients reportedly failed to perform sufficiently well, and some were potentially harmful.
AI Tools Failed to Sufficiently Predict COVID Patients, Some Potentially Harmful
2021
failed
covid patients
bias, content, false
ObjectId(6297936efc298401e1ba35c0)
203
2022-02-10T00:00:00
[1659,1660,1661]
["uber"]
["uber"]
["uber-drivers"]
Uber launched a new but opaque algorithm to determine drivers' pay in the US which allegedly caused drivers to experience lower fares, confusing fare drops, and a decrease in rides.
Uber Launched Opaque Algorithm That Changes Drivers' Payments in the US
2022
a decrease
confusing fare drops
bias, content, false
ObjectId(62849d9dcb05238c61a5cc65)
184
2018-04-12T00:00:00
[1581,1584,1899]
["companhia-do-metropolitano-de-sao-paulo"]
["securos"]
["sao-paulo-metro-users","sao-paulo-citizens"]
A facial recognition program rolled out by São Paulo Metro Stations was suspended following a court ruling in response to a lawsuit by civil society organizations, who cited fear of it being integrated with other electronic surveillance entities without consent, and lack of transparency about the biometric data collection process of metro users.
Facial Recognition Program in São Paulo Metro Stations Suspended for Illegal and Disproportionate Violation of Citizens’ Right to Privacy
2018
a lawsuit
lawsuit
bias, content, false
ObjectId(628ad91f7da5b905fb4444b8)
195
2015-09-01T00:00:00
[1622,1623,1624,1625,1626,1627,1628,1629,1630,1631,1632,1843]
["pasco-sheriff's-office"]
["unknown"]
["pasco-residents","pasco-black-students","pasco-students-with-disabilities"]
The Intelligence-Led Policing model rolled out by the Pasco County Sheriff’s Office was allegedly developed based on flawed science and biased data that also contained sensitive information and irrelevant attributes about students, which critics said was discriminatory.
Predictive Policing Program by Florida Sheriff’s Office Allegedly Violated Residents’ Rights and Targeted Children of Vulnerable Groups
2015
biased data
biased data
bias, content, false
ObjectId(629dce346e8239f700dfecbf)
213
2020-07-01T00:00:00
[1715,1716,1717,1718,1719]
["facebook"]
["facebook"]
["facebook-users"]
The performance of Facebook’s political ad detection was revealed by researchers to be imprecise, uneven across countries in errors, and inadequate for preventing systematic violations of political advertising policies.
Facebook’s Political Ad Detection Reportedly Showed High and Geographically Uneven Error Rates
2020
systematic violations
systematic violations
bias, content, false
ObjectId(625763ee343edc875fe63aab)
174
2022-02-28T00:00:00
[1552,1585,1595,1599]
["unknown"]
["unknown"]
["linkedin-users"]
Researchers at Stanford notified LinkedIn’s staff of more than a thousand inauthentic LinkedIn profiles using allegedly GAN-generated photos, many of which were removed for violating rules against creating fake profiles and falsifying information.
Fake LinkedIn Profiles Created Using GAN Photos
2022
fake profiles
fake profiles
bias, content, false
ObjectId(628af245a8f82bdc4c020cc2)
198
2022-03-16T00:00:00
[1642,1643,1644,1645,1646]
["hackers"]
["unknown"]
["ukrainian-social-media-users","ukrainian-public","volodymyr-zelenskyy"]
A quickly-debunked deepfaked video of the Ukrainian President Volodymyr Zelenskyy was posted on various Ukrainian websites and social media platforms encouraging Ukrainians to surrender to Russian forces during the Russia-Ukraine war.
Deepfake Video of Ukrainian President Yielding to Russia Posted on Ukrainian Websites and Social Media
2022
the Russia-Ukraine war
ukraine war
bias, content, false
ObjectId(625763ee343edc875fe63aac)
175
2022-04-01T00:00:00
[1553,1554,1606,1607,1608]
["cruise"]
["cruise"]
["san-francisco-public","cruise-customers"]
An autonomous Chevy Bolt operated by Cruise was pulled over in San Francisco, and as the police attempted to engage with the car, it reportedly bolted off, pulled over again, and turned on its hazard lights at a point farther down the road.
Cruise Autonomous Taxi Allegedly Bolted off from Police After Being Pulled over in San Francisco
2022
its hazards
hazards lights
bias, content, false
ObjectId(6269ca6f01cc3d7da1e059ad)
178
2022-04-21T00:00:00
[1560,1565,1566,1567,1568,1569,1570,1594]
["tesla"]
["tesla"]
["tesla-owner","vision-jet-owner"]
A Tesla Model Y was shown on video slowly crashing into a Vision Jet in Spokane, Washington, allegedly due to its owner activating the “Smart Summon” feature.
Tesla Owner Activated "Smart Summon" Feature, Causing a Collision with an Aircraft in a Washington Airport
2022
a Vision Jet
feature
bias, content, false
ObjectId(62842764c4ac5276446aed58)
180
2020-02-19T00:00:00
[1564,1582,2236]
["malaysian-judiciary","malaysian-courts"]
["sarawak-information-systems"]
["malaysian-convicted-people"]
The AI system used by the Malaysian judiciary, which explicitly considered age, employment, and socio-economic data, recommended a sentence in a drug possession case that a lawyer alleged was disproportionately high for the crime committed.
Algorithm Used by the Malaysian Judiciary Reportedly Recommended Unusually High Sentencing to a Drug Possession Case
2020
the crime
crime
bias, content, false
ObjectId(628498c9ba5ecc08807ab7d9)
183
2017-07-01T00:00:00
[1576,1577,1578,1579,1580,2066]
["airbnb"]
["airbnb","trooly"]
["sex-workers","airbnb-users"]
Airbnb allegedly considered publicly available data on users to gauge their trustworthiness via algorithmic assessment of personality and behavioral traits, resulting in unexplained bans and alleged discrimination against sex workers.
Airbnb's Trustworthiness Algorithm Allegedly Banned Users without Explanation, and Discriminated against Sex Workers
2017
discriminatory bans
discriminatory bans
bias, content, false
ObjectId(62a18ffaae26c04e23bf1d27)
220
2020-11-11T00:00:00
[1731,1732,1969,2061]
["facebook"]
["facebook"]
["small-businesses-on-facebook"]
Facebook’s AI mistakenly blocked advertisements by small and struggling businesses, after the company allegedly leaned more on algorithms to monitor ads on the platform with little review from human moderators.
Facebook Mistakenly Blocked Small Business Ads
2020
little review
ai
bias, content, false
ObjectId(628ad7417da5b905fb43f208)
194
2018-02-01T00:00:00
[1621]
["unnamed-australian-telecommunications-company"]
["unknown"]
["unnamed-australian-telecommunications-company"]
In early 2018, an Australian telecommunications company’s incident management AI excessively deployed technicians into the field, and was allegedly unable to be stopped by the automation team.
Australian Telco’s Incident Management Bot Excessively Sent Technicians in the Field by Mistake, Allegedly Costing Millions
2018
the field
field
bias, content, false
ObjectId(628b0ea3db7d62b8a823c307)
200
2019-03-01T00:00:00
[1653]
["scammers"]
["unknown"]
["unnamed-uk-based-energy-firm's-ceo"]
Fraudsters allegedly used AI voice technology to impersonate the boss of a UK-based firm's CEO, demanding a transfer of €220,000 over the phone.
Fraudsters Used AI to Mimic Voice of a UK-Based Firm's CEO's Boss
2019
a transfer
transfer
bias, content, false
ObjectId(62988029093243282c69c2b2)
206
2015-03-01T00:00:00
[1675,1676,1677,1678]
["tinder"]
["tinder"]
["tinder-users-over-30-years-old"]
Tinder’s personalized pricing was found by Consumers International to treat age as a major determinant of pricing, which anti-discrimination law experts said could constitute direct discrimination based on age.
Tinder's Personalized Pricing Algorithm Found to Offer Higher Prices for Older Users
2015
a direct discrimination
direct discrimination
bias, content, false
ObjectId(6285d00023ec6cb0db5af13a)
186
2007-07-26T00:00:00
[1590,1591,1592,1593,1788,1933,1934]
["spanish-ministry-of-interior"]
["spanish-secretary-of-state-for-security","spanish-ministry-of-interior"]
["spanish-victims-of-gender-violence"]
In Spain, VioGén, the algorithm that assesses recidivism risk in gender violence cases, has critically underestimated the level of risk in a series of cases that ended in the homicide of women and children since its first deployment.
Algorithm Assessing Risk Faced by Victims of Gender Violence Misclassified Low-Risk Cases, Allegedly Leading to Homicide of Women and Children in Spain
2007
gender violence
risk
bias, content, false
ObjectId(629791afc7e109ab6bc28b6f)
202
2021-12-06T00:00:00
[1655,1656,1657,1658,1721]
["yoon-suk-yeol","yoon-suk-yeol's-campaign"]
["unknown"]
["korean-public"]
A South Korean political candidate created a deepfake avatar which political opponents alleged to be fraudulent and a threat to democracy.
Korean Politician Employed Deepfake as Campaign Representative
2021
a threat
threat
bias, content, false
ObjectId(629f0c2548f09c92aeb5fe4d)
216
2017-10-10T00:00:00
[1724,1924,1925,1926,1927]
["wechat"]
["wechat"]
["black-wechat-users"]
The Chinese platform WeChat provided an inappropriate and racist English translation for the Chinese term for “black foreigner” in its messaging app.
WeChat’s Machine Translation Gave a Racist English Translation for the Chinese Term for “Black Foreigner”
2017
“black foreigner
black foreigner
bias, content, false
ObjectId(62a196265fb208d11b3108fa)
221
2022-03-07T00:00:00
[1733,1734]
["tesla"]
["tesla"]
["road-engineer"]
In Taiwan, a Tesla Model 3 on Autopilot mode, whose driver was not paying attention to the road, collided with a road repair truck. A road engineer immediately placed crash warnings in front of the Tesla, but soon after he was hit and killed by a BMW whose driver failed to see the sign and crashed into the accident scene.
A Road Engineer Killed Following a Collision Involving a Tesla on Autopilot
2022
the accident
accident
bias, content, false
ObjectId(626a2b97f9c5ab809bbc9af1)
179
2022-04-01T00:00:00
[1561,1562,1874]
["openai"]
["openai"]
["minority-groups","underrepresented-groups"]
OpenAI's DALL-E 2, a model that generates images from natural-language descriptions, was shown to carry various risks in use, such as misuse for disinformation, explicit content generation, and reinforcement of gender and racial stereotypes, which were acknowledged by its developers.
Images Generated by OpenAI’s DALL-E 2 Exhibited Bias and Reinforced Stereotypes
2022
racial stereotypes
misuse
bias, content, false
ObjectId(625763ee343edc875fe63aa9)
172
2020-07-01T00:00:00
[1550]
["appriss"]
["appriss"]
["american-physicians","american-pharmacists","american-patients-of-minority-groups","american-patients"]
NarxCare's algorithm assessing a patient’s overdose risk allegedly did not undergo peer-reviewed validation studies, and considered sensitive data with high risk of biases towards women and Black patients such as experience of sexual abuse and criminality.
NarxCare’s Risk Score Model Allegedly Lacked Validation and Trained on Data with High Risk of Bias
2020
sexual abuse
high risk
bias, content, false
ObjectId(62842db6dee309a4a8e14d4b)
181
2022-02-11T00:00:00
[1571,1572]
["cruise"]
["cruise"]
["cruise-vehicle"]
A BMW sedan reportedly made an illegal left turn and collided with a Cruise autonomous vehicle (AV) operating in autonomous mode, causing a minor collision but no injuries.
BMW Sedan Made a Prohibited Left Turn, Colliding with a Cruise Autonomous Vehicle
2022
a minor collision
minor collision
bias, content, false
ObjectId(62868592e0a9519a0ba08a94)
190
2017-01-15T00:00:00
[1610,1611,1612,1613]
["bytedance"]
["bytedance"]
["instagram-users","snapchat-users","american-social-media-users"]
ByteDance allegedly scraped short-form videos, usernames, profile pictures, and descriptions of accounts on Instagram, Snapchat, and other sources, and uploaded them without consent on Flipagram, TikTok’s predecessor, in order to improve its “For You” algorithm's performance on American users.
ByteDance Allegedly Trained "For You" Algorithm Using Content Scraped without Consent from Other Social Platforms
2017
short-form videos
flipagram
bias, content, false
ObjectId(626331bad17b021fce12b51b)
177
2022-04-19T00:00:00
[1558,1583,1600,1601,1602]
["google-docs"]
["google-docs"]
["google-docs-users"]
Google’s “inclusive language” feature, which prompts writers to consider alternatives to non-inclusive words, reportedly also recommended alternatives for words such as “landlord” and “motherboard,” which critics said was a form of obtrusive, unnecessary, and bias-reinforcing speech-policing.
Google’s Assistive Writing Feature Provided Allegedly Unnecessary and Clumsy Suggestions
2022
non-inclusive words
words
bias, content, false
ObjectId(628681d73a32758144dc742b)
188
2018-04-11T00:00:00
[1603,1604,1782,1605]
["salta-city-government"]
["microsoft"]
["salta-teenage-girls","salta-girls-of-minority-groups"]
In 2018, during the abortion-decriminalization debate in Argentina, the Salta city government deployed a teenage-pregnancy predictive algorithm built by Microsoft that allegedly lacked a defined purpose and explicitly considered sensitive information such as disability and whether the girls' homes had access to hot water.
Argentinian City Government Deployed Teenage-Pregnancy Predictive Algorithm Using Invasive Demographic Data
2018
explicitly considered sensitive information
sensitive information
bias, content, false
ObjectId(628686fce0eed158517d4796)
191
2020-10-06T00:00:00
[1614,1615]
["naver"]
["naver"]
["naver-customers"]
The Korean Fair Trade Commission (FTC) imposed a 26.7B KRW fine on Naver for manipulating shopping and video search algorithms, favoring its own online shopping business to boost its market share.
Korean Internet Portal Giant Naver Manipulated Shopping and Video Search Algorithms to Favor In-House Services
2020
manipulating
krw
bias, content, false
ObjectId(628b0fb73a32758144c2c21d)
201
2020-04-14T00:00:00
[1654,2435]
["extinction-rebellion-belgium"]
["unknown"]
["shophie-wilmes","belgian-government"]
A deepfake video showing Belgium’s prime minister speaking of an urgent need to tackle the climate crisis was released by a climate action group.
Climate Action Group Posted Deepfake of Belgian Prime Minister Urging Climate Crisis Action
2020
an urgent need
deepfake video
bias, content, false
ObjectId(62986c4c093243282c6578f5)
204
2022-02-11T00:00:00
[1662,1663,1664,1665]
["zhihu"]
["sangfor-technologies"]
["zhihu-employees","chinese-tech-workers"]
The firing of an employee at Zhihu, a large Q&A platform in China, was allegedly caused by the use of a behavioral perception algorithm which claimed to predict a worker’s resignation risk using their online footprints, such as browsing history and internal communication.
A Chinese Tech Worker at Zhihu Fired Allegedly via a Resignation Risk Prediction Algorithm
2022
the use
resignation risk
bias, content, false
ObjectId(629da7969e8fc9073246a3f2)
209
2020-10-20T00:00:00
[1688,1689,1690,1691,1692]
["tesla"]
["tesla"]
["tesla-drivers"]
The “rolling stop” functionality within the “Aggressive” Full Self Driving (FSD) profile that was released via a Tesla firmware update was recalled and disabled.
Tesla Disabled “Rolling Stop” Functionality Associated with the “Aggressive” Driving Mode
2020
The “rolling stop” functionality
rolling stop
bias, content, false
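Since every record carries a year and one of five Cluster labels, simple aggregations over the dump fall out directly. The snippet below is a minimal sketch under the assumption that the records have been parsed into (incident_id, year, cluster) tuples; `records` is a hypothetical stand-in for the full parsed list, and the three inline tuples are copied from incidents 106, 108, and 107 above.

```python
# A sketch of aggregating the parsed dump by cluster label and by year.
# `records` is a hypothetical stand-in for the full parsed dataset;
# the three inline tuples are copied from incidents 106, 108, and 107 above.
from collections import Counter

records = [
    (106, 2020, "bias, content, false"),
    (108, 2021, "bias, content, false"),
    (107, 2018, "system, recognition, facial"),
]

cluster_counts = Counter(cluster for _, _, cluster in records)
year_counts = Counter(year for _, year, _ in records)

print(cluster_counts.most_common())   # [('bias, content, false', 2), ...]
print(sorted(year_counts.items()))    # [(2018, 1), (2020, 1), (2021, 1)]
```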