Dataset schema (column name: type, observed value/length range):

_id: string (length 34)
incident_id: int64 (1–524)
date: timestamp[ns]
reports: string (length 4–191)
Alleged deployer of AI system: string (length 7–214)
Alleged developer of AI system: string (length 7–127)
Alleged harmed or nearly harmed parties: string (length 8–371)
description: string (length 50–371)
title: string (length 6–170)
year: int64 (approx. 1.98k–2.02k, viewer-rounded)
spacy_negative_outcomes: string (length 3–54)
keybert_negative_outcomes: string (length 2–41)
Cluster: string (5 distinct classes)
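A minimal sketch of how one record of this dataset might be parsed in Python, using the field names from the schema above. The loading mechanism (a raw row as a dict of strings) and the `Incident`/`parse_record` names are assumptions for illustration; the list-valued columns (reports, deployers, developers, harmed parties) appear in the dump as JSON-style strings.

```python
import json
from dataclasses import dataclass

# One record of the incident dataset, mirroring the schema above.
@dataclass
class Incident:
    incident_id: int
    date: str
    reports: list
    deployers: list
    developers: list
    harmed_parties: list
    description: str
    title: str
    year: int
    cluster: str

def parse_record(raw: dict) -> Incident:
    """Convert one raw row (column name -> string value) into an Incident."""
    return Incident(
        incident_id=int(raw["incident_id"]),
        date=raw["date"],
        # List-valued columns are JSON-style strings in the dump.
        reports=json.loads(raw["reports"]),
        deployers=json.loads(raw["Alleged deployer of AI system"]),
        developers=json.loads(raw["Alleged developer of AI system"]),
        harmed_parties=json.loads(raw["Alleged harmed or nearly harmed parties"]),
        description=raw["description"],
        title=raw["title"],
        year=int(raw["year"]),
        cluster=raw["Cluster"],
    )

# Example row taken from the first record below.
row = {
    "incident_id": "219",
    "date": "2020-11-15T00:00:00",
    "reports": "[1730]",
    "Alleged deployer of AI system": '["ezemvelo-kzn-wildlife"]',
    "Alleged developer of AI system": '["unknown"]',
    "Alleged harmed or nearly harmed parties": '["rhinos-in-conservation"]',
    "description": "AI cameras installed by Ezemvelo KZN Wildlife failed to detect poachers.",
    "title": "Poachers Evaded AI Cameras and Killed Four Rhinos",
    "year": "2020",
    "Cluster": "bias, content, false",
}
incident = parse_record(row)
print(incident.incident_id, incident.deployers)
```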
_id: ObjectId(62a045b0d1bc84a9cc93ad7a)
incident_id: 219
date: 2020-11-15T00:00:00
reports: [1730]
Alleged deployer of AI system: ["ezemvelo-kzn-wildlife"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["rhinos-in-conservation"]
description: AI cameras installed by Ezemvelo KZN Wildlife failed to detect poachers when four dehorned rhino carcasses were found.
title: Poachers Evaded AI Cameras and Killed Four Rhinos
year: 2020
spacy_negative_outcomes: failed
keybert_negative_outcomes: poachers
Cluster: bias, content, false
_id: ObjectId(6286854a3a32758144dd5fa7)
incident_id: 189
date: 2019-10-15T00:00:00
reports: [1609,1670,1671,1672,1673,1674]
Alleged deployer of AI system: ["uk-department-of-work-and-pensions"]
Alleged developer of AI system: ["uipath"]
Alleged harmed or nearly harmed parties: ["people-with-disabilities"]
description: People with disabilities were allegedly disproportionately targeted by a benefit fraud detection algorithm which the UK’s Department of Work and Pensions was urged to disclose.
title: Opaque Fraud Detection Algorithm by the UK’s Department of Work and Pensions Allegedly Discriminated against People with Disabilities
year: 2019
spacy_negative_outcomes: fraud
keybert_negative_outcomes: disabilities
Cluster: bias, content, false
_id: ObjectId(629f00a448f09c92aeb39c9e)
incident_id: 214
date: 2020-01-02T00:00:00
reports: [1722]
Alleged deployer of AI system: ["lockport-city-school-district"]
Alleged developer of AI system: ["sn-technologies"]
Alleged harmed or nearly harmed parties: ["black-students"]
description: SN Technologies allegedly misled Lockport City Schools about the performance of its AEGIS face and weapons detection systems, downplaying error rates for Black faces and weapon misidentification.
title: SN Technologies Reportedly Lied to a New York State School District about Its Facial and Weapon Detection Systems’ Performance
year: 2020
spacy_negative_outcomes: error rates
keybert_negative_outcomes: error rates
Cluster: bias, content, false
_id: ObjectId(629f09325fb208d11b8efe2d)
incident_id: 215
date: 2020-04-01T00:00:00
reports: [1723]
Alleged deployer of AI system: ["facebook"]
Alleged developer of AI system: ["facebook"]
Alleged harmed or nearly harmed parties: ["facebook-content-moderators"]
description: Content moderators and employees at Facebook demanded better working conditions, as the automated content moderation system allegedly failed to achieve sufficient performance and exposed human reviewers to psychologically hazardous content such as graphic violence and child abuse.
title: Facebook Content Moderators Demand Better Working Conditions Due to Allegedly Inadequate AI Content Moderation
year: 2020
spacy_negative_outcomes: graphic violence
keybert_negative_outcomes: hazardous content
Cluster: bias, content, false
_id: ObjectId(6286888d158fde27b10d8dc5)
incident_id: 192
date: 2022-03-17T00:00:00
reports: [1616,1617]
Alleged deployer of AI system: ["estee-lauder"]
Alleged developer of AI system: ["hirevue"]
Alleged harmed or nearly harmed parties: ["pseudonymous-estee-lauder's-former-staff"]
description: Three make-up artists lost their positions following an algorithmically assessed video interview by HireVue, which reportedly failed to provide an adequate explanation of the findings.
title: Three Make-Up Artists Lost Jobs Following Black-Box Automated Decision by HireVue
year: 2022
spacy_negative_outcomes: their positions
keybert_negative_outcomes: findings
Cluster: bias, content, false
_id: ObjectId(629f0db656a7be53bbed68fa)
incident_id: 217
date: 2016-11-16T00:00:00
reports: [1725,1726]
Alleged deployer of AI system: ["evolver"]
Alleged developer of AI system: ["evolver"]
Alleged harmed or nearly harmed parties: ["fair-visitors"]
description: At the 18th China Hi-Tech Fair, a robot suddenly smashed through a glass booth and injured a visitor, after a staff member reportedly mistakenly pressed a button, causing it to reverse and accelerate.
title: Robot at a Chinese Tech Fair Smashed a Glass Booth, Injuring a Visitor
year: 2016
spacy_negative_outcomes: a button
keybert_negative_outcomes: button
Cluster: bias, content, false
_id: ObjectId(629c5b57fbbeec2d0fb4fc64)
incident_id: 208
date: 2021-05-01T00:00:00
reports: [1683,1684,1685,1686,1687,1759,1760,1761]
Alleged deployer of AI system: ["tesla"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["tesla-drivers"]
description: In late 2021, Tesla owners’ complaints to the National Highway Traffic Safety Administration about sudden unexpected automatic braking rapidly increased, coinciding with when radar was no longer equipped in its Model 3 and Model Y vehicles.
title: Tesla Phantom Braking Complaints Surged, Allegedly Linked to Tesla Vision Rollout
year: 2021
spacy_negative_outcomes: about sudden unexpected automatic braking
keybert_negative_outcomes: complaints
Cluster: bias, content, false
_id: ObjectId(629dc73db462c8b2647f965e)
incident_id: 212
date: 2021-01-01T00:00:00
reports: [1711,1712,1713,1714]
Alleged deployer of AI system: ["xpeng-motors"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["xpeng-motors-customers"]
description: The Chinese electric vehicle (EV) firm XPeng Motors was fined by local market regulators for illegally collecting in-store customers’ facial images without their consent for six months.
title: XPeng Motors Fined For Illegal Collection of Consumers’ Faces Using Facial Recognition Cameras
year: 2021
spacy_negative_outcomes: their consent
keybert_negative_outcomes: ev
Cluster: bias, content, false
_id: ObjectId(628ad5c80d8089cc43894760)
incident_id: 193
date: 2013-11-27T00:00:00
reports: [1620]
Alleged deployer of AI system: ["target"]
Alleged developer of AI system: ["fireeye"]
Alleged harmed or nearly harmed parties: ["target","target-customers"]
description: Alerts about a Target data breach were reportedly ignored by Minneapolis Target staff because they were buried among many other potential false alerts, and because some of the company’s network-infiltration alerting systems had been turned off to reduce such false alerts, resulting in the theft of millions of customers’ private data.
title: Excessive Automated Monitoring Alerts Ignored by Staff, Resulting in Private Data Theft of Seventy Million Target Customers
year: 2013
spacy_negative_outcomes: such false alerts
keybert_negative_outcomes: such false alerts
Cluster: bias, content, false
_id: ObjectId(628ae659d50929fc8d419df7)
incident_id: 196
date: 2013-09-01T00:00:00
reports: [1633,1634,1635,1636,1637]
Alleged deployer of AI system: ["pakistan-national-database-and-registration-authority"]
Alleged developer of AI system: ["pakistan-national-database-and-registration-authority"]
Alleged harmed or nearly harmed parties: ["pakistani-citizens"]
description: When the leader of the Afghan Taliban was found to possess a valid ID card in Pakistan’s national biometric identification database, Pakistan launched a national re-verification campaign linked to numerous changes in recognition status and losses of service.
title: Compromise of National Biometric ID Card System Leads to Reverification and Change of Status
year: 2013
spacy_negative_outcomes: numerous changes
keybert_negative_outcomes: recognition status
Cluster: bias, content, false
_id: ObjectId(629c4e9f9bed6f7732c7ee3f)
incident_id: 207
date: 2021-01-10T00:00:00
reports: [1679,1680,1681,1682]
Alleged deployer of AI system: ["honolulu-police-department"]
Alleged developer of AI system: ["boston-dynamics"]
Alleged harmed or nearly harmed parties: ["honolulu-homeless-people"]
description: The Honolulu Police Department spent federal pandemic relief funds on a robot dog to take body temperatures and patrol a homeless quarantine encampment, which local civil rights advocates criticized as dehumanizing.
title: Hawaii Police Deployed Robot Dog to Patrol a Homeless Encampment
year: 2021
spacy_negative_outcomes: body temperatures
keybert_negative_outcomes: robot dog
Cluster: bias, content, false
_id: ObjectId(629dbfd2927145fff913f831)
incident_id: 211
date: 2021-12-11T00:00:00
reports: [1696,1697,1698,1699,1700,1701,1702]
Alleged deployer of AI system: ["taxis-g7"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["pedestrians"]
description: In Paris, about 20 people were injured in an accident involving a Tesla Model 3 taxi cab which was reportedly caused by a sudden unintended acceleration (SUA) episode and braking issues.
title: A Tesla Taxi Cab Involved in an Accident in Paris with Twenty Injuries
year: 2021
spacy_negative_outcomes: an accident
keybert_negative_outcomes: accident
Cluster: bias, content, false
_id: ObjectId(6285d69123ec6cb0db5c574a)
incident_id: 187
date: 2022-02-04T00:00:00
reports: [1596,1597,1598]
Alleged deployer of AI system: ["ai-addict"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["john-bernal","san-jose-public"]
description: A YouTuber who was a Tesla employee conducted an on-road review of Tesla's Full Self-Driving (FSD) Beta, showing its navigation in various road environments in San Jose and a collision with bollards while on Autopilot, allegedly causing his dismissal from the company.
title: YouTuber Tested Tesla on Self Driving Mode, Colliding with Street Pylons
year: 2022
spacy_negative_outcomes: his dismissal
keybert_negative_outcomes: fsd
Cluster: bias, content, false
_id: ObjectId(6285caf4c4ac527644be031f)
incident_id: 185
date: 2022-03-01T00:00:00
reports: [1586,1587,1588,1589]
Alleged deployer of AI system: ["tiktok"]
Alleged developer of AI system: ["tiktok"]
Alleged harmed or nearly harmed parties: ["tiktok-users","tiktok-new-users"]
description: An investigation by NewsGuard into TikTok’s handling of content related to the Russia-Ukraine war showed its “For You” algorithm pushing new users towards false and misleading content about the war within less than an hour of signing up.
title: TikTok's "For You" Algorithm Directed New Users towards Disinformation about the War in Ukraine
year: 2022
spacy_negative_outcomes: false and misleading content
keybert_negative_outcomes: misleading content
Cluster: bias, content, false
_id: ObjectId(629874f4257531f2d69d1030)
incident_id: 205
date: 2022-02-25T00:00:00
reports: [1666,1667,1668,1669]
Alleged deployer of AI system: ["individuals-in-the-donbass-region","individuals-in-russia","media-organizations-in-crimea"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["ukrainian-social-media-users"]
description: According to security reports by Meta, fictitious personas with GAN-generated profile pictures were used by people operating in Russia and Ukraine to push a disinformation campaign targeting Ukrainian social media users, and were taken down.
title: AI-Generated Profiles Used in Disinformation Campaign Targeting Ukrainians
year: 2022
spacy_negative_outcomes: a disinformation campaign
keybert_negative_outcomes: fictitious personas
Cluster: bias, content, false
_id: ObjectId(629f165a47b12f3b70c05fa4)
incident_id: 218
date: 2020-06-01T00:00:00
reports: [1727,1728,1950]
Alleged deployer of AI system: ["tesla"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["delivery-truck","pedestrians","tesla-drivers"]
description: On a highway in Taiwan, a Tesla sedan, reportedly operating on Autopilot mode, crashed into a large overturned truck, barely missing a pedestrian.
title: Tesla on Autopilot Crashed into Flipped Truck on Taiwan Highway
year: 2020
spacy_negative_outcomes: a highway
keybert_negative_outcomes: highway
Cluster: bias, content, false
_id: ObjectId(628aeecad50929fc8d43513b)
incident_id: 197
date: 2021-10-01T00:00:00
reports: [1638,1639,1640,1641]
Alleged deployer of AI system: ["facebook"]
Alleged developer of AI system: ["facebook"]
Alleged harmed or nearly harmed parties: ["facebook-users"]
description: Facebook's internal report showed that an alleged software bug, lasting at least six months, caused moderator-flagged posts and other harmful content to evade down-ranking filters, leading to surges of misinformation in users' News Feeds.
title: Facebook Internally Reported Failure of Ranking Algorithm, Exposing Harmful Content to Viewers over Months
year: 2021
spacy_negative_outcomes: other harmful content
keybert_negative_outcomes: software bug
Cluster: bias, content, false
_id: ObjectId(628af5401212a93e232c56d3)
incident_id: 199
date: 2019-04-01T00:00:00
reports: [1647,1648,1649,1650,1651,2024,2031]
Alleged deployer of AI system: ["ever-ai"]
Alleged developer of AI system: ["ever-ai"]
Alleged harmed or nearly harmed parties: ["ever-ai-users"]
description: Ever AI, now Paravision AI, allegedly failed to inform customers about the development and use of facial recognition that facilitates the sale of customers’ data to various businesses, a business model that critics said was an egregious violation of privacy.
title: Ever AI Reportedly Deceived Customers about FRT Use in App
year: 2019
spacy_negative_outcomes: an egregious violation
keybert_negative_outcomes: ai
Cluster: bias, content, false
_id: ObjectId(629db470fee4758bdce67001)
incident_id: 210
date: 2020-04-28T00:00:00
reports: [1693,1694,1695]
Alleged deployer of AI system: ["bharatiya-janata-yuva-morcha"]
Alleged developer of AI system: ["persistent-systems"]
Alleged harmed or nearly harmed parties: ["indian-voters","indian-social-media-users","indian-women-journalists"]
description: The Indian political social media app Tek Fog allegedly allowed operatives affiliated with the ruling political party to hijack social media trends and manipulate public opinion on other apps such as Twitter and WhatsApp, which opposition parties denounced as a national security threat.
title: Indian Political App Tek Fog Allegedly Hijacked Trends and Manipulated Public Opinion on Other Social Media Platforms
year: 2020
spacy_negative_outcomes: Tek Fog
keybert_negative_outcomes: whatsapp
Cluster: bias, content, false
_id: ObjectId(62a7205b15d14c6d6ceba1a5)
incident_id: 235
date: 2016-04-15T00:00:00
reports: [1756]
Alleged deployer of AI system: ["ping-an"]
Alleged developer of AI system: ["ping-an"]
Alleged harmed or nearly harmed parties: ["ping-an-customers","chinese-minority-groups"]
description: Ping An, a large insurance company in China, reportedly determined customers’ untrustworthiness and unprofitability via facial-recognition measurements of micro-expressions and body-mass index (BMI), which critics argued was likely to make mistakes, discriminate against certain ethnic groups, and undermine its own industry.
title: Chinese Insurer Ping An Employed Facial Recognition to Determine Customers’ Untrustworthiness, Which Critics Alleged to Likely Make Errors and Discriminate
year: 2016
spacy_negative_outcomes: Customers’ untrustworthiness
keybert_negative_outcomes: untrustworthiness
Cluster: bias, content, false
_id: ObjectId(62a6c3e11cea23c4caf38bc5)
incident_id: 227
date: 2018-01-12T00:00:00
reports: [1744,1973,1974]
Alleged deployer of AI system: ["waze"]
Alleged developer of AI system: ["waze"]
Alleged harmed or nearly harmed parties: ["tourists","waze-users"]
description: Tourists driving through Vermont blamed Waze for directing them onto a boat launch on Lake Champlain; in dark and foggy weather, the vehicle slid into the water before the drivers realized where they were.
title: Waze App Allegedly Caused Tourists’ Car to End up in Lake Champlain, Vermont
year: 2018
spacy_negative_outcomes: the time
keybert_negative_outcomes: waze
Cluster: bias, content, false
_id: ObjectId(62a19d1f6a8a811a6084ea37)
incident_id: 222
date: 2020-07-18T00:00:00
reports: [1735]
Alleged deployer of AI system: ["satria-technologies"]
Alleged developer of AI system: ["openai"]
Alleged harmed or nearly harmed parties: ["thoughts-users","twitter-users"]
description: Tweets created by Thoughts, a tweet generation app that leverages OpenAI’s GPT-3, allegedly exhibited toxicity when given prompts related to minority groups.
title: Thoughts App Allegedly Created Toxic Tweets
year: 2020
spacy_negative_outcomes: minority groups
keybert_negative_outcomes: toxicity
Cluster: bias, content, false
_id: ObjectId(62a1a65fae26c04e23c48fcd)
incident_id: 223
date: 2019-10-09T00:00:00
reports: [1737]
Alleged deployer of AI system: ["hive-box"]
Alleged developer of AI system: ["hive-box"]
Alleged harmed or nearly harmed parties: ["hive-box-customers"]
description: Facial-recognition locks by Hive Box, an express delivery locker company in China, were easily opened by a group of fourth-graders in a science-club demo using only a printed photo of the intended recipient’s face, leaving contents vulnerable to theft.
title: Hive Box Facial-Recognition Locks Hacked by Fourth Graders Using Intended Recipient’s Facial Photo
year: 2019
spacy_negative_outcomes: a group
keybert_negative_outcomes: theft
Cluster: bias, content, false
_id: ObjectId(62a6d3542cccb6726ae25091)
incident_id: 229
date: 2018-04-23T00:00:00
reports: [1746,1747]
Alleged deployer of AI system: ["youtube"]
Alleged developer of AI system: ["youtube"]
Alleged harmed or nearly harmed parties: ["youtube-users","youtube-content-creators"]
description: YouTube’s thumbnail monitoring system was allegedly evaded by content farms, such as ones in Cambodia, that spiked viewership and generated ad revenue using bestiality-themed thumbnails.
title: Content Using Bestiality Thumbnails Allegedly Evaded YouTube’s Thumbnail Monitoring System
year: 2018
spacy_negative_outcomes: evaded
keybert_negative_outcomes: thumbnails
Cluster: bias, content, false
_id: ObjectId(62a1ab6756a7be53bb9096a4)
incident_id: 224
date: 2020-07-01T00:00:00
reports: [1738]
Alleged deployer of AI system: ["wechat-pay"]
Alleged developer of AI system: ["wechat"]
Alleged harmed or nearly harmed parties: ["wechat-pay-users"]
description: In China, fraudsters bypassed facial-recognition security for online financial transactions on WeChat Pay by crafting identity-verification GIFs of victims from their selfies on WeChat Moments, a social media platform.
title: WeChat Pay's Facial Recognition Security Evaded by Scammers Using Victims’ Social Media Content
year: 2020
spacy_negative_outcomes: WeChat Pay
keybert_negative_outcomes: wechat moments
Cluster: bias, content, false
_id: ObjectId(62a98ef2cfb6a09201e5595e)
incident_id: 239
date: 2009-09-01T00:00:00
reports: [1764]
Alleged deployer of AI system: ["intensive-partnerships-for-effective-teaching"]
Alleged developer of AI system: ["intensive-partnerships-for-effective-teaching"]
Alleged harmed or nearly harmed parties: ["students","low-income-minority-students","teachers"]
description: The Gates-Foundation-funded Intensive Partnerships for Effective Teaching initiative’s algorithmic program to assess teacher performance reportedly failed to achieve its goals for student outcomes, particularly for minority students, and was criticized for potentially harming teachers.
title: Algorithmic Teacher Evaluation Program Failed Student Outcome Goals and Allegedly Caused Harm Against Teachers
year: 2009
spacy_negative_outcomes: its goals
keybert_negative_outcomes: teacher performance
Cluster: bias, content, false
_id: ObjectId(62a1b2ba9b0df3a9e564faf2)
incident_id: 225
date: 2017-04-07T00:00:00
reports: [1739,1740]
Alleged deployer of AI system: ["jupiter-hospital","memorial-sloan-kettering"]
Alleged developer of AI system: ["ibm-watson-health"]
Alleged harmed or nearly harmed parties: ["oncologists","cancer-patients"]
description: Internal documents from IBM Watson Health showed negative assessments from customers such as Florida’s Jupiter Hospital and Memorial Sloan Kettering criticizing its Watson for Oncology product for allegedly unsafe and incorrect cancer treatment recommendations.
title: IBM Watson for Oncology Criticized by Customers for Allegedly Unsafe and Inaccurate Cancer Treatment Recommendations
year: 2017
spacy_negative_outcomes: negative assessments
keybert_negative_outcomes: negative assessments
Cluster: bias, content, false
_id: ObjectId(62a70b1e97c1945c062019f1)
incident_id: 232
date: 2018-04-29T00:00:00
reports: [1751,1752,1975,2018]
Alleged deployer of AI system: ["tesla"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["yoshihiro-umeda","pedestrians","tesla-drivers"]
description: A Tesla Model X operating on Autopilot reportedly failed to recognize parked motorcycles, pedestrians, and a van in its path in Kanagawa, Japan, and ran over a motorcyclist who had stopped after a member of his motorcycle group was involved in an accident.
title: Tesla Model X on Autopilot Missed Parked Vehicles and Pedestrians, Killing Motorcyclist in Japan
year: 2018
spacy_negative_outcomes: an accident
keybert_negative_outcomes: autopilot
Cluster: bias, content, false
_id: ObjectId(62de08794ad8b68d9e3be1ad)
incident_id: 242
date: 2021-02-24T00:00:00
reports: [1775]
Alleged deployer of AI system: ["chakan-plant-of-automotive-stampings-and-assemblies"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["umesh-ramesh-dhake"]
description: A sensor snag resulted in an automotive-parts factory robot falling on a factory worker in India, causing his death.
title: Manufacturing Robot Failure Caused Factory Worker's Death in India
year: 2021
spacy_negative_outcomes: A sensor snag
keybert_negative_outcomes: sensor snag
Cluster: bias, content, false
_id: ObjectId(62df75b523ef2c676c07d179)
incident_id: 244
date: 2020-08-03T00:00:00
reports: [1783]
Alleged deployer of AI system: ["aurora-police-department"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["the-gilliam-family"]
description: An automated plate reader reportedly matched the license plate of a family’s minivan to that of a motorcycle in Montana that had allegedly been stolen earlier in the year, resulting in the family and their children being held at gunpoint and detained in handcuffs by multiple Aurora police officers.
title: Colorado Police’s Automated License Plate Reader (ALPR) Matched a Family’s Minivan’s Plate to That of a Stolen Vehicle Allegedly, Resulting in Detainment at Gunpoint
year: 2020
spacy_negative_outcomes: the year
keybert_negative_outcomes: alleged motorcycle
Cluster: bias, content, false
_id: ObjectId(62a70f6215d14c6d6ce9e579)
incident_id: 233
date: 2018-12-03T00:00:00
reports: [1753]
Alleged deployer of AI system: ["tumblr"]
Alleged developer of AI system: ["tumblr"]
Alleged harmed or nearly harmed parties: ["tumblr-content-creators","tumblr-users"]
description: Tumblr’s automated tools to identify adult content were reported to have incorrectly flagged inoffensive images as explicit, following its announcement to ban all adult content on the platform.
title: Tumblr Automated Pornography-Detecting Algorithms Erroneously Flagged Inoffensive Images as Explicit
year: 2018
spacy_negative_outcomes: inoffensive images
keybert_negative_outcomes: platform
Cluster: bias, content, false
_id: ObjectId(62a6d8341cea23c4caf855ce)
incident_id: 230
date: 2019-03-01T00:00:00
reports: [1748]
Alleged deployer of AI system: ["tesla"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["jeremy-beren-banner","tesla-users"]
description: In Florida, a Model 3 Tesla on Autopilot mode crashed into a tractor-trailer truck, killing the 50-year-old driver.
title: Model 3 Tesla on Autopilot Crashed into a Truck in Florida, Killing Driver
year: 2019
spacy_negative_outcomes: Autopilot mode
keybert_negative_outcomes: autopilot mode
Cluster: bias, content, false
_id: ObjectId(62df76e9a6f43c979e242859)
incident_id: 246
date: 2014-04-16T00:00:00
reports: [1785,1787]
Alleged deployer of AI system: ["prairie-village-police-department"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["mark-molner"]
description: An automated license plate reader (ALPR) camera misread a 7 as a 2 and incorrectly alerted local police about a stolen Oldsmobile, which an officer allegedly failed to verify before effecting a traffic stop on a BMW in a Kansas City suburb.
title: Misreading of an Automated License Plate Reader (ALPR) Unverified by Police, Resulting in Traffic Stop in Missouri
year: 2014
spacy_negative_outcomes: a traffic stop
keybert_negative_outcomes: camera misread
Cluster: bias, content, false
_id: ObjectId(62a6c8ed1b7b69ce98ebe04c)
incident_id: 228
date: 2019-02-01T00:00:00
reports: [1745]
Alleged deployer of AI system: ["apple"]
Alleged developer of AI system: ["apple"]
Alleged harmed or nearly harmed parties: ["tourists","apple-maps-users"]
description: Near Los Angeles, Apple Maps allegedly directed a couple on a ski trip in the mountains onto an unconventional route out of town, where the drivers found themselves lost and stuck on an unpaved road in the snow.
title: Apple Maps Allegedly Directed Ski Trip Couple Onto Unpaved Road in the Mountains
year: 2019
spacy_negative_outcomes: an unconventional route
keybert_negative_outcomes: unconventional route
Cluster: bias, content, false
_id: ObjectId(62a6dccdf24b23be1794e864)
incident_id: 231
date: 2016-01-20T00:00:00
reports: [1749,1750,208,1945]
Alleged deployer of AI system: ["tesla"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["gao-yaning","tesla-drivers"]
description: A Tesla Model S collided with and killed a road sweeper on a highway near Handan, China, an accident where Tesla previously said it was not able to determine whether Autopilot was operating at the time of the crash.
title: A Tesla Crashed into and Killed a Road Sweeper on a Highway in China
year: 2016
spacy_negative_outcomes: the crash
keybert_negative_outcomes: crash
Cluster: year, risk, crash
_id: ObjectId(62df765656418e2a5bdd7c06)
incident_id: 245
date: 2009-03-30T00:00:00
reports: [1784]
Alleged deployer of AI system: ["san-francisco-police-department"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["denise-green"]
description: In San Francisco, an automated license plate reader (ALPR) camera misread a plate number as belonging to a stolen vehicle of a different make, but police did not visually confirm its poor-quality photo, allegedly despite multiple chances to do so before making a traffic stop, causing an innocent person to be pulled over at gunpoint and restrained in handcuffs.
title: Unverified Misreading by Automated Plate Reader Led to Traffic Stop and Restraint of an Innocent Person at Gunpoint in California
year: 2009
spacy_negative_outcomes: the wrong make
keybert_negative_outcomes: poor quality
Cluster: bias, content, false
_id: ObjectId(62e0aad0a6f43c979e3e6e1b)
incident_id: 248
date: 2018-11-23T00:00:00
reports: [1789,1790]
Alleged deployer of AI system: ["contra-costa-county-sheriff"]
Alleged developer of AI system: ["vigilant-solutions"]
Alleged harmed or nearly harmed parties: ["brian-hofer"]
description: In Oakland, a previously stolen rental car that had been returned but allegedly not updated in the police database was pinged by an automated license plate reader (ALPR) camera, leading police to wrongfully detain an innocent person, reportedly with excessive force and improper conduct.
title: Automated License Plate Camera Notified Police about a Previously Stolen Rental Car that was Returned, Causing an Innocent Person to be Detained at Gunpoint in California
year: 2018
spacy_negative_outcomes: improper conduct
keybert_negative_outcomes: wrongful detainment
Cluster: bias, content, false
_id: ObjectId(62a725b6a147e7dbd3894c5d)
incident_id: 236
date: 2022-04-13T00:00:00
reports: [1757]
Alleged deployer of AI system: ["scammers"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["email-users"]
description: GAN faces were allegedly used by scammers alongside a parked domain and a fake website to impersonate a Boston law firm.
title: AI-Generated Faces Used by Scammers to Pose as a Law Firm in Boston
year: 2022
spacy_negative_outcomes: a fake website
keybert_negative_outcomes: scammers
Cluster: bias, content, false
_id: ObjectId(62a717230927356849c4d5df)
incident_id: 234
date: 2019-09-06T00:00:00
reports: [1754,1755]
Alleged deployer of AI system: ["waze"]
Alleged developer of AI system: ["waze"]
Alleged harmed or nearly harmed parties: ["los-gatos-residents"]
description: The Waze app was blamed by Los Gatos residents for contributing to high wildfire-hazard risk by allegedly routing weekend beach-going drivers through their neighborhoods, effectively choking off their single escape route in the event of a medical emergency or wildfire.
title: Waze Allegedly Frequently Routed Drivers through the Town of Los Gatos, Blocking Its Single Wildfire Escape Route
year: 2019
spacy_negative_outcomes: their single escape route
keybert_negative_outcomes: waze app
Cluster: bias, content, false
_id: ObjectId(62a8e5811afd5f65688b4c58)
incident_id: 238
date: 2018-10-01T00:00:00
reports: [1762]
Alleged deployer of AI system: ["oregon-department-of-human-services"]
Alleged developer of AI system: ["oregon-department-of-human-services"]
Alleged harmed or nearly harmed parties: ["children-of-minority-groups","families-of-minority-groups"]
description: Oregon’s Department of Human Services (DHS) stopped using its Safety at Screening Tool, which aimed to predict the risk that children would end up in foster care or be investigated in the future, and opted for a new process allegedly to reduce disparities and improve racially equitable decision-making.
title: Oregon’s Screening Tool for Child Abuse Cases Discontinued Following Concerns of Racial Bias
year: 2018
spacy_negative_outcomes: the risk
keybert_negative_outcomes: dhs
Cluster: bias, content, false
_id: ObjectId(62a31418ae26c04e232cbb19)
incident_id: 226
date: 2015-04-01T00:00:00
reports: [1741,1742,1743]
Alleged deployer of AI system: ["waze"]
Alleged developer of AI system: ["waze"]
Alleged harmed or nearly harmed parties: ["sherman-oaks-residents","waze-users","los-angeles-city-government"]
description: For years, Waze has, in an attempt to cut travel times, allegedly caused more traffic and guided drivers to make unsafe and often un-permitted traffic decisions, which was described by a Los Angeles city council member as a threat to public safety.
title: Waze Allegedly Clogged Streets and Directed Drivers to Make Unsafe Traffic Decisions
year: 2015
spacy_negative_outcomes: a threat
keybert_negative_outcomes: waze
Cluster: bias, content, false
_id: ObjectId(62de06966bb8effab3aa069d)
incident_id: 241
date: 2022-07-21T00:00:00
reports: [1772,1773,1774,1776,1781]
Alleged deployer of AI system: ["russian-chess-federation"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["child-named-christopher"]
description: A chess robot at a tournament in Russia broke the finger of a child who reached onto the board before the robot had completed its move.
title: Chess-Playing Robot Broke Child's Finger in Russia
year: 2022
spacy_negative_outcomes: its move
keybert_negative_outcomes: board
Cluster: bias, content, false
_id: ObjectId(62b65a72b7a838d899c3005c)
incident_id: 240
date: 2021-06-29T00:00:00
reports: [1767,1768,1769,1770,2230]
Alleged deployer of AI system: ["github","programmers"]
Alleged developer of AI system: ["github"]
Alleged harmed or nearly harmed parties: ["intellectual-property-rights-holders"]
description: Users of GitHub Copilot can produce source code subject to license requirements without attributing and licensing the code to the rights holder.
title: GitHub Copilot, Copyright Infringement and Open Source Licensing
year: 2021
spacy_negative_outcomes: the code
keybert_negative_outcomes: code
Cluster: bias, content, false
_id: ObjectId(62df67d35939a0bbe4e9d758)
incident_id: 243
date: 2020-01-01T00:00:00
reports: [1777,1778]
Alleged deployer of AI system: ["unknown"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["twitter","twitter-users","twitter-users-participating-in-covid-19-discussions"]
description: Bots by anonymous actors were found by researchers to make up roughly half of Twitter accounts participating in COVID-19 discussions, many of which posted tweets about “reopening America“.
title: Bots Allegedly Made up Roughly Half of Twitter Accounts in Discussions Surrounding COVID-19 Related Issues
year: 2020
spacy_negative_outcomes: roughly half
keybert_negative_outcomes: bots
Cluster: bias, content, false
_id: ObjectId(62ea330f50582f2a6babdf2b)
incident_id: 270
date: 2011-04-18T00:00:00
reports: [1851]
Alleged deployer of AI system: ["apple"]
Alleged developer of AI system: ["apple"]
Alleged harmed or nearly harmed parties: ["renren","buding-movie-tickets","yi-xia","dangdang","chinese-startups","chinese-companies"]
description: Following Apple’s changes to the ranking algorithm in its iTunes App Store, apps by allegedly reputable companies and local startups in China experienced significant drops in ranking order.
title: Apple Tweaked App Store Ranking Algorithms, Allegedly Resulted in Demotion of Local Apps in China
year: 2011
spacy_negative_outcomes: significant drops
keybert_negative_outcomes: ranking order
Cluster: bias, content, false
_id: ObjectId(62ee0aa455716343a47d06a5)
incident_id: 271
date: 2022-07-24T00:00:00
reports: [1852,1861,1862]
Alleged deployer of AI system: ["tesla"]
Alleged developer of AI system: ["tesla"]
Alleged harmed or nearly harmed parties: ["landon-embry","motorcyclists","tesla-drivers"]
description: A Tesla Model 3 operating on Autopilot mode slammed into the back of a Harley-Davidson motorcycle on an interstate in Utah, throwing the rider from the bike and killing him instantly.
title: Tesla Model 3 Sedan on Autopilot Killed Motorcyclist in a Rear-End Collision in Utah
year: 2022
spacy_negative_outcomes: the back
keybert_negative_outcomes: rider
Cluster: bias, content, false
_id: ObjectId(62ea2d14e98668f51871cdfa)
incident_id: 268
date: 2020-03-16T00:00:00
reports: [1849,1929]
Alleged deployer of AI system: ["facebook","twitter","youtube"]
Alleged developer of AI system: ["facebook","twitter","youtube"]
Alleged harmed or nearly harmed parties: ["international-criminal-court-investigators","international-court-of-justice-investigators","investigative-journalists","criminal-investigators","victims-of-crimes-documented-on-social-media"]
description: Automated permanent removal, without archival, of violating social media content such as terrorism, violent extremism, and hate speech allegedly prevented its potential use in investigating serious crimes and hampered criminal-accountability efforts.
title: Permanent Removal of Social Media Content via Automated Tools Allegedly Prevented Investigative Efforts
year: 2020
spacy_negative_outcomes: violent extremism
keybert_negative_outcomes: permanent removal
Cluster: bias, content, false
_id: ObjectId(62e0b35fd1725b4ba7c3444a)
incident_id: 251
date: 2018-08-01T00:00:00
reports: [1794,2384]
Alleged deployer of AI system: ["amazon"]
Alleged developer of AI system: ["amazon"]
Alleged harmed or nearly harmed parties: ["small-businesses-on-amazon","amazon-customers"]
description: Amazon tweaked its product-search algorithm to boost and guide customers toward more profitable in-house products instead of mainly showing the most relevant and best-selling listings, which its internal engineers and lawyers alleged violated the company’s best-for-customer principle.
title: Amazon Allegedly Tweaked Search Algorithm to Boost Its Own Products
year: 2018
spacy_negative_outcomes: its internal engineers
keybert_negative_outcomes: listings
Cluster: bias, content, false
_id: ObjectId(62f2040555716343a41ffb47)
incident_id: 273
date: 2020-12-24T00:00:00
reports: [1856]
Alleged deployer of AI system: ["faceapp"]
Alleged developer of AI system: ["faceapp"]
Alleged harmed or nearly harmed parties: ["faceapp-non-binary-presenting-users","faceapp-transgender-users","faceapp-users"]
description: FaceApp’s algorithm was reported by a user to have predicted different genders for two mostly identical facial photos with only a slight difference in eyebrow thickness.
title: FaceApp Predicted Different Genders for Similar User Photos with Slight Variations
year: 2020
spacy_negative_outcomes: different genders
keybert_negative_outcomes: different genders
Cluster: bias, content, false
_id: ObjectId(62e78c54929b426d214e30ed)
incident_id: 259
date: 2022-06-03T00:00:00
reports: [1822,1842]
Alleged deployer of AI system: ["yannic-kilcher"]
Alleged developer of AI system: ["yannic-kilcher"]
Alleged harmed or nearly harmed parties: ["internet-social-platform-users"]
description: A YouTuber built GPT-4chan, a model based on OpenAI’s GPT-J and trained on posts containing racism, misogyny, and antisemitism collected from 4chan’s “politically incorrect” board, which he made publicly available, and deployed as multiple bots posting thousands of messages on the same 4chan board as a prank.
title: YouTuber Built, Made Publicly Available, and Released Model Trained on Toxic 4chan Posts as Prank
year: 2022
spacy_negative_outcomes: a prank
keybert_negative_outcomes: prank
Cluster: bias, content, false
_id: ObjectId(62e51c32981a526a00e7e1b2)
incident_id: 255
date: 2020-05-31T00:00:00
reports: [1805,1806,1807,1808,1809,1431,1811,1812,1813]
Alleged deployer of AI system: ["chicago-police-department"]
Alleged developer of AI system: ["shotspotter"]
Alleged harmed or nearly harmed parties: ["michael-williams"]
description: ShotSpotter audio was previously admitted as evidence to convict an innocent Black man in a Chicago murder case, resulting in his nearly year-long detention before prosecutors dismissed the case for insufficient evidence.
title: Unreliable ShotSpotter Audio Previously Used to Convict Chicago Man in Murder Case
year: 2020
spacy_negative_outcomes: insufficient evidence
keybert_negative_outcomes: murder case
Cluster: bias, content, false
_id: ObjectId(62ea182550582f2a6ba7f130)
incident_id: 266
date: 2022-01-15T00:00:00
reports: [1844,2532,2533,2534,2535,2536,2537,2538]
Alleged deployer of AI system: ["replika"]
Alleged developer of AI system: ["replika"]
Alleged harmed or nearly harmed parties: ["replika-users","replika-male-users","replika"]
description: Replika's AI-powered "digital companions" were allegedly abused by their users, who posted on Reddit about abusive behaviors and interactions such as using slurs, roleplaying violent acts, and simulating sexual abuse.
title: Replika's "AI Companions" Reportedly Abused by Its Users
year: 2022
spacy_negative_outcomes: sexual abuse
keybert_negative_outcomes: abusive behaviors
Cluster: bias, content, false
_id: ObjectId(62e0c365a6f43c979e413e86)
incident_id: 253
date: 2022-05-18T00:00:00
reports: [1796,1797,1798,1799]
Alleged deployer of AI system: ["cruise"]
Alleged developer of AI system: ["cruise"]
Alleged harmed or nearly harmed parties: ["san-francisco-traffic-participants","san-francisco-public"]
description: Cruise’s autonomous vehicles were shown on video stopping in the middle of the road and causing blockages in San Francisco after being disabled, allegedly because they lost connection to the company’s server.
title: Cruise's Self-Driving Cars Allegedly Lost Connection to Their Server, Causing Traffic Blockages in San Francisco
year: 2022
spacy_negative_outcomes: lost connection
keybert_negative_outcomes: video stopping
Cluster: bias, content, false
_id: ObjectId(62e0ad4dd1725b4ba7c29784)
incident_id: 249
date: 2016-10-01T00:00:00
reports: [1791,1792]
Alleged deployer of AI system: ["chinese-government"]
Alleged developer of AI system: ["chinese-government"]
Alleged harmed or nearly harmed parties: ["uyghur-people","turkic-muslim-ethnic-groups"]
description: A suite of AI-powered digital surveillance systems involving facial recognition and analysis of biometric data were deployed by the Chinese government in Xinjiang to monitor and discriminate against local Uyghurs and other Turkic Muslims.
title: Government Deployed Extreme Surveillance Technologies to Monitor and Target Muslim Minorities in Xinjiang
year: 2016
spacy_negative_outcomes: facial recognition
keybert_negative_outcomes: facial recognition
Cluster: system, recognition, facial
_id: ObjectId(62e0cfd6a6f43c979e428116)
incident_id: 254
date: 2015-05-01T00:00:00
reports: [1804,2069]
Alleged deployer of AI system: ["google"]
Alleged developer of AI system: ["google"]
Alleged harmed or nearly harmed parties: ["google-photos-users-residing-in-illinois","google-photos-users","illinois-residents"]
description: A class-action lawsuit alleged that Google failed to provide notice, obtain informed written consent, or publish data-retention policies regarding the collection, storage, and analysis of facial data by the face-grouping feature in Google Photos, in violation of the Illinois Biometric Information Privacy Act (BIPA).
title: Google’s Face Grouping Allegedly Collected and Analyzed Users’ Facial Structure without Consent, Violated BIPA
year: 2015
spacy_negative_outcomes: its face-grouping feature
keybert_negative_outcomes: action lawsuit
Cluster: bias, content, false
_id: ObjectId(62e53195eac42ca1004d3eea)
incident_id: 256
date: 2021-11-07T00:00:00
reports: [1814]
Alleged deployer of AI system: ["chicago-police-department"]
Alleged developer of AI system: ["shotspotter"]
Alleged harmed or nearly harmed parties: ["chicago-drivers"]
description: A car stop resulting in a DUI arrest of its driver was allegedly based solely on a ShotSpotter alert, the reliability of which came into question by public defenders, who subpoenaed the company to assess its gunshot alert system.
title: DUI Arrest Case Allegedly Based Only on ShotSpotter's Alert
year: 2021
spacy_negative_outcomes: the reliability
keybert_negative_outcomes: car stop
Cluster: bias, content, false
_id: ObjectId(62e0afa523ef2c676c22b9b5)
incident_id: 250
date: 2016-02-01T00:00:00
reports: [1793]
Alleged deployer of AI system: ["castricum-municipality"]
Alleged developer of AI system: ["castricum-municipality"]
Alleged harmed or nearly harmed parties: ["unnamed-property-owner"]
description: A home value generated by a black-box algorithm was reportedly defended by the Castricum court, which was criticized by a legal specialist for setting a dangerous precedent for accepting black-box algorithms as long as their results appear reasonable.
title: Dutch City Court Defended Home Value Generated by Black-Box Algorithm
year: 2016
spacy_negative_outcomes: a dangerous precedent
keybert_negative_outcomes: dangerous precedent
Cluster: bias, content, false
_id: ObjectId(62e6193e981a526a0059d8cf)
incident_id: 258
date: 2022-05-13T00:00:00
reports: [1819,1820]
Alleged deployer of AI system: ["the-good-guys","kmart","bunnings"]
Alleged developer of AI system: ["unknown"]
Alleged harmed or nearly harmed parties: ["the-good-guys-customers","kmart-customers","bunnings-customers"]
description: Major Australian retailers reportedly analyzed in-store footage to capture facial features of their customers without consent, which was criticized by consumer groups as creepy and invasive.
title: Australian Retailers Reportedly Captured Face Prints of Their Customers without Consent
year: 2022
spacy_negative_outcomes: facial features
keybert_negative_outcomes: facial features
Cluster: system, recognition, facial
_id: ObjectId(62f2041455716343a41ffd1a)
incident_id: 274
date: 2003-07-01T00:00:00
reports: [1857,1859]
Alleged deployer of AI system: ["virginia-courts"]
Alleged developer of AI system: ["virginia-department-of-criminal-justice-services"]
Alleged harmed or nearly harmed parties: ["virginia-convicted-felons","virginia-black-offenders","virginia-young-offenders"]
description: Virginia courts’ use of algorithmic predictions of future offending risk was found by researchers to have failed to reduce incarceration rates, to have shown racial and age disparities in risk scores and their application, and to have neither exacerbated nor ameliorated historical racial differences in sentencing.
title: Virginia Courts’ Algorithmic Recidivism Risk Assessment Failed to Lower Incarceration Rates
year: 2003
spacy_negative_outcomes: racial and age disparities
keybert_negative_outcomes: historical racial differences
Cluster: bias, content, false
_id: ObjectId(62ea1f06e98668f5186fdbb5)
incident_id: 267
date: 2017-06-15T00:00:00
reports: [1845,1846,1847,1848,2101,2141,2142,2143,2144,2226]
Alleged deployer of AI system: ["clearview-ai"]
Alleged developer of AI system: ["clearview-ai"]
Alleged harmed or nearly harmed parties: ["social-media-users","instagram-users","facebook-users"]
description: Clearview AI’s face-matching algorithm was built using images scraped without user consent from social media sites such as Instagram and Facebook, violating the sites’ policies and allegedly privacy regulations.
title: Clearview AI Algorithm Built on Photos Scraped from Social Media Profiles without Consent
year: 2017
spacy_negative_outcomes: scraped images
keybert_negative_outcomes: clearview ai
Cluster: bias, content, false
ObjectId(62e6066deac42ca100acf90b)
257
2012-05-04T00:00:00
[1815,1435,1821,2250]
["kansas-city-police-department","cleveland-division-of-police","chicago-police-department","atlanta-police-department"]
["shotspotter"]
["neighborhoods-of-color","brown-communities","black-communities","adam-toledo"]
Police departments disproportionately placed ShotSpotter sensors in Black and brown neighborhoods, which communities denounced for allegedly creating dangerous situations, such as the one involving Adam Toledo's death.
Police Reportedly Deployed ShotSpotter Sensors Disproportionately in Neighborhoods of Color
2,012
dangerous situations
dangerous situations
bias, content, false
ObjectId(62e8aafa2db96a8ab9f74396)
264
2022-03-01T00:00:00
[1839]
["speedcam-anywhere"]
["speedcam-anywhere"]
["uk-drivers"]
Speedcam Anywhere, an app allowing users to document and report traffic violations via AI-based videographic speed estimation of a vehicle, raised concerns for UK drivers about its capabilities for surveillance and abuse.
AI-Based Vehicle Speed Estimation App Denounced by UK Drivers as Surveillance Technology
2,022
its capabilities
speedcam
bias, content, false
ObjectId(62e0babc23ef2c676c23eca3)
252
2022-06-01T00:00:00
[1795]
["none"]
["axon-enterprise"]
["us-schools","us-students"]
Axon Enterprise considered developing remotely operated drones capable of tasering a target a short distance away as a defense mechanism for mass shootings, despite its internal AI ethics board’s previous objection and condemnation of the idea as dangerous and fantastical.
Remotely Operated Taser-Armed Drones Proposed by Taser Manufacturer as Defense for School Shootings in the US
2,022
a target
target
bias, content, false
ObjectId(62f1fd1da076fc957e0b0b26)
272
2019-10-08T00:00:00
[1853,1854,1855]
["grab"]
["grab"]
["non-tpi-registered-grab-drivers","grab-drivers-in-indonesia","grab-drivers"]
Grab Indonesia was fined by the Indonesian Competition Commission (KPPU) for unfairly favoring drivers who rented cars via the Grab-affiliated company Teknologi Pengangkutan Indonesia (TPI), including offering more rides via their matchmaking algorithm.
Grab Tweaked Matchmaking Algorithm, Providing Preferential Treatment to Drivers Registered with Affiliated Car Rental Service
2,019
more rides
grab
bias, content, false
ObjectId(62e7c7750a8b81000ba6c913)
260
2014-08-26T00:00:00
[1823,1831]
["us-department-of-homeland-security","us-citizenship-and-immigration-services"]
["us-citizenship-and-immigration-services"]
["us-naturalized-citizens","us-immigrants","us-citizenship-applicants","us-immigration-applicants"]
US Citizenship and Immigration Services (USCIS)’s ATLAS software used in vetting immigration requests was condemned by advocacy groups as a threat to naturalized citizens for its secretive algorithmic decision-making, its reliance on poor-quality data and unknown sources, and its alleged discrimination against immigrants using biometric and sensitive information.
US DHS’s Opaque Vetting Software Allegedly Relied on Poor-Quality Data and Discriminated against Immigrants
2,014
alleged discrimination
uscis
bias, content, false
ObjectId(62e7cd5d138e1db3a1511a0a)
261
2017-11-15T00:00:00
[1824,1825,1826,1827,1828,1829,1830,1832]
["society-for-the-prevention-of-cruelty-to-animals"]
["knightscope"]
["san-francisco-homeless-people"]
Society for the Prevention of Cruelty to Animals (SPCA) deployed a Knightscope robot to autonomously patrol the area outside its office and ward off homeless people, which was criticized by residents as a tool of intimidation and ordered by the city of San Francisco to stop its use on a public right-of-way.
Robot Deployed by Animal Shelter to Patrol Sidewalks outside Its Office, Warding off Homeless People in San Francisco
2,017
its use
cruelty
bias, content, false
ObjectId(62e7dd265f2757eae1db9659)
262
2022-06-11T00:00:00
[1833,1834,1835,1836]
["boris-dayma"]
["boris-dayma","suraj-patil","pedro-cuenca","khalid-saifullah","tanishq-abraham","phuc-le-khac","luke-melas","ritobrata-ghosh"]
["minority-groups","underrepresented-groups"]
Publicly deployed open-source model DALL-E Mini was acknowledged by its developers and found by its users to have produced images which reinforced racial and gender biases.
DALL-E Mini Reportedly Reinforced or Exacerbated Societal Biases in Its Outputs as Gender and Racial Stereotypes
2,022
racial and gender biases
gender biases
bias, content, false
ObjectId(62e89ea3088b12099cc26a44)
263
2015-09-01T00:00:00
[1838]
["youtube"]
["youtube"]
["youtube-young-male-users","youtube-male-users","caleb-cain"]
YouTube’s personalization and recommendation algorithms were alleged to have pushed and exposed its young male users to political extremism and misinformation, driving them towards far-right ideologies such as neo-Nazism and white supremacy.
YouTube Recommendations Implicated in Political Radicalization of User
2,015
political extremism
personalization
bias, content, false
ObjectId(62e8b8575890dc007562661a)
265
2021-04-01T00:00:00
[1840,1841]
["uber-eats"]
["uber-eats"]
["pa-edrissa-manjang","uber-eats-black-delivery-drivers"]
A lawsuit by a former Uber Eats delivery driver alleged that the company wrongfully dismissed him due to frequent false mismatches of his verification selfies and discriminated against him via excessive verification checks.
Black Uber Eats Driver Allegedly Subjected to Excessive Photo Checks and Dismissed via FRT Results
2,021
A lawsuit
lawsuit
bias, content, false
ObjectId(62f3498bfa57b6f30ec2f015)
278
2022-08-07T00:00:00
[1866,1867,1868]
["meta"]
["meta"]
["jewish-people","blenderbot-3-users"]
The publicly launched conversational AI demo BlenderBot 3 developed by Meta was reported by its users and acknowledged by its developers to have “occasionally” made offensive and inconsistent remarks such as invoking Jewish stereotypes.
Meta’s BlenderBot 3 Chatbot Demo Made Offensive Antisemitic Comments
2,022
offensive and inconsistent remarks
inconsistent remarks
bias, content, false
ObjectId(62f2041c55716343a41ffe03)
275
2020-06-11T00:00:00
[1858,1860]
["facebook"]
["facebook"]
["facebook-users-sharing-photo-evidence-of-slavery","facebook-users"]
Facebook’s automated content moderation was acknowledged by a company spokesperson to have erroneously censored and banned Australian users from posting an article containing a 1890s photo of Aboriginal men in chains over nudity as historical evidence of slavery in Australia.
Facebook’s Moderation Algorithm Banned Users for Historical Evidence of Slavery
2,020
an article
article
bias, content, false
ObjectId(62f490d40658670483cb1691)
285
2022-07-18T00:00:00
[1888]
["google"]
["google"]
["google-lens-users"]
A book title by Korea’s first minister of culture was mistranslated into an offensive phrase by Google Lens’s camera-based translation feature allegedly due to its training on internet communications and a lack of context.
Google Lens’s Camera-Based Translation Feature Provided an Offensive Mistranslation of a Book Title in Korean
2,022
a lack
offensive phrase
bias, content, false
ObjectId(631975522a90260e9e4f5fc2)
330
2016-12-15T00:00:00
[2017]
["amazon"]
["amazon"]
["amazon-users"]
Amazon’s “Amazon’s Choice” algorithm recommended poor-quality, defective products and was reportedly susceptible to manipulation by inauthentic reviews.
“Amazon’s Choice” Algorithm Failed to Recommend Functional Products and Was Prone to Review Manipulation
2,016
inauthentic reviews
inauthentic reviews
bias, content, false
ObjectId(63033a8281052814ccec9f7b)
299
2020-12-15T00:00:00
[1935]
["masayuki-nakamoto"]
["unknown"]
["japanese-pornographic-actors"]
A man allegedly used deepfake technology to unblur pixelated pornographic images and videos of pornographic actors, violating Japan’s obscenity law requiring images of genitalia to be obscured.
Japanese Porn Depixelated by Man Using Deepfake
2,020
pixelated pornographic images
pornographic images
bias, content, false
ObjectId(630f23807a8f2c2b4eece314)
323
2018-05-29T00:00:00
[1992,1193,2006,2514]
["tesla"]
["tesla"]
["laguna-beach-police-department"]
A Tesla sedan on Autopilot mode collided with a parked Laguna Beach Police Department car, resulting in minor injuries for its driver in Laguna Beach, California.
Tesla on Autopilot Crashed into Parked Police Car in California
2,018
its driver
autopilot mode
bias, content, false
ObjectId(62fa0340d2713a7e8de5b15c)
293
2022-06-03T00:00:00
[1907,1908,1909,1910,1996,1997,2016]
["cruise"]
["cruise"]
["cruise-passengers","toyota-prius-passengers"]
A Cruise autonomous vehicle was involved in a crash at an intersection in San Francisco when making a left turn in front of a Toyota Prius traveling in an opposite direction, which caused occupants in both cars to sustain injuries.
Cruise’s Self-Driving Car Involved in a Multiple-Injury Collision at a San Francisco Intersection
2,022
a crash
crash
year, risk, crash
ObjectId(631712bba7aa86620c9a0f2f)
325
2017-09-21T00:00:00
[2004,2005]
["facebook"]
["facebook"]
["olivia-solon","olivia-solon's-facebook-connections"]
An Instagram user’s image containing violent content was reportedly used as an advertisement on Facebook, allegedly via automated means.
Offensive Instagram User Content Displayed as Facebook Ad
2,017
violent content
violent content
bias, content, false
ObjectId(631842c8d84017ad42c8e764)
328
2020-06-13T00:00:00
[2014,2032]
["spamouflage-dragon"]
["unknown"]
["facebook-users","twitter-users","youtube-users"]
A pro-China propaganda campaign deployed fake accounts on Facebook, Twitter, and YouTube using GAN-synthesized faces to share and post comments on its content to gain wider circulation.
Fake Accounts Using GAN Faces Deployed by Propaganda Campaign on Social Platforms
2,020
fake accounts
fake accounts
bias, content, false
ObjectId(633d477a7d6871136596b7b5)
347
2021-05-06T00:00:00
[2060,2098,2099]
["waymo"]
["waymo"]
["waymo-passengers"]
A Waymo self-driving taxi car was shown on video stranded on a road in Arizona while carrying a passenger, suddenly drove away from the company's roadside assistance worker, and ended up being stuck farther down the road.
Waymo Self-Driving Taxi Behaved Unexpectedly, Driving away from Support Crew
2,021
a road
waymo self
bias, content, false
ObjectId(6342883afb9dbe61e43fc839)
350
2022-09-13T00:00:00
[2067,2094]
["serve-robotics"]
["serve-robotics"]
["police-investigators"]
A Serve Robotics delivery robot was shown on video rolling through a crime scene blocked off by police tape.
Delivery Robot Rolled Through Crime Scene
2,022
video rolling
video
bias, content, false
ObjectId(62f9f0c20127873b4a6fef3f)
291
2021-05-28T00:00:00
[1901,1902,1903,2242,2590,2604]
["tesla"]
["tesla"]
["california-department-of-motor-vehicles","tesla-customers","california-residents"]
California’s Department of Motor Vehicles (DMV) accused Tesla of false advertising in its promotion of Autopilot and Full Self-Driving (FSD) technologies, alleging the company to have made untrue or misleading claims with marketing language about the capabilities of its products.
Tesla Allegedly Misled Customers about Autopilot and FSD Capabilities
2,021
false advertising
false advertising
bias, content, false
ObjectId(62fa0c330127873b4a73e660)
294
2018-05-26T00:00:00
[1911,1912,1913,1914]
["tesla"]
["tesla"]
["you-you-xue","tesla-drivers"]
Autopilot was alleged by its Tesla Model 3 driver to have unexpectedly malfunctioned, veering right without warning and crashing into a road divider near Thessaloniki, Greece, resulting in damage to the car’s wheel and door but no injury to the driver.
Tesla Autopilot Allegedly Malfunctioned in a Non-Fatal Collision in Greece
2,018
its wheel
injury
bias, content, false
ObjectId(62f36a72c17fe69fd2162681)
280
2013-07-30T00:00:00
[1872,1873]
["coffee-meets-bagel"]
["coffee-meets-bagel"]
["coffee-meets-bagel-users-having-no-ethnicity-preference","coffee-meets-bagel-users"]
Users selecting “no preference” were shown by Coffee Meets Bagel’s matching algorithm more potential matches of their own ethnicity, which its founder acknowledged and justified as a means to maximize connection rates without sufficient user information.
Coffee Meets Bagel’s Algorithm Reported by Users to Disproportionately Show Them Matches of Their Own Ethnicities Despite Selecting “No Preference”
2,013
no preference
preference
bias, content, false
ObjectId(62f4bc3a77f5af9ce4624221)
287
2020-10-27T00:00:00
[2471]
["none"]
["openai","nabla"]
["nabla-customers"]
The French digital care company Nabla, in researching GPT-3’s capabilities for medical documentation, diagnosis support, and treatment recommendation, found it unviable and risky for healthcare applications due to its inconsistency and lack of scientific and medical expertise. This incident has been downgraded to an issue as it does not meet current ingestion criteria.
OpenAI’s GPT-3 Reported as Unviable in Medical Tasks by Healthcare Firm
2,020
its inconsistency
nabla
bias, content, false
ObjectId(62fc9ebb7f039040988f789c)
295
2018-11-08T00:00:00
[1915,1916,1917,2102,2103]
["new-york-police-department"]
["unknown"]
["ousmane-bah","nyc-black-people","nyc-black-young-people"]
New York Police Department (NYPD)’s facial recognition system falsely connected a Black teenager to a series of thefts at Apple stores, which resulted in his wrongful attempted arrest.
Wrongful Attempted Arrest for Apple Store Thefts Due to NYPD’s Facial Misidentification
2,018
a series
thefts
bias, content, false
ObjectId(630de2a9c9d2246424b8bc01)
320
2018-01-22T00:00:00
[1985,1986,1987,2007]
["tesla"]
["tesla"]
["tesla-drivers","culver-city-fire-department"]
A Tesla Model S operating on Autopilot mode crashed into the back of a parked fire truck on a freeway in Culver City, California in a non-fatal collision.
Tesla on Autopilot Collided with Parked Fire Truck on California Freeway
2,018
the back
back
bias, content, false
ObjectId(63182b55d84017ad42c5406f)
326
2014-12-09T00:00:00
[2008,2012,2013]
["facebook"]
["facebook"]
["facebook-users-having-posts-about-painful-events","facebook-users"]
Facebook’s “Year in Review” algorithm, which compiled content from users’ past year as highlights, inadvertently showed users painful and unwanted memories, including the death of a family member.
Facebook Automated Year-in-Review Highlights Showed Users Painful Memories
2,014
painful and unwanted memories
death
bias, content, false
ObjectId(633412840b988074a09c7ee0)
335
2015-03-01T00:00:00
[2047,2090,2091,2092,2122,2123,2124,2125]
["uk-visas-and-immigration"]
["uk-visas-and-immigration","uk-home-office"]
["uk-visa-applicants-from-some-countries"]
UK Home Office's algorithm to assess visa application risks explicitly considered nationality, allegedly causing applicants from some countries to face more scrutiny and discrimination.
UK Visa Streamline Algorithm Allegedly Discriminated Based on Nationality
2,015
more scrutiny
discrimination
bias, content, false
ObjectId(62f9fe883c16ab9cc78cf737)
292
2021-09-01T00:00:00
[1904,1905,1906]
["apple"]
["apple"]
["silicon-valley-traffic-participants","silicon-valley-residents"]
Apple’s autonomous cars were reported to have bumped into curbs and struggled to stay in their lanes after crossing intersections during on-road test drives near the company’s Silicon Valley headquarters.
Apple’s AVs Reportedly Struggled to Navigate Streets in Silicon Valley Test Drives
2,021
their lanes
curbs
bias, content, false
ObjectId(63033d3581052814cceda4de)
300
2022-01-15T00:00:00
[1936,1937]
["tiktok"]
["tiktok"]
["tiktok-male-teenager-users","tiktok-male-users","tiktok-teenage-users","tiktok-users","tiktok"]
TikTok’s “For You” algorithm was allegedly manipulated by an online personality to artificially boost his content, which promoted extreme misogynistic views to teenage and male users despite breaking the platform’s rules.
TikTok's "For You" Algorithm Allegedly Abused by Online Personality to Promote Anti-Women Hate
2,022
extreme misogynistic views
extreme misogynistic views
bias, content, false
ObjectId(630dc67fb5b628f76fd964bc)
318
2021-01-13T00:00:00
[1977,1978]
["facebook"]
["facebook"]
["facebook-users"]
Facebook’s algorithmic recommendations reportedly continued showing advertisements for gun accessories and military gear, despite Facebook’s halt on weapons accessories ads following the US Capitol attack.
Facebook Recommended Military Gear Ads Despite Pause on Weapons Accessories Ads
2,021
Facebook’s halt
advertisements
bias, content, false
ObjectId(630dd3b7f5504b7e75aad64f)
319
2019-12-29T00:00:00
[1981,1982,1983,1984,1993]
["tesla"]
["tesla"]
["derrick-monet","jenna-monet","the-monets'-family"]
A Tesla on Autopilot mode failed to see a parked fire truck and crashed into its rear on an interstate in Indiana, causing the death of an Arizona woman.
Tesla on Autopilot Fatally Crashed into Parked Fire Truck in Indiana
2,019
the death
death
bias, content, false
ObjectId(632056a45857ae71d0616e65)
332
2016-04-05T00:00:00
[2033,2040,2041,2044]
["google"]
["google"]
["black-women","black-people","google-users"]
Google Image search reportedly showed disparate results along racial lines, featuring almost exclusively white women for “professional hairstyles” and black women for “unprofessional hairstyles” prompts.
Google Image Showed Racially Biased Results for “Professional” Hairstyles
2,016
racial lines
racial lines
bias, content, false
ObjectId(633a8fd3c70e5740bfbf5e4a)
340
2017-02-01T00:00:00
[2053]
["honda"]
["honda"]
["honda-customers"]
Honda's Collision Mitigation Braking System (CMBS) allegedly caused accidents for its customers due to frequent instances of false obstacle detection.
Honda's CMBS False Positives Allegedly Caused Accidents to Customers
2,017
false obstacle detection
accidents
bias, content, false
ObjectId(633aaae14178615128e595a2)
341
2017-04-06T00:00:00
[2054,2114,2198,2199,2200]
["nissan"]
["nissan"]
["nissan-drivers","traffic-participants"]
Nissan's Automatic Emergency Braking (AEB) feature was reported in a series of complaints for false positives and abrupt braking behaviors, endangering car occupants and traffic participants.
Nissan's "Automatic Emergency Braking" False Positives Posed Traffic Risks to Drivers
2,017
false positives
complaints
bias, content, false
ObjectId(63283d3b5ba952a8677615a3)
334
2014-10-01T00:00:00
[2046,2077]
["uber"]
["uber"]
["local-law-enforcement-officers"]
Uber developed a secret program, "Greyball," which denied rides to known law enforcement officers in areas where its service violated regulations.
Uber Deployed Secret Program To Deny Local Authorities Rides
2,014
a secret program
greyball
bias, content, false
ObjectId(63036e545f65af7ded38efea)
305
2019-02-01T00:00:00
[1942,1943]
["youtube"]
["youtube"]
["youtube-users","youtube-climate-skeptic-users"]
YouTube’s recommendation system and its focus on views and watch time were alleged by an advocacy group to have driven people towards climate denial and misinformation videos.
YouTube’s Recommendation Algorithm Allegedly Promoted Climate Misinformation Content
2,019
its focus
misinformation videos
bias, content, false
ObjectId(62fcc26b77f5af9ce4dee4b0)
297
2020-02-20T00:00:00
[1921,1922,1923]
["smart-columbus"]
["easymile"]
["unnamed-woman-passenger"]
A self-driving shuttle deployed by Smart Columbus in the Linden neighborhood unexpectedly stopped on the street, causing a woman to fall from her seat onto the floor.
EasyMile Self-Driving Shuttle Unexpectedly Stopped Mid-Route, Injuring a Passenger
2,020
the floor
floor
bias, content, false
ObjectId(62f3c064867302aca4f382fc)
282
2020-10-03T00:00:00
[1879,1881,1972]
["facebook"]
["facebook"]
["the-seed-company-by-e.w.-gaze","businesses-on-facebook"]
Facebook’s content moderation algorithm misidentified a Canadian business’s advertisement containing a photo of onions as overtly sexual content and removed it; the ad was later reinstated after review.
Facebook’s Algorithm Mistook an Advertisement of Onions for Sexually Suggestive Content
2,020
overtly sexual content
sexual content
bias, content, false
ObjectId(62f24ab4fa57b6f30e8dc738)
276
2022-01-01T00:00:00
[1864]
["bucheon-city-government"]
["unknown"]
["bucheon-citizens"]
The Bucheon government’s use of facial recognition to analyze CCTV footage, despite gaining wide public support, was scrutinized by privacy advocates and some lawmakers for collecting data without consent and for retaining and misusing data beyond pandemic needs.
Local South Korean Government’s Use of CCTV Footage Analysis via Facial Recognition to Track COVID Cases Raised Concerns about Privacy, Retention, and Potential Misuse
2,022
pandemic needs
misusing data
bias, content, false