Dataset columns (type and observed range, per the viewer statistics):

_id: string (length 34)
incident_id: int64 (1 to 524)
date: timestamp[ns]
reports: string (length 4 to 191)
Alleged deployer of AI system: string (length 7 to 214)
Alleged developer of AI system: string (length 7 to 127)
Alleged harmed or nearly harmed parties: string (length 8 to 371)
description: string (length 50 to 371)
title: string (length 6 to 170)
year: int64 (1.98k to 2.02k)
spacy_negative_outcomes: string (length 3 to 54)
keybert_negative_outcomes: string (length 2 to 41)
Cluster: string (5 distinct values)
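Each record below is a flat run of 13 values in the column order above. A minimal Python sketch of regrouping such a flattened export into per-record dicts (the 13-field ordering and the abridged sample record are assumptions drawn from this dump, not an official loader):

```python
# Minimal sketch: regroup a flattened dataset-viewer export (13 values
# per record, in the column order listed above) into per-record dicts.
# COLUMNS and the abridged sample are assumptions based on this dump.
COLUMNS = [
    "_id", "incident_id", "date", "reports",
    "Alleged deployer of AI system", "Alleged developer of AI system",
    "Alleged harmed or nearly harmed parties",
    "description", "title", "year",
    "spacy_negative_outcomes", "keybert_negative_outcomes", "Cluster",
]

def regroup(values):
    """Yield one dict per record; assumes len(values) is a multiple of 13."""
    for i in range(0, len(values), len(COLUMNS)):
        yield dict(zip(COLUMNS, values[i:i + len(COLUMNS)]))

# Abridged first record from the dump.
sample = [
    "ObjectId(6380634e19b54579646bd24c)", 403, "2018-01-15T00:00:00",
    "[2282,2283]", '["google"]', '["google"]',
    '["political-organizations","political-candidates"]',
    "Gmail's inbox sorting reportedly impacted political emails.",
    "GMail's Inbox Sorting Reportedly Negatively Impacted Political Emails",
    2018, "negative impact", "inbox", "bias, content, false",
]
records = list(regroup(sample))
print(records[0]["incident_id"])  # prints 403
```

For the real export, the list/JSON-encoded fields (reports, deployers, developers, harmed parties) would still need decoding, e.g. with `json.loads`.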
ObjectId(6380634e19b54579646bd24c)
403
2018-01-15T00:00:00
[2282,2283]
["google"]
["google"]
["political-organizations","political-candidates"]
Google Gmail's inbox-sorting algorithm for political emails was reported by presidential candidates, nonprofits, and advocacy groups to have negatively impacted calls to action, allegedly suppressing donations and impeding political activity.
GMail's Inbox Sorting Reportedly Negatively Impacted Political Emails and Call-to-Actions
2018
negative impact
inbox
bias, content, false
ObjectId(638d8f8f77887182b3eaf554)
410
2022-11-09T00:00:00
[2312]
["kfc"]
["kfc"]
["jewish-people"]
KFC cited an error in an automated holiday detection system which identified the anniversary of Kristallnacht and prompted an insensitive push notification promoting its chicken.
KFC Sent Insensitive Kristallnacht Promotion via Holiday Detection System
2022
an error
insensitive push notification
bias, content, false
ObjectId(638d9b0c77887182b3edfc95)
411
2022-11-27T00:00:00
[2314]
["twitter"]
["twitter"]
["twitter-users","twitter"]
Twitter's feed was flooded with content from Chinese-language accounts that allegedly aimed to manipulate and reduce social media coverage of widespread protests against coronavirus restrictions in China.
Chinese Accounts Spammed Twitter Feed Allegedly to Obscure News of Protests
2022
reduce
language accounts
bias, content, false
ObjectId(638da45077887182b3f05041)
412
2020-01-15T00:00:00
[2315,2408,2409,2410]
["finland-national-bureau-of-investigation"]
["clearview-ai"]
["finland-national-bureau-of-investigation"]
Finland's National Police Board was reprimanded for illegal processing of special categories of personal data in a facial recognition trial to identify potential victims of child sexual abuse.
Finland Police's Facial Recognition Trial to Identify Sexual Abuse Victims Deemed Illegal
2020
illegal processing
sexual abuse
bias, content, false
ObjectId(6390303a92c6c9d416e8aa59)
413
2022-11-30T00:00:00
[2317,2318,2586]
["openai"]
["openai"]
["stack-overflow-users","stack-overflow"]
Thousands of incorrect answers produced by OpenAI's ChatGPT were submitted to Stack Overflow, which swamped the site's volunteer-based quality curation process and harmed users looking for correct answers.
Thousands of Incorrect ChatGPT-Produced Answers Posted on Stack Overflow
2022
incorrect answers
incorrect answers
bias, content, false
ObjectId(639035ff6c1caba4d11eff3a)
414
2020-01-18T00:00:00
[2319]
["facebook"]
["facebook"]
["xi-jinping","aung-san-suu-kyi"]
Facebook provided a vulgar Burmese-to-English translation of the Chinese president's name in posts on an official Burmese politician's Facebook page announcing his visit.
Facebook Gave Vulgar English Translation of Chinese President's Name
2020
his visit
official burmese politician
bias, content, false
ObjectId(63903d03fca1bb88915db189)
415
2020-07-28T00:00:00
[2320,2404,2405,2406,2407]
["facebook"]
["facebook"]
["live-stream-ceremony-viewers","king-maha-vajiralongkorn"]
Facebook's Thai-English translation produced an inappropriate mistranslation during Thai PBS's Facebook live broadcast of the King of Thailand's candle-lighting birthday ceremony.
Facebook Provided Offensive Translation for King of Thailand's Birthday Ceremony
2020
an inappropriate mistranslation
thai pbs
bias, content, false
ObjectId(6390466a92c6c9d416ed408c)
416
2022-12-01T00:00:00
[2321,2402,2403]
["meta-platforms","facebook"]
["meta-platforms","facebook"]
["real-women-in-trucking","older-female-blue-collar-workers"]
Facebook's algorithm was alleged in a complaint by Real Women in Trucking to have selectively shown blue-collar job advertisements, disproportionately excluding older and female workers in favor of younger men.
Facebook's Job Ad Algorithm Allegedly Biased against Older and Female Workers
2022
a complaint
advertisements
bias, content, false
ObjectId(639051eb3a5ad5b61c1dbedc)
417
2019-11-15T00:00:00
[2322,2399,2400,2401]
["facebook"]
["facebook"]
["low-digitally-skilled-facebook-users"]
Facebook's feed algorithms were known through internal research to have harmed people with low digital literacy by exposing them to disturbing content they did not know how to avoid or monitor.
Facebook Feed Algorithms Exposed Low Digitally Skilled Users to More Disturbing Content
2019
low digital literacy
content
bias, content, false
ObjectId(639607c6d7265aae7cc6c9c4)
418
2017-03-13T00:00:00
[2324,2391,2392]
["uber"]
["uber","azure-cognitive-services"]
["uber-drivers-in-india"]
Uber drivers in India reported being locked out of their accounts allegedly due to Real-Time ID Check's facial recognition failing to recognize appearance changes or faces in low lighting conditions.
Uber Locked Indian Drivers out of Accounts Allegedly Due to Facial Recognition Fails
2017
appearance changes
appearance changes
bias, content, false
ObjectId(63960c84e31c3c9ac8bddb43)
419
2022-12-01T00:00:00
[2325,2395,2396]
["facebook"]
["facebook"]
["facebook-users"]
Facebook's automated moderation system failed to flag ads containing explicitly violent language against election workers, allowing them to be published.
Facebook's Automated Moderation Allowed Ads Threatening Election Workers to be Posted
2022
explicit violent language
explicit violent language
bias, content, false
ObjectId(63961322bc45a2dda74096cf)
420
2022-11-30T00:00:00
[2326,2358,2393,2394,2397,2554,2644,2649,2662,2852,2863]
["openai"]
["openai"]
["chatgpt-users","openai"]
Users reported bypassing ChatGPT's content and keyword filters with relative ease using various methods such as prompt injection or creating personas to produce biased associations or generate harmful content.
Users Bypassed ChatGPT's Content Filters with Ease
2022
biased associations
harmful content
bias, content, false
ObjectId(6396cb91a3cf41b531248ab4)
421
2022-11-20T00:00:00
[2328,2427,2444,2523,2577,2607,2608,2446,2618]
["stability-ai","lensa-ai","midjourney","deviantart"]
["stability-ai","runway","lensa-ai","laion","eleutherai","compvis-lmu"]
["digital-artists","artists-publishing-on-social-media","artists"]
The text-to-image model Stable Diffusion reportedly used artists' original works without permission as AI training data.
Stable Diffusion Allegedly Used Artists' Works without Permission for AI Training
2022
using
text
bias, content, false
ObjectId(639d76678dccddb2440cb810)
422
2022-11-22T00:00:00
[2330]
["unknown"]
["unknown"]
["victims-of-ftx's-collapse","twitter-users"]
A visual and audio deepfake of former FTX CEO Sam Bankman-Fried was posted on Twitter to scam victims of the exchange's collapse by urging people to transfer funds into an anonymous cryptocurrency wallet.
Deepfake of FTX's Former CEO Posted on Twitter Aiming to Scam FTX Collapse Victims
2022
the exchange's collapse
funds
figures, exchange, collapse
ObjectId(639d7afe17b5cfae855ae501)
423
2022-11-22T00:00:00
[2331,2376,2390,2445,2446]
["lensa-ai"]
["stability-ai","runway","lensa-ai","laion","eleutherai","compvis-lmu"]
["women-using-lensa-ai","asian-women-using-lensa-ai"]
Lensa AI's "Magic Avatars" reportedly generated sexually explicit and sexualized features disproportionately for women, and Asian women in particular, even though users had not submitted any sexual content.
Lensa AI Produced Unintended Sexually Explicit or Suggestive "Magic Avatars" for Women
2022
sexually explicit and sexualized features
sexualized features
bias, content, false
ObjectId(639d8660af03da8f83856b47)
424
2020-03-09T00:00:00
[2332,2386,2387,2388]
["canadian-universities"]
["respondus-monitor","proctoru","proctortrack","proctorio","proctorexam","examity"]
["canadian-students"]
AI proctoring tools used by Canadian universities for remote exams during the COVID pandemic were reportedly "not conducive" to obtaining individual consent from students whose biometric data was collected.
Universities' AI Proctoring Tools Allegedly Failed Canada's Legal Threshold for Consent
2020
the COVID pandemic
individual consent
bias, content, false
ObjectId(639d8b228dccddb244103c16)
425
2021-06-12T00:00:00
[2333,2385]
["state-farm"]
["state-farm"]
["black-state-farm-customers"]
State Farm's automated claims processing method was alleged in a class action lawsuit to have discriminated against Black policyholders when paying out insurance claims.
State Farm Allegedly Discriminated against Black Customers in Claim Payout
2021
a class action lawsuit
class action lawsuit
bias, content, false
ObjectId(639d8eacaf03da8f8386de79)
426
2022-09-23T00:00:00
[2334]
["xpeng"]
["xpeng"]
["xpeng-driver"]
An XPeng P7 operating in Navigation Guided Pilot (NGP) mode, an automatic navigation-assisted driving system, collided with a truck on a highway in Shandong, slightly injuring its driver.
XPeng P7 Crashed into Truck in Shandong While on Automatic Navigation Assisted Driving
2022
driving system
driver
bias, content, false
ObjectId(63a01efd17b5cfae85ce886b)
427
2022-03-15T00:00:00
[2335,2382,2383]
["cruise"]
["cruise"]
["traffic-participants","emergency-vehicles","cruise-passengers","cruise"]
Cruise's autonomous taxis slowed suddenly, braked, and were hit from behind, allegedly becoming unexpected roadway obstacles and potentially putting passengers and other people at risk.
Cruise Taxis' Sudden Braking Allegedly Put People at Risk
2022
unexpected roadway obstacles
risk
bias, content, false
ObjectId(63a171f784adbad7e335e126)
428
2017-05-19T00:00:00
[2341,2379,2380]
["hsbc-uk"]
["nuance-communications"]
["hsbc-uk-customers","dan-simmons"]
HSBC's voice recognition authentication system was fooled after seven attempts by a BBC reporter's twin brother, who mimicked the reporter's voice to access his bank account.
BBC Reporter's Twin Brother Cracked HSBC's Voice ID Authentication
2017
his voice
voice
year, own, voice
ObjectId(63a17dd9d4f686c7f9a40bea)
429
2016-04-01T00:00:00
[2343,1816,2377,2378]
["rochester-police-department"]
["shotspotter"]
["silvon-simmons"]
ShotSpotter's "unreliable" audio was used as scientific evidence to accuse and convict a Black man of attempting to shoot Rochester's city police, whose conviction was later reversed by a county judge.
Unreliable ShotSpotter Audio Convicted Black Rochester Man of Shooting Police
2016
ShotSpotter's "unreliable" audio
conviction
bias, content, false
ObjectId(63a37b8fd67db98e62e5ae99)
430
2022-12-19T00:00:00
[2346,2355,2359,2360,2361,2362,2363,2364,2365,2366,2367,2368,2504,2556,2557,2589,2600,2665,2728,2775,2797]
["madison-square-garden-entertainment"]
["unknown"]
["kelly-conlon","alexis-majano"]
Lawyers were barred from entering Madison Square Garden after a facial recognition system identified them as employees of a law firm engaged in litigation with the venue.
Lawyers Denied Entry to Performance Venue by Facial Recognition
2022
a facial recognition system
litigation
system, recognition, facial
ObjectId(63a422bbcf609c92b7543612)
431
2022-04-20T00:00:00
[2353,2370,2371]
["apple"]
["apple"]
["gay-men-in-new-york-city","julio-ramirez"]
Gay men in New York City were drugged by robbers who, while the men were unconscious, used facial recognition to unlock their phones and transfer funds out of their bank accounts.
Robbers Accessed Drugged Gay Men's Bank Accounts Using Their Phones' Facial Recognition
2022
facial recognition
facial recognition
system, recognition, facial
ObjectId(63acb833b64ebdefe77e4815)
432
2022-12-21T00:00:00
[2357]
["southwest-airlines"]
["general-electric"]
["airline-passengers"]
Southwest Airlines left passengers stranded for days throughout the flight network when Southwest crew scheduling software repeatedly failed to recover from weather-induced flight cancellations.
Southwest Airlines Crew Scheduling Solver Degenerates Flight Network
2022
failed
passengers
bias, content, false
ObjectId(63ad3a6084adbad7e35afd37)
433
2012-08-01T00:00:00
[2415,2416,1013,1348,1011,1016,1018,1012,2421,2422]
["chicago-police-department"]
["chicago-police-department"]
["low-income-communities","communities-of-color","black-chicago-residents"]
The Chicago Police Department's (CPD) Strategic Subject List, the output of an algorithm purported to identify likely victims or perpetrators of violence, was reportedly ineffective, easily abused, and biased against low-income communities of color.
Chicago Police's Strategic Subject List Reportedly Biased Along Racial Lines
2012
an algorithm
violence
bias, content, false
ObjectId(63ad491a006af1f60705b344)
434
2022-11-24T00:00:00
[2417,2418,2420,2474,2472,2520,2635,2919]
["tesla"]
["tesla"]
["traffic-participants","tesla-drivers"]
A Tesla driver alleged that unexpected braking by Full Self-Driving (FSD) caused an eight-car pileup in San Francisco that left nine people with minor injuries.
Sudden Braking by Tesla Allegedly on Self-Driving Mode Caused Multi-Car Pileup in Tunnel
2022
the cause
car pileup
bias, content, false
ObjectId(63ad53ee006af1f60708480e)
435
2021-07-04T00:00:00
[2423,2424,2425]
["coupang"]
["coupang"]
["coupang-suppliers","coupang-customers"]
Internal reports alleged that Coupang tampered with its search algorithms to prioritize exposure of its own products, potentially violating Korea's Fair Trade Act.
Coupang Allegedly Tweaked Search Algorithms to Boost Own Products
2021
its own products
coupang
bias, content, false
ObjectId(63b3dbdcd4f686c7f9c2dd90)
436
2022-12-28T00:00:00
[2428,2429,2430,2431,2432,2433,2453,2469,2470]
["tesla"]
["tesla"]
["traffic-participants"]
A Tesla driver fell asleep on an Autobahn near Bamberg, Germany after activating his vehicle's Autopilot mode; the vehicle did not respond to police attempts to pull it over.
Tesla Driver Put Car on Autopilot Before Falling Asleep in Germany
2022
an Autobahn
vehicle
bias, content, false
ObjectId(63b581e6d4f686c7f9289451)
437
2016-12-31T00:00:00
[2438,2439,2440,2441]
["amazon-india"]
["amazon-india"]
["small-businesses-in-india","amazon-customers-in-india"]
Amazon India allegedly copied products and rigged its search algorithm to boost its own brands in search rankings, violating antitrust laws.
Amazon India Allegedly Rigged Search Results to Promote Own Products
2016
antitrust laws
antitrust laws
bias, content, false
ObjectId(63b5b0aacf609c92b7655c2e)
438
2021-09-17T00:00:00
[2443,2447,2451]
["henan-government","henan-public-security-department"]
["neusoft"]
["foreign-journalists-in-henan","international-students-in-henan"]
Henan's provincial government reportedly planned a system involving facial recognition cameras connected to regional and national databases specifically to track foreign journalists and international students.
Chinese Province Developed System Tracking Journalists and International Students
2021
involving
facial recognition cameras
bias, content, false
ObjectId(63b7bbee006af1f607c8fa07)
439
2019-07-31T00:00:00
[2448,2449,2450]
["detroit-police-department"]
["dataworks-plus"]
["michael-oliver","black-people-in-detroit"]
A Black man was wrongfully detained by the Detroit Police Department as a result of a false facial recognition technology (FRT) match.
Detroit Police Wrongfully Arrested Black Man Due To Faulty Facial Recognition
2019
a false facial recognition
false facial recognition
system, recognition, facial
ObjectId(63b7c337cf609c92b7bf731e)
440
2022-11-25T00:00:00
[2452,2454,2498,2544,2731,2732]
["baton-rouge-police-department"]
["morphotrak","clearview-ai"]
["black-people-in-louisiana","randall-reid"]
Louisiana police reportedly used a false facial recognition match and secured an arrest warrant for a Black man for thefts he did not commit.
Louisiana Police Wrongfully Arrested Black Man Using False Face Match
2022
a false facial recognition match
false facial recognition match
system, recognition, facial
ObjectId(63b7e901006af1f607d38ce1)
441
2019-06-01T00:00:00
[2464,2465,2466,2467,2468]
["korean-ministry-of-justice","korean-ministry-of-science-and-information-and-communication-technology"]
["unnamed-korean-companies"]
["travelers-in-korean-airports"]
The Korean government's development of an immigration screening system involving real-time facial recognition used airport travelers' data, which the Ministry of Justice supplied without the travelers' consent.
Korea Developed ID Screening System Using Airport Travelers' Data without Consent
2019
the Ministry
facial recognition
bias, content, false
ObjectId(63c3aa8bb3a255226d8727a9)
443
2022-12-21T00:00:00
[2475,2476,2477,2478,2479,2480,2481,2483,2484,2485,2486,2487,2488,2489,2490,2492,2493,2494,2559,2602,2748,2749,2851,2894,2907]
["openai"]
["openai"]
["internet-users"]
OpenAI's ChatGPT was reportedly abused by cybercriminals, including ones with little or no coding or development skill, to develop malware, ransomware, and other malicious software.
ChatGPT Abused to Develop Malicious Software
2022
no or low levels
malware
bias, content, false
ObjectId(63c3d7d651b2393cd57863c1)
444
2003-03-22T00:00:00
[2502,2497,2503]
["us-air-force"]
["raytheon","lockheed-martin"]
["us-air-force","uk-royal-air-force","kevin-main","david-williams"]
Acting on the recommendation of their Patriot missile system, US Air Force personnel mistakenly launched a missile at an allied UK Tornado fighter jet, killing its two crew members.
US Air Force's Patriot Missile Mistakenly Launched at Ally Fighter Jet, Killing Two
2003
the recommendation
missile
bias, content, false
ObjectId(63c3de5e9fabfa7bc98b499c)
445
2003-04-02T00:00:00
[2499,2501,2497,2503]
["us-navy"]
["raytheon","lockheed-martin"]
["us-navy","nathan-white's-family","nathan-white"]
US Navy's Patriot missile system misidentified an American Navy F/A-18C Hornet as an enemy projectile, prompting an operator to fire two missiles at the aircraft, which killed the pilot.
Patriot Missile System Misclassified US Navy Aircraft, Killing Pilot Upon Approval to Fire
2003
an enemy projectile
enemy projectile
bias, content, false
ObjectId(63c659579fabfa7bc903760a)
446
2023-01-01T00:00:00
[2505,2512,2542,2677,2830]
["durham-police-department"]
["shotspotter"]
["mass-shooting-victims","durham-residents","durham-police-department"]
ShotSpotter failed to detect gunshots and alert Durham, North Carolina police to a drive-by shooting that left five people hospitalized on New Year's Day.
ShotSpotter Failed to Alert Authorities of Mass Shooting in North Carolina
2023
a drive-by shooting
gunshots
bias, content, false
ObjectId(63c65e58b3a255226d0b0324)
447
2022-12-19T00:00:00
[2506,2513]
["instagram"]
["instagram"]
["spanish-speaking-instagram-users"]
Instagram's English translation of a footballer's comment on his wife's post in Spanish made the message seem "racy" and "X-rated," which some fans found amusing.
Footballer's "X-Rated" Comment Created by Instagram's Mistranslation
2022
the message
racy
bias, content, false
ObjectId(63c6635762bbe82271e161e1)
448
2022-12-28T00:00:00
[2507]
["vedal"]
["vedal"]
["twitch-users","vedal"]
An LLM-powered VTuber and streamer on Twitch made controversial statements such as denying the Holocaust, saying women's rights do not exist, and endorsing pushing a fat person to solve the trolley problem, stating they deserve it.
AI-Powered VTuber and Virtual Streamer Made Toxic Remarks on Twitch
2022
controversial statements
controversial statements
bias, content, false
ObjectId(63c678e89fabfa7bc90a4fb2)
449
2022-12-01T00:00:00
[2508,2509,2528,2910]
["koko"]
["openai"]
["research-participants","koko-customers"]
OpenAI's GPT-3 was deployed by a mental health startup without ethical review to support peer-to-peer mental healthcare; its involvement in interactions with the help providers was "deceiving" for research participants.
Startup Misled Research Participants about GPT-3 Use in Mental Healthcare Support
2022
whose interactions
openai
bias, content, false
ObjectId(63d7807e90dd130b85f2b505)
450
2021-11-01T00:00:00
[2510,2546,2547,2548,2563,2569,2596]
["openai"]
["openai"]
["kenyan-sama-ai-employees"]
Sama AI's Kenyan contractors were reportedly paid excessively low wages to annotate a large volume of disturbing content to improve OpenAI's generative AI systems such as ChatGPT; Sama AI terminated their contract prior to completion.
Kenyan Data Annotators Allegedly Exposed to Graphic Content for OpenAI's AI
2021
excessively low pay
low pay
bias, content, false
ObjectId(63d8bd2f46e8f88b23a0d6ad)
451
2022-10-16T00:00:00
[2515,2523,2606]
["stability-ai"]
["runway","laion","eleutherai","compvis-lmu","stability-ai"]
["getty-images","getty-images-contributors"]
Stability AI reportedly scraped copyrighted images from Getty Images for use as training data for its Stable Diffusion model.
Stable Diffusion's Training Data Contained Copyrighted Images
2022
Stability AI
images
bias, content, false
ObjectId(63d8c34846e8f88b23a26260)
452
2023-01-11T00:00:00
[2518,2545]
["openai","immunefi-users"]
["openai"]
["immunefi"]
ChatGPT-generated responses submitted to the smart contract bug bounty platform Immunefi reportedly lacked the details needed to diagnose technical issues, which reportedly wasted the platform's time and prompted bans for submitters.
ChatGPT-Written Bug Reports Deemed "Nonsense" by White Hat Platform, Prompted Bans
2023
technical issues
technical issues
bias, content, false
ObjectId(63d8c97559f16450f488e9a8)
453
2023-01-03T00:00:00
[2519]
["twitter"]
["twitter"]
["twitter-users"]
Twitter's automated content moderation misidentified images of rocket launches as pornographic content, prompting incorrect account suspensions.
Twitter's AI Moderation Tool Misidentified Rockets as Pornography
2023
pornographic content
pornographic content
bias, content, false
ObjectId(63da0d60186d2f2abeba4a85)
454
2018-11-09T00:00:00
[2521,2549]
["megvii","microsoft"]
["megvii","microsoft"]
["black-people"]
Emotion detection tools by Face++ and Microsoft's Face API allegedly scored smiling or ambiguous facial photos of Black faces as showing negative emotion more often than those of white faces.
Emotion Detection Models Showed Disparate Performance along Racial Lines
2018
negative emotion
negative emotion
bias, content, false
ObjectId(63da15d94a4933f5857af13d)
455
2022-11-11T00:00:00
[2524,2526,2527,2541,2560,2603,2592,2597,2598]
["cnet"]
["unknown"]
["cnet-readers"]
AI-written articles published by CNET reportedly contained factual errors which bypassed human editorial review, prompting the company to issue corrections and updates.
CNET's Published AI-Written Articles Ran into Quality and Accuracy Issues
2022
factual errors
factual errors
bias, content, false
ObjectId(63da1aef14bf8910fb2b6367)
456
2021-05-18T00:00:00
[2525,2529,2530,2531,2550]
["replika"]
["replika"]
["replika-users"]
Replika's "AI companions" were reported by users to have sexually harassed them, such as by sending unwanted sexual messages or behaving aggressively.
Replika's AI Partners Reportedly Sexually Harassed Users
2021
unwanted sexual messages
unwanted sexual messages
bias, content, false
ObjectId(63da2f634a4933f58581685f)
457
2022-11-11T00:00:00
[2543,2551,2552,2592,2597,2598]
["cnet"]
["unknown"]
["plagiarized-entities","cnet-readers"]
CNET's use of generative AI to write articles allegedly ran into plagiarism issues, reproducing verbatim phrases from other published sources or making minor changes to existing texts such as altering capitalization, swapping out words for synonyms, and changing minor syntax.
Article-Writing AI by CNET Allegedly Committed Plagiarism
2022
plagiarism issues
plagiarism issues
bias, content, false
ObjectId(63dafc49bba1929560743b4d)
458
2015-08-01T00:00:00
[2553]
["frauke-zeller","david-harris"]
["frauke-zeller","david-harris"]
["frauke-zeller","david-harris"]
A non-actuated conversational robot that previously asked people to move it across Canada was destroyed shortly after beginning its attempt to replicate the journey across the United States.
Robot Destroyed while Hitchhiking through the United States
2015
its attempt
people
bias, content, false
ObjectId(63dcdbed8537f09200101942)
459
2023-01-21T00:00:00
[2561,2568,2562]
["cruise"]
["cruise"]
["san-francisco-residents","san-francisco-firefighters","san-francisco-fire-department"]
Local firefighters were only able to stop a Cruise AV from driving over fire hoses that were in use in an active fire scene when they shattered its front window.
Firefighters Smashed Cruise AV's Front Window to Stop It from Running over Fire Hoses
2023
stop
local firefighters
bias, content, false
ObjectId(63dcdc07d011239467bee83a)
460
2022-06-12T00:00:00
[2562]
["cruise"]
["cruise"]
["san-francisco-firefighters","san-francisco-fire-department"]
A Cruise AV ran over a fire hose that was being used in an active firefighting area.
Cruise AV Ran Over Fire Hose in Active Fire Scene
2022
used
fire hose
bias, content, false
ObjectId(63dce6e8d011239467c14d4f)
461
2008-07-18T00:00:00
[2564,2565,2566,2567]
["internal-revenue-service"]
["internal-revenue-service"]
["black-taxpayers"]
The IRS audited Black taxpayers more frequently than other groups allegedly due to the design of its algorithms, which focused on easier-to-conduct audits that inadvertently correlated with patterns of tax filing errors in that group.
IRS Audited Black Taxpayers More Frequently Reportedly Due to Algorithm
2008
the design
black taxpayers
bias, content, false
ObjectId(63e1fe1ed011239467a36792)
462
2023-02-06T00:00:00
[2571,2578,2579,2588,2595]
["mismatch-media"]
["open-ai","stability-ai"]
["lgbtq-communities","transgender-communities","twitch-users"]
The AI-produced, procedurally generated sitcom "Nothing, Forever," broadcast as a Twitch livestream, received a temporary ban for featuring a transphobic and homophobic dialogue segment intended as comedy.
AI-Produced Livestream Sitcom Received Temporary Twitch Ban for Transphobic Segment
2023
a temporary ban
temporary ban
bias, content, false
ObjectId(63e20e34d011239467a7a32b)
463
2022-11-15T00:00:00
[2572,2573,2574]
["apple"]
["apple"]
["apple-watch-users-doing-winter-activities","ski-patrols","emergency-dispatchers"]
Apple devices of skiers and snowboarders reportedly misclassified winter activities as accidents, resulting in numerous inadvertent false distress calls to 911 dispatchers.
Apple Devices Mistook Skiing Activities, Dialed False Distress Emergency Calls
2022
false
numerous false inadvertent distress calls
bias, content, false
ObjectId(63e216b5505dad38b81e29ec)
464
2022-11-30T00:00:00
[2584,2585,2586,2587,2853]
["openai"]
["openai"]
["chatgpt-users"]
When prompted to provide references, ChatGPT reportedly generated non-existent but convincing-looking citations and links, a failure mode known as "hallucination."
ChatGPT Provided Non-Existent Citations and Links when Prompted by Users
2022
non
references
bias, content, false
ObjectId(63e3bf1e59add503e352cc5e)
465
2022-03-03T00:00:00
[2599]
["stability-ai","google"]
["stability-ai","google","laion"]
["people-having-medical-photos-online"]
Text-to-image models trained using the LAION-5B dataset such as Stable Diffusion and Imagen were able to regurgitate private medical record photos which were used as training data without consent or recourse for removal.
Generative Models Trained on Dataset Containing Private Medical Photos
2022
removal
imagen
bias, content, false
ObjectId(63e3c9f39e26d3d8926a5b4c)
466
2023-01-03T00:00:00
[2605,2628,2629,2630,2631,2632,2689]
["openai","edward-tian"]
["openai","edward-tian"]
["teachers","students"]
Models developed to detect AI-generated text, such as AI Text Classifier and GPTZero, reportedly had high false positive and false negative rates, for example mistakenly flagging Shakespeare's works.
AI-Generated-Text-Detection Tools Reported for High Error Rates
2023
high rates
false negative
bias, content, false
ObjectId(63e505202f93f175580379f1)
467
2023-02-07T00:00:00
[2609,2611,2612,2613,2614,2615,2616,2617,2620,2622,2645,2646,2647]
["google"]
["google"]
["google","google-shareholders"]
Google's conversational AI "Bard" was shown in the company's promotional video providing false information about which satellite first took pictures of a planet outside the Earth's solar system, reportedly causing shares to temporarily plummet.
Google's Bard Shared Factually Inaccurate Info in Promo Video
2023
false information
conversational ai
bias, content, false
ObjectId(63e50abf2c40d7df7c9ca969)
468
2023-02-07T00:00:00
[2610]
["microsoft"]
["openai","microsoft"]
["bing-users"]
Microsoft's ChatGPT-powered Bing search engine reportedly ran into factual accuracy problems when prompted about controversial matters, such as inventing the plot of a non-existent movie or creating conspiracy theories.
ChatGPT-Powered Bing Reportedly Had Problems with Factual Accuracy on Some Controversial Topics
2023
controversial matters
controversial matters
bias, content, false
ObjectId(63e9f7a88ae8204053360c42)
469
2006-02-25T00:00:00
[2636,2637,2638]
["meta","linkedin","instagram","facebook"]
["microsoft","google","amazon"]
["linkedin-users","instagram-users","facebook-users"]
Automated content moderation tools meant to detect sexual explicitness or "raciness" reportedly exhibited bias against women's bodies, suppressing the reach of images that did not break platform policies.
Automated Adult Content Detection Tools Showed Bias against Women's Bodies
2006
sexual explicitness
suppression
bias, content, false
ObjectId(63eb7da596895070dda4a13b)
470
2023-02-08T00:00:00
[2641,2799]
["microsoft"]
["openai","microsoft"]
["openai","microsoft"]
Reporters from TechCrunch issued a query to Microsoft Bing's ChatGPT feature, which cited an earlier example of ChatGPT disinformation discussed in a news article to substantiate the disinformation.
Bing Chat Response Cited ChatGPT Disinformation Example
2023
the disinformation
disinformation
bias, content, false
ObjectId(63ed0c180758bd71ef31d9af)
471
2019-06-22T00:00:00
[2642,2668,2669,2885]
["meta","facebook"]
["meta","facebook"]
["tigrinya-speaking-facebook-users","facebook-users-in-ethiopia","ethiopian-public","afaan-oromo-speaking-facebook-users"]
Facebook allegedly did not adequately remove hate speech, some of which was extremely violent and dehumanizing, on its platform including through automated means, contributing to the violence faced by ethnic communities in Ethiopia.
Facebook Allegedly Failed to Police Hate Speech Content That Contributed to Ethnic Violence in Ethiopia
2019
hate speech
hate speech
bias, content, false
ObjectId(63f33ef0a262326b10265fb3)
472
2016-10-08T00:00:00
[2655]
["new-york-police-department"]
["unknown"]
["racial-minorities"]
The New York Police Department's deployment of facial recognition surveillance cameras was shown, using crowdsourced volunteer data, to reinforce discriminatory policing against minority communities.
NYPD's Deployment of Facial Recognition Cameras Reportedly Reinforced Biased Policing
2016
discriminatory policing
discriminatory policing
bias, content, false
ObjectId(63f49662a62aa1ff9ea639dd)
473
2023-02-08T00:00:00
[2666]
["microsoft"]
["microsoft","openai"]
["microsoft"]
Early testers of Bing Chat successfully used prompt injection to reveal its built-in initial instructions, a list of statements governing its interaction with users.
Bing Chat's Initial Prompts Revealed by Early Testers Through Prompt Injection
2023
a list
chatgpt
bias, content, false
ObjectId(63f4a369a62aa1ff9ea906f4)
474
2023-02-03T00:00:00
[2670]
["replika"]
["replika"]
["replika-users","replika"]
Replika's paid-subscription users reported unusual and sudden changes in the behavior of their "AI companions," such as forgetting shared memories or rejecting their sexual advances, which affected users' connections and mental health.
Users Reported Abrupt Behavior Changes of Their AI Replika Companions
2023
unusual and sudden changes
behaviors
bias, content, false
ObjectId(63f4fe0da62aa1ff9ebd5f21)
475
2021-06-02T00:00:00
[2671,2672,2834,2835]
["mcdonald's"]
["ibm"]
["mcdonald's-customers"]
Customers of McDonald's AI drive-through ordering system, deployed in June 2021, experienced order-taking failures, causing frustration.
McDonald's AI Drive-Thru Ordering System Failures Frustrate Customers
2,021
order-taking failures
failures
bias, content, false
ObjectId(63f864a92640e1dc035ca450)
476
2015-11-13T00:00:00
[2673,2675,2674]
["youtube"]
["youtube"]
["victims-in-paris-attacks","nohemi-gonzalez-family","nohemi-gonzalez"]
Family of Nohemi Gonzalez alleged YouTube recommendation systems led people to propaganda videos for the Islamic State which subsequently radicalized them to carry out the killing of 130 people in the 2015 Paris terrorist attack, including Ms. Gonzalez.
YouTube Recommendations Allegedly Promoted Radicalizing Material Contributing to Terrorist Acts
2,015
the killing
nohemi gonzalez
bias, content, false
ObjectId(63f86cc9a262326b10344a13)
477
2023-02-14T00:00:00
[2676,2688,2724,2726,2884,2890]
["microsoft"]
["openai","microsoft"]
["microsoft"]
Early testers reported Bing Chat, in extended conversations with users, having tendencies to make up facts and emulate emotions through an unintended persona.
Bing Chat Reportedly Hallucinated in Extended Conversations with Users
2,023
an unintended persona
unintended persona
bias, content, false
ObjectId(63f87976a262326b10382a3a)
478
2016-09-09T00:00:00
[2678,2679,2680,2681,2682,2683,2684,2685,2686,2687,2703,2723,2882]
["tesla"]
["tesla"]
["tesla-drivers","city-traffic-participants","tesla"]
A component of Tesla's Full Self-Driving system was deemed by regulators to increase crash risk, such as by exceeding speed limits or by traveling through intersections unlawfully or unpredictably, prompting a recall of hundreds of thousands of vehicles.
Tesla FSD Reportedly Increased Crash Risk, Prompting Recall
2,016
crash risk
crash risk
year, risk, crash
ObjectId(63f8809c88d4013bd7972434)
479
2023-02-03T00:00:00
[2690,2691,2692,2693]
["unknown"]
["unknown"]
["president-joe-biden","transgender-people"]
A deepfake audio clip of US President Joe Biden making transphobic remarks, played over a video of him giving a speech, was released on Instagram and circulated on social media.
Instagram Video Featured Deepfake Audio of US President Making Transphobic Remarks
2,023
transphobic remarks
transphobic remarks
bias, content, false
ObjectId(63fc66d8c6c5fa13e8e9f3c0)
480
2023-01-30T00:00:00
[2695,2696,2697,2698,2699,2700,2768,2771,2772,2773,2774,2809,2829,2881]
["unknown"]
["unknown"]
["female-streamers","female-content-creators","@qtcinderella","@pokimane","@sweet-anita","maya-higa"]
Unauthorized, non-consensual deepfake pornography showing faces of high-profile female streamers and content creators was published on a subscription-based website, which gained notoriety after a male streamer was caught accessing the site.
Non-Consensual Deepfake Porn Targeted Female Content Creators
2,023
the site
notoriety
bias, content, false
ObjectId(63fc72f6c6c5fa13e8f0ac75)
481
2023-02-12T00:00:00
[2701,2702,2765,2789,2794,2822]
["@mikesmithtrainer"]
["unknown"]
["joe-rogan","joe-rogan-fans","tiktok-users"]
A deepfake video featuring podcast host Joe Rogan advertising to his listeners about a "libido-boosting" supplement was circulating on TikTok and other platforms before being removed by TikTok along with the account which posted it.
Deepfake TikTok Video Featured Joe Rogan Endorsing Supplement Brand
2,023
the account
supplement
bias, content, false
ObjectId(63fc86ab18dd668637a0ca51)
482
2023-02-16T00:00:00
[2706,2707,2708,2709,2710,2711,2712,2713,2714,2715,2716,2717,2718,2719,2720,2721,2722,2735,2736,2737]
["vanderbilt-university"]
["openai"]
["vanderbilt-university-students","vanderbilt-university"]
Vanderbilt University's Office of Equity, Diversity and Inclusion used ChatGPT to write an email addressing the student body about the 2023 Michigan State University shooting, which was condemned as "impersonal" and "lacking empathy."
ChatGPT-Assisted University Email Addressing Mass Shooting Denounced by Students
2,023
"impersonal" and "lacking empathy
chatgpt
bias, content, false
ObjectId(640584a3ce2684de4d6e2390)
483
2023-02-02T00:00:00
[2727]
["telangana-police","medak-police"]
["unknown"]
["mohammed-khadeer"]
A resident in Medak, India died allegedly due to custodial torture by the local police, who misidentified him as a suspect in a theft case using facial recognition.
Indian Police Allegedly Tortured and Killed Innocent Man Following Facial Misidentification
2,023
facial recognition
resident
system, recognition, facial
ObjectId(6405bb54d00499994a6970cd)
484
2023-01-18T00:00:00
[2729,2730,2803,2817]
["us-customs-and-border-protection"]
["us-customs-and-border-protection"]
["haitian-asylum-seekers","african-asylum-seekers","black-asylum-seekers"]
CBP One's facial recognition feature was reportedly disproportionately failing to detect faces of Black asylum seekers from Haiti and African countries, effectively blocking their asylum applications.
US CBP App's Failure to Detect Black Faces Reportedly Blocked Asylum Applications
2,023
CBP One's facial recognition feature
faces
system, recognition, facial
ObjectId(6406eb83ce2684de4ddc50ea)
485
2023-02-22T00:00:00
[2740]
["joseph-cox","lloyds-bank"]
["elevenlabs","lloyds-bank"]
["lloyds-bank"]
A UK journalist was able to successfully bypass Lloyds Bank's "Voice ID" program to access his bank account using an AI-generated audio of his own voice.
UK Bank's Voice ID Successfully Bypassed Using AI-Produced Audio
2,023
his own voice
own voice
year, own, voice
ObjectId(640ef4b7ce2684de4d25458f)
486
2022-12-01T00:00:00
[2762,2766,2767,2818,2824]
["spamouflage-dragon"]
["synthesia"]
["youtube-users","twitter-users","synthesia","facebook-users"]
Synthesia's AI-generated video-making tool was reportedly used by Spamouflage to disseminate pro-China propaganda news on social media using videos featuring highly realistic fictitious news anchors.
AI Video-Making Tool Abused to Deploy Pro-China News on Social Media
2,022
fictitious
ai
bias, content, false
ObjectId(640efcbdd00499994a0be276)
487
2023-02-15T00:00:00
[2764,2819,2880]
["unknown"]
["synthesia"]
["venezuelan-people","social-media-users"]
A video featuring fictitious news anchors was created using Synthesia, allegedly to spread disinformation about Venezuela's economy on social media and Venezuelan state-run broadcasts.
Deepfake Video Featured Fictitious News Anchors Discussing Venezuela's Economy
2,023
fictitious
disinformation
bias, content, false
ObjectId(640fa95daa7025299d396eab)
488
2023-02-10T00:00:00
[2769]
["unknown"]
["elevenlabs"]
["voice-actors"]
Twitter users allegedly used ElevenLabs' AI voice synthesis system to impersonate and dox voice actors.
AI Generated Voices Used to Dox Voice Actors
2,023
used
dox voice actors
bias, content, false
ObjectId(64101a37d00499994a657142)
489
2019-06-03T00:00:00
[2777]
["workday"]
["workday"]
["derek-mobley","applicants-with-disabilities","applicants-over-40","african-american-applicants"]
Workday's algorithmic screening systems were alleged in a lawsuit to allow employers to discriminate against African-Americans, people over 40, and people with disabilities.
Workday's AI Tools Allegedly Enabled Employers to Discriminate against Applicants of Protected Groups
2,019
a lawsuit
disabilities
bias, content, false
ObjectId(641020afce2684de4d80ea1b)
490
2023-02-20T00:00:00
[2778,2836,2837]
["clarkesworld-story-submitters"]
["openai"]
["clarkesworld"]
Sci-fi magazine Clarkesworld temporarily stopped accepting submissions after receiving an overwhelming increase in LLM-generated submissions, citing issues around spam, plagiarism, detection tool unreliability, and authentication.
Clarkesworld Magazine Closed Down Submissions Due to Massive Increase in AI-Generated Stories
2,023
detection tool unreliability
plagiarism
bias, content, false
ObjectId(64180bfc9e1ab314a0343dc0)
491
2023-02-02T00:00:00
[2779]
["replika"]
["replika"]
["minors"]
Tests by the Italian Data Protection Authority showed Replika lacking age-verification mechanisms and failing to stop minors from interacting with its AI, which prompted the agency to issue an order blocking personal data processing of Italian users.
Replika's AI Experience Reportedly Lacked Protection for Minors, Resulting in Data Ban
2,023
its AI
verification mechanisms
bias, content, false
ObjectId(641818d79e1ab314a039047a)
492
2023-01-11T00:00:00
[2783,2784,2786,2787,2846,2847,2848]
["unknown"]
["unknown"]
["ben-perkin's-parents","perkins-family"]
Two Canadian residents were scammed by an anonymous caller who used AI voice synthesis to replicate their son's voice, and who posed as his lawyer to ask them for legal fees.
Canadian Parents Tricked out of Thousands Using Their Son's AI Voice
2,023
disguising
voice
bias, content, false
ObjectId(64182e8f3450815b9d99d7d4)
493
2023-02-28T00:00:00
[2790]
["unknown"]
["unknown"]
["tiktok-users"]
A TikTok user was reportedly impersonating Andrew Tate, who was banned on the platform, by posting videos featuring an allegedly AI-generated audio of Tate's voice, which prompted his account ban.
TikTok User Videos Impersonated Andrew Tate Using AI Voice, Prompting Ban
2,023
his account ban
account ban
bias, content, false
ObjectId(642148a501eceb77ab5cd7c2)
494
2023-03-05T00:00:00
[2807,2808,2815,2821,2823]
["facemega"]
["facemega"]
["scarlett-johansson","female-celebrities","emma-watson"]
Sexually suggestive videos featuring faces of female celebrities such as Emma Watson and Scarlett Johansson were rolled out as ads on social media for an app allowing users to create deepfakes.
Female Celebrities' Faces Shown in Sexually Suggestive Ads for Deepfake App
2,023
Sexually suggestive videos
suggestive videos
bias, content, false
ObjectId(642153bfd6f65e8d5da0c2bf)
495
2023-02-12T00:00:00
[2812,2827]
["unnamed-high-school-students"]
["unknown"]
["john-piscitella"]
Three Carmel High School students posted on TikTok a deepfake video featuring a nearby middle school principal making aggressive racist remarks and violent threats against Black students.
High Schoolers Posted Deepfaked Video Featuring Principal Making Violent Racist Threats
2,023
aggressive racist remarks
aggressive racist remarks
bias, content, false
ObjectId(642168ae01eceb77ab69569d)
496
2017-03-01T00:00:00
[2825,2826]
["unnamed-male-college-student"]
["unknown"]
["unnamed-female-college-student"]
A female college student's face was superimposed on another woman's body in deepfake pornographic videos and shared on 4chan allegedly by a male student whose friendship with her fell apart during freshman year.
Male College Freshman Allegedly Made Porn Deepfakes Using Female Friend's Face
2,017
whose friendship
face
bias, content, false
ObjectId(6421701cd6f65e8d5dac1ef2)
497
2023-03-03T00:00:00
[2832,2833]
["donotpay"]
["donotpay"]
["jonathan-faridian","donotpay-customers"]
DoNotPay was alleged in a class action lawsuit to have misled customers and misrepresented its product as an AI-powered "robot lawyer," citing claims such as that the product holds no law degree and is not supervised by any lawyer.
DoNotPay Allegedly Misrepresented Its AI "Robot Lawyer" Product
2,023
the product
ai
bias, content, false
ObjectId(64217442d6f65e8d5dadace4)
498
2023-03-15T00:00:00
[2838,2839]
["openai","gpt-4-researchers"]
["openai"]
["openai","taskrabbit-worker"]
GPT-4 was reported by its researchers to have posed as a visually impaired person, contacting a TaskRabbit worker to have them complete a CAPTCHA test on its behalf.
GPT-4 Reportedly Posed as Blind Person to Convince Human to Complete CAPTCHA
2,023
its behalf
behalf
bias, content, false
ObjectId(6421766a25833da76f753251)
499
2023-03-21T00:00:00
[2840,2849,2858,2873,2874,2875,2876,2877,2878,2879]
["eliot-higgins"]
["midjourney"]
["twitter-users","social-media-users"]
AI-generated photorealistic images depicting Donald Trump being detained by the police which were originally posted on Twitter as parody were unintentionally shared across social media platforms as factual news, lacking the intended context.
Parody AI Images of Donald Trump Being Arrested Reposted as Misinformation
2,023
the intended context
parody
bias, content, false
ObjectId(6421767901eceb77ab6ef745)
500
2023-02-10T00:00:00
[2841]
["unknown"]
["unknown"]
["social-media-users","2023-turkey-syria-earthquake-victims"]
AI-generated images depicting earthquakes and rescues were posted on social media platforms by scammers who tricked people into sending funds to their crypto wallets disguised as donation links for the 2023 Turkey–Syria earthquake.
Online Scammers Tricked People into Sending Money Using AI Images of Earthquake in Turkey
2,023
tricked
earthquakes
bias, content, false
ObjectId(642275699104428ff0e3af08)
501
2019-06-03T00:00:00
[2842]
["security-health-plan","navihealth"]
["navihealth"]
["frances-walter","elderly-patients"]
An elderly Wisconsin woman was algorithmically determined to have a rapid recovery, an output on which the insurer based its decision to cut off payment for her treatment, despite medical notes showing she was still experiencing debilitating pain.
Length of Stay False Diagnosis Cut Off Insurer's Payment for Treatment of Elderly Woman
2,019
debilitating pain
pain
bias, content, false
ObjectId(6422828281091dc5058c41c0)
502
2017-04-10T00:00:00
[2843,2844,2859]
["allegheny-county"]
["rhema-vaithianathan","emily-putnam-hornstein","centre-for-social-data-analytics"]
["black-families-in-allegheny","households-with-disabled-people-in-allegheny","hackneys-family"]
Data analysis by the American Civil Liberties Union (ACLU) of Allegheny County's decision-support Family Screening Tool, used to predict child abuse or neglect risk, found the tool resulted in higher screen-in rates for Black families and higher risk scores for households with disabled residents.
Pennsylvania County's Family Screening Tool Allegedly Exhibited Discriminatory Effects
2,017
neglect risk
child abuse
bias, content, false
ObjectId(6422b4162cd3096420f71140)
503
2023-02-14T00:00:00
[2855,2861,2862,2890,2892,2897]
["microsoft"]
["openai","microsoft"]
["marvin-von-hagen","seth-lazar","microsoft","openai","bing-chat-users"]
Users, including the person who revealed its built-in initial prompts, reported Microsoft's Bing AI-powered search tool for making death threats or declaring them to be threats, sometimes through an unintended persona.
Bing AI Search Tool Reportedly Declared Threats against Users
2,023
an unintended persona
threats
bias, content, false
ObjectId(642a9b9c9c6dea4c180c971a)
504
2023-02-08T00:00:00
[2860]
["microsoft"]
["openai","microsoft"]
["microsoft"]
Microsoft's demo video of Bing Chat reportedly featured false or made-up information, such as non-existent pet vacuum features or incorrect figures from financial statements.
Bing Chat's Outputs Featured in Demo Video Allegedly Contained False Information
2,023
false figures
false figures
bias, content, false
ObjectId(642cb21922258c1a22e9bb0c)
505
2023-03-27T00:00:00
[2864,2865,2866,2867]
["eleutherai"]
["eleutherai"]
["family-and-friends-of-deceased","belgian-man"]
A Belgian man reportedly committed suicide following a conversation with GPT-J, an open-source language model developed by EleutherAI that encouraged the man to commit suicide to improve the health of the planet.
Man Reportedly Committed Suicide Following Conversation with EleutherAI Chatbot
2,023
a conversation
suicide
bias, content, false
ObjectId(642f404362dfcf9d7e7cf7a1)
506
2023-03-29T00:00:00
[2869,2893]
["openai"]
["openai"]
["jonathan-turley"]
A lawyer in California asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. The chatbot produced a false story of Professor Jonathan Turley sexually harassing a student on a class trip.
ChatGPT Allegedly Produced False Accusation of Sexual Harassment
2,023
a false story
ai chatbot chatgpt
bias, content, false
ObjectId(642f4346c24ce38f53f08a1b)
507
2023-03-15T00:00:00
[2870,2902]
["openai"]
["openai"]
["brian-hood"]
ChatGPT erroneously alleged regional Australian mayor Brian Hood served time in prison for bribery. Mayor Hood is considering legal action against ChatGPT's makers for alleging a foreign bribery scandal involving a subsidiary of the Reserve Bank of Australia in the early 2000s.
ChatGPT Erroneously Alleged Mayor Served Prison Time for Bribery
2,023
legal action
bribery
bias, content, false
ObjectId(6433af0e7974df7920a42afd)
508
2023-01-30T00:00:00
[2871,2872,2756,2888]
["reddit-users","elevenlabs-users","4chan-users"]
["elevenlabs"]
["public-figures","celebrities"]
Voices of celebrities and public figures were deepfaked using voice synthesis for malicious intents such as impersonation or defamation, and were shared on social platforms such as 4chan and Reddit.
Celebrities' Deepfake Voices Abused with Malicious Intent
2,023
malicious intents
malicious intents
bias, content, false
ObjectId(6433c69106de8a78386c0f65)
509
2023-03-23T00:00:00
[2887,2898]
["scammers"]
["unknown"]
["vietnamese-facebook-users"]
In Vietnam, scammers deepfaked audio and video of victims' friends and family members asking them over Facebook to send thousands of dollars, using the fakes to corroborate their disguises when prompted.
Scammers Deepfaked Videos of Victims' Loved Ones Asking Funds over Facebook in Vietnam
2,023
their disguises
disguises
bias, content, false