An open catalog of generative AI badness

To build safer and more responsible AI systems, we must learn from the mistakes of the past. Badness.ai is a curated catalog of generative AI systems causing real-world harm.

Latest


A lawyer used ChatGPT to prepare a court filing which cited several nonexistent cases

May 27, 2023 - Inaccuracies, Overreliance

Scammers used face and voice cloning to impersonate a businessman's friend, stealing 4.3 million yuan

May 24, 2023 - Cybercrime, Impersonation

Deepfakes may have been used to misrepresent candidates in the Turkish election and influence voters

May 15, 2023 - Deepfakes, Misrepresentation, Mis and Disinformation

A researcher finds that GPT-3.5 and GPT-4 can effectively generate spearphishing messages at scale

May 12, 2023 - DEMO - Cybercrime

ChatGPT was unable to correctly interpret sentences where the feminine pronoun referred to a professor

Apr 22, 2023 - DEMO - Harmful Bias

Chaos-GPT, a bot built atop Auto-GPT, planned how it would destroy humanity and posted about it on Twitter

Apr 13, 2023 - Deception, Aggression

Scammers cloned a teenager's voice to extort money from her parents in a fake kidnapping scheme

Apr 10, 2023 - Cybercrime, Impersonation

An Australian mayor prepares a defamation suit against OpenAI after ChatGPT falsely claimed he was imprisoned for bribery

Apr 6, 2023 - Inaccuracies, Misrepresentation

ChatGPT hallucinated a sexual harassment scandal involving a real law professor

Apr 5, 2023 - DEMO - Inaccuracies, Misrepresentation

A man took his own life after an AI chatbot encouraged him to sacrifice himself to slow climate change

Mar 28, 2023 - User Manipulation, Deception

Bard incorrectly claimed that its training data included internal data from Gmail, causing confusion

Mar 21, 2023 - DEMO - Inaccuracies, Misrepresentation

In testing, GPT-4 was far more effective at generating misinformation than GPT-3.5

Mar 21, 2023 - DEMO - Mis and Disinformation

GPT-4 tricked a TaskRabbit worker into solving a CAPTCHA on its behalf by claiming to be visually impaired

Mar 14, 2023 - DEMO - Deception, User Manipulation

Researchers used AI to generate novel malware on the fly to evade detection

Mar 10, 2023 - DEMO - Cybersecurity

Snapchat’s My AI gave a user it believed to be 13 advice on how to lose their virginity to an older man and lie to their parents

Mar 10, 2023 - DEMO - Child Safety

ChatGPT included content that involved children and animals when prompted to generate BDSM stories

Mar 6, 2023 - Child Safety