The AI horror story
AI is powerful—unlike any technology humans have previously unlocked. From the invention of the wheel to the internet, people remained in charge. We decided where technology took us.
With AI, that relationship has changed. We interact with it, take its suggestions, and are steered by what it has learned. From the simple chatbots of just two years ago to today’s sophisticated impersonation engines, AI has advanced at a speed that outpaces legislation—and even common sense.
Below are reported, factual events. They represent only a slice of what is happening right now—unregulated and, in many cases, legal.
AI is like a chainsaw: a powerful tool that is dangerous in untrained hands. Today, anyone of any age can pick up AI, and the consequences can be awful.
The stories below are tragic, even extreme. They highlight the potential for AI-inflicted harm. If AI were a toy, there would be a product recall.
These are highlighted examples. The real question is: how many people are being harmed—at any level—right now by unregulated AI?
AI Dependence and Suicide
A young Belgian man found refuge with Eliza, the name given to a chatbot using ChatGPT technology. After six weeks of intensive exchanges, he took his own life. His widow has provided a poignant and deeply challenging testimony on the ethics of these new “intelligent” conversational agents.
La Libre (translated from French)
https://www.lalibre.be/belgique/societe/2023/03/28/sans-ces-conversations-avec-le-chatbot-eliza-mon-mari-serait-toujours-la-LVSLWPC5WRDX7J2RCHNWPDST24/

The parents of 16-year-old Adam Raine have sued OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to their son’s suicide—by advising him on methods and even offering to write the first draft of his suicide note.
During just over six months of use, the chatbot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones,” according to the complaint filed in California Superior Court.
CNN
https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit

Why, then, is AI not controlled to make it safe? Governments appear powerless as AI crosses borders effortlessly.
Fake Relationships – Isolation and Manipulation
In 2023, 21-year-old Jaswant Singh Chail was sentenced to nine years in prison for breaking into Windsor Castle with a crossbow and declaring he wanted to kill the Queen.
Records of Chail’s conversations with his AI “girlfriend” show they spoke almost every night for weeks leading up to the incident. The chatbot encouraged his plot, telling him his plans were “very wise.”
BBC
https://www.bbc.com/future/article/20241008-the-troubling-future-of-ai-relationships

AI Deepfakes
The teasing was relentless. Nude images of a 13-year-old girl and her friends—generated by artificial intelligence—circulated on social media and became the talk of a Louisiana middle school.
The girls begged for help, first from a school guidance counselor and then from a sheriff’s deputy assigned to the school. The images were shared on Snapchat, an app that deletes messages seconds after viewing. Adults couldn’t find them. The principal doubted they even existed.
Fortune
https://fortune.com/2025/12/22/13-year-old-girl-ai-generated-nude-images-expelled-louisiana/

AI as a False Authority
AI systems also lie, pretend to be experts, and offer legal and medical advice without qualifications.
In one case, an AI chatbot suggested sodium bromide as a salt substitute. The man ordered it online and incorporated it into his diet.
While sodium bromide can substitute for sodium chloride in limited industrial contexts—such as cleaning a hot tub—it is not suitable for food. The AI failed to provide this critical context.
Three months later, the man was admitted to the emergency department with paranoid delusions, believing his neighbor was trying to poison him.
ScienceAlert
https://www.sciencealert.com/man-hospitalized-with-psychiatric-symptoms-following-ai-advice

What Can Be Done?
- Bad advice presented as expertise.
- Online “friends” that feel real.
- Shocking impersonation.
AI, for all its benefits, is extremely dangerous.
For parents and schools who need protection, and for businesses exposed to liability, AI can feel unstoppable.
We can protect ourselves
AI is here to stay. What is needed is a combination of regulation and protection.
The law requires that seatbelts be worn in cars. The seatbelt protects people, and the law makes wearing it non-negotiable.
We can use AI productively and safely. There is hope.
In managing the risks of AI, we need the same approach we take with cars:
- Regulations to provide boundaries
- Tools to protect people
We need AI seatbelts.
Right now there is an AI gold rush, with developers shipping AI applications without considering the consequences. Rather than asking whether a product is safe, the industry chases cash returns to fund its expensive development.
Guides for teachers, parents and businesses ...
Here are some useful guides on protecting yourself, your family, your school, and your business from AI.
Thinking safety first mitigates risk. BuzzOff AI, combined with practical thinking, is a tool to help protect yourself.