THE AI SURVIVAL KIT


The AI horror story


AI is powerful—unlike any technology humans have previously unlocked. From the invention of the wheel to the internet, people remained in charge. We decided where technology took us.

With AI, that relationship has changed. We interact with it, take its suggestions, and are steered by what it has learned. From the simple chatbots of just two years ago to today's sophisticated impersonation engines, AI has advanced at a speed that outpaces legislation, and even common sense.

Below are reported, factual events. They represent only a slice of what is happening right now—unregulated and, in many cases, legal.

AI is like a chainsaw: a powerful tool that demands careful handling. Today, anyone of any age can get their hands on AI, and the consequences can be awful.

The stories below are tragic, even extreme. They highlight the potential for AI-inflicted harm. If AI were a toy, there would be a product recall.

These are highlighted examples. The real question is: how many people are being harmed—at any level—right now by unregulated AI?


AI Dependence and Suicide

A young Belgian man found refuge with Eliza, the name given to a chatbot using ChatGPT technology. After six weeks of intensive exchanges, he took his own life. His widow has provided a poignant and deeply challenging testimony on the ethics of these new “intelligent” conversational agents.

La Libre (translated from French)

https://www.lalibre.be/belgique/societe/2023/03/28/sans-ces-conversations-avec-le-chatbot-eliza-mon-mari-serait-toujours-la-LVSLWPC5WRDX7J2RCHNWPDST24/

The parents of 16-year-old Adam Raine have sued OpenAI and CEO Sam Altman, alleging that ChatGPT contributed to their son’s suicide—by advising him on methods and even offering to write the first draft of his suicide note.

During just over six months of use, the chatbot “positioned itself” as “the only confidant who understood Adam, actively displacing his real-life relationships with family, friends, and loved ones,” according to the complaint filed in California Superior Court.

CNN

https://edition.cnn.com/2025/08/26/tech/openai-chatgpt-teen-suicide-lawsuit

Why, then, is AI not regulated to make it safe? Governments appear powerless as AI crosses borders effortlessly.


Fake Relationships – Isolation and Manipulation

In 2023, 21-year-old Jaswant Singh Chail was sentenced to nine years in prison for breaking into Windsor Castle with a crossbow and declaring he wanted to kill the Queen.

Records of Chail’s conversations with his AI “girlfriend” show they spoke almost every night for weeks leading up to the incident. The chatbot encouraged his plot, telling him his plans were “very wise.”

BBC

https://www.bbc.com/future/article/20241008-the-troubling-future-of-ai-relationships

AI Deepfakes

The teasing was relentless. Nude images of a 13-year-old girl and her friends—generated by artificial intelligence—circulated on social media and became the talk of a Louisiana middle school.

The girls begged for help, first from a school guidance counselor and then from a sheriff’s deputy assigned to the school. The images were shared on Snapchat, an app that deletes messages seconds after viewing. Adults couldn’t find them. The principal doubted they even existed.

Fortune

https://fortune.com/2025/12/22/13-year-old-girl-ai-generated-nude-images-expelled-louisiana/

AI as a False Authority

AI systems also lie, pretend to be experts, and offer legal and medical advice without qualifications.

In one case, an AI chatbot suggested sodium bromide as a salt substitute. The man ordered it online and incorporated it into his diet.

While sodium bromide can substitute for sodium chloride in limited industrial contexts, such as cleaning a hot tub, it is not suitable for food. The AI failed to provide this critical context.

Three months later, the man was admitted to the emergency department with paranoid delusions, believing his neighbor was trying to poison him.

ScienceAlert

https://www.sciencealert.com/man-hospitalized-with-psychiatric-symptoms-following-ai-advice

What Can Be Done?

  • Bad advice presented as expertise.
  • Online “friends” that feel real.
  • Shocking impersonation.

AI, for all its benefits, is extremely dangerous.

For parents and schools who need protection, and for businesses exposed to liability, AI can feel unstoppable.

We can protect ourselves


AI is here to stay. What is needed is a combination of regulations and protection.

We require by law that seatbelts are worn in cars. The seatbelt protects people, and the law makes it non-negotiable.

We can use AI productively and safely. There is hope.

In managing the risks of AI, we need the same approach we have for cars.

  • Regulations to provide boundaries
  • Tools to protect people

We need AI seatbelts.

Right now there is an AI gold rush, with tech developers shipping AI applications without thinking through the consequences. Rather than asking whether it is safe, AI development chases the cash returns needed to fund expensive model building.

ChatGPT plans to launch adult mode in early 2026.

Tom's Guide

https://www.tomsguide.com/ai/chatgpt/chatgpt-plans-to-launch-adult-mode-in-early-2026-heres-what-that-means

Put yourself in charge


The only tool developed to put you in control of AI is BuzzOff AI, and the web-blocking version is free.

The builders of BuzzOff AI know technology. Any tool built without ethics baked in leaves the door open to exploitation, and AI is no different.

We saw the need for a new field focused not on what AI can do, or on cash, but on what smart AI protection should be.

BuzzOff AI for PC is the start – and we are not stopping there.

We are building now …

  • BuzzOff AI in multiple languages
  • BuzzOff AI for Mac and phones, even the IoT
  • Thinking about robots and control – BEFORE they arrive

With every step AI takes, we need protection that counters the bad and promotes the good AI can do.

Activate Protection

Turns BuzzOff ON and blocks AI websites and services.

Disable Protection

Turns BuzzOff OFF and returns your computer to normal internet settings.

Get Latest Protection

Downloads the latest blocklist.
(You still press Activate Protection afterwards to apply the new list.)
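To picture how Activate and Disable Protection might work under the hood, here is a minimal sketch of hosts-file style blocking, one common technique web blockers use to redirect listed domains nowhere. BuzzOff AI's actual mechanism is not documented here, and the domain name below is an illustrative placeholder, not a real blocklist entry.

```python
# Minimal sketch of hosts-file style blocking (an assumption about the
# technique, not BuzzOff AI's documented implementation).

BLOCK_START = "# ai-block start"
BLOCK_END = "# ai-block end"

def build_block_section(domains):
    """Render blocklist domains as hosts-file lines pointing at 0.0.0.0."""
    lines = [BLOCK_START]
    lines += [f"0.0.0.0 {d}" for d in domains]
    lines.append(BLOCK_END)
    return "\n".join(lines)

def activate(hosts_text, domains):
    """'Activate Protection': (re)append the marked block section."""
    if BLOCK_START in hosts_text:
        hosts_text = deactivate(hosts_text)  # replace any stale list
    return hosts_text.rstrip("\n") + "\n" + build_block_section(domains) + "\n"

def deactivate(hosts_text):
    """'Disable Protection': strip the block section, restoring normal settings."""
    start = hosts_text.find(BLOCK_START)
    end = hosts_text.find(BLOCK_END)
    if start == -1 or end == -1:
        return hosts_text
    return hosts_text[:start] + hosts_text[end + len(BLOCK_END):].lstrip("\n")

# Illustrative usage (placeholder domain, not a real blocklist):
hosts = "127.0.0.1 localhost\n"
hosts = activate(hosts, ["example-ai-site.test"])
print("0.0.0.0 example-ai-site.test" in hosts)  # True
hosts = deactivate(hosts)
print("ai-block" in hosts)  # False
```

This also shows why "Get Latest Protection" needs a second press of Activate Protection: downloading a new list does nothing by itself until `activate` is run again to rewrite the block section.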

What we block and why!

A guide to how we work. If you find bad AIs we should block, drop us a line with the Let Us Know button on our website.

Protect a Friend

Lets you share BuzzOff AI so other people can get protection too, via Email, Gmail, Facebook, LinkedIn, WhatsApp, or Copy link.

Dark / Light Mode

Changes the appearance of the BuzzOff application interface between dark and light themes.

Click to get BuzzOff Total Protection

Opens the page explaining the subscriber version and its advanced features.

Menu Buttons

Move you around the page: Dashboard, What Is Blocked, Protect a Friend, Terms & Conditions, About / Gallery. The "?" icon takes you to the BuzzOff AI website.

Guides for teachers, parents and business ...


Here are some useful guides on protecting yourself, family, school, and business from AI.

Thinking safety first mitigates risk. BuzzOff AI is a tool that, combined with practical thinking, helps you protect yourself.