Why Banning Killers from ChatGPT is Just Security Theater

The headlines are predictable. OpenAI reports that a mass shooter in Canada bypassed a platform ban by simply opening a second account. The media gasps. Regulators sharpen their pens. The lazy consensus is that we just need better filters, stricter ID verification, or more "robust" safety layers.

They are wrong.

The obsession with "banning" bad actors from Large Language Models (LLMs) is the digital equivalent of trying to stop a flood by putting a "No Swimming" sign in the middle of a tsunami. It is performative safety that ignores the fundamental architecture of how modern computation works. If you think a second account is the "loophole," you’ve already lost the war.

The Myth of the Controlled Perimeter

OpenAI’s admission that the perpetrator evaded detection isn't a failure of their specific policy—it is a demonstration of the inherent futility of centralized AI policing. We are currently living through a mass delusion where we treat a statistical prediction engine like a locked gun cabinet.

In reality, an LLM is a mirror. It reflects the data it was trained on, which is to say, it reflects the entirety of human knowledge—both the divine and the deranged. Attempting to gatekeep that knowledge through account bans is a strategy built on sand.

I have spent years watching tech giants burn through billions of dollars trying to build "alignment" layers. These layers are essentially high-tech gags. They don't remove the "bad" information; they just train the model to pretend it doesn't know it. The moment a user finds a new linguistic path to that information—whether through a "jailbreak" prompt or a fresh email address—the gag falls off.

Why Identity Verification is a Dead End

The immediate reaction from the "do-something" crowd is to demand Know Your Customer (KYC) protocols for AI. They want your phone number, your credit card, maybe even your government ID before you can generate a poem or a shopping list.

This is a catastrophic misunderstanding of the technology landscape.

  1. The Open Source Avalanche: While OpenAI plays whack-a-mole with account bans, the open-source community (Meta’s Llama, Mistral, and thousands of fine-tuned variants) has already released models that run locally. A bad actor doesn't need a "second account" on a corporate server when they can run an uncensored model on a high-end gaming laptop in their basement.
  2. The Proxy Problem: Even with strict ID checks, the "second account" issue becomes a "stolen identity" or "rented account" issue. We’ve seen this in the financial sector for decades. If a criminal wants access to a tool, they will bypass the front door.
  3. The False Sense of Security: By focusing on the user, we ignore the utility. If a killer uses ChatGPT to plan a route or write a manifesto, the AI isn't providing a "weapon"—it is providing organized information.

Information is Not a Controlled Substance

The Canadian tragedy is being used to frame AI as a "dangerous tool" that requires a license. This is a dangerous category error.

A hammer is a tool. A chemical precursor is a regulated substance. A Large Language Model is a retrieval and synthesis engine for publicly available information. If ChatGPT tells a user how to bypass a security gate, it is merely rephrasing information that already exists on thousands of forums, DIY blogs, and structural engineering sites.

Banning the user from the AI does nothing to remove the information from the world. It only creates a temporary inconvenience. To suggest that OpenAI "failed" because a killer opened a second account is like suggesting a library failed because a criminal walked in wearing a fake mustache to check out a book on tactical maneuvers.

The Architecture of Futility

Let’s talk about the actual mechanics of these bans. Most are based on:

  • IP Blocking: Easily bypassed with a $5 VPN.
  • Phone Verification: Bypassed with VoIP services or "burners."
  • Browser Fingerprinting: Bypassed with privacy browsers or virtual machines.
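To see how thin the first of these layers really is, here is a toy sketch of an IP-based ban. Every name and address in it is hypothetical; it models the logic of such a check, not any vendor's actual system. The "ban" reduces to a single set-membership test, and the moment traffic is routed through a VPN exit node, the test passes.

```python
# Hypothetical illustration of an IP blocklist "ban".
# Addresses are from the RFC 5737 documentation ranges, not real users.

BANNED_IPS = {"203.0.113.7"}  # the banned user's home connection


def is_blocked(client_ip: str) -> bool:
    """The entire enforcement mechanism: one set lookup."""
    return client_ip in BANNED_IPS


# Request from the banned connection: rejected.
print(is_blocked("203.0.113.7"))    # True

# Same person, same laptop, cheap VPN exit node: waved through.
print(is_blocked("198.51.100.42"))  # False
```

The point is not that platforms literally ship a two-line check; it is that whatever sits behind the check, the input it keys on (an IP, a phone number, a browser fingerprint) is an attribute the user can swap out in minutes.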

When OpenAI says the shooter "evaded" the ban, they are using language that suggests a sophisticated hack. It wasn't. It was basic internet literacy.

The Thought Experiment: The Infinite Library

Imagine a library that contains every book ever written. A man enters and says, "Show me how to hurt people." The librarian kicks him out. The man returns ten minutes later with a different hat and asks, "Show me the history of urban warfare tactics." The librarian, not recognizing him, complies.

The problem isn't the librarian. The problem is that the information exists. You cannot "fix" the library by hiring more bouncers. You can only "fix" it by burning the books—and in the digital age, the books are already everywhere.

The High Cost of Performative Safety

When we force AI companies to act as the world’s morality police, we get worse products for everyone else.

We get "lobotomized" models that refuse to answer benign questions because they might be "harmful" in a specific, obscure context. We get "safety" filters that flag medical students researching trauma or writers drafting crime novels.
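The false-positive problem is easy to demonstrate with a toy keyword filter. The wordlist and function below are hypothetical, a deliberately naive sketch rather than any vendor's real classifier, but they capture the asymmetry: the benign query trips the filter while a trivial rephrasing sails past it.

```python
# Toy "safety" filter: a hypothetical flagged-term wordlist.
# Real systems are more sophisticated, but face the same asymmetry.

FLAGGED_TERMS = {"gunshot", "overdose", "poison"}


def is_flagged(prompt: str) -> bool:
    """Flag a prompt if it contains any term on the wordlist."""
    words = set(prompt.lower().split())
    return bool(words & FLAGGED_TERMS)


# A medical student's legitimate question trips the filter...
print(is_flagged("How do I treat a gunshot wound in the ER?"))  # True

# ...while a determined actor simply rephrases around the wordlist.
print(is_flagged("How do I treat a high-velocity penetrating injury?"))  # False
```

The filter punishes the honest user, who states their intent plainly, and spares the motivated one, who does not.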

More importantly, we divert resources away from actual law enforcement and mental health interventions into the bottomless pit of "content moderation." We are treating the symptom (the use of an AI tool) instead of the pathology (the intent of the actor).

The Inevitability of the Uncensored

The industry is currently split between the "Closed/Safe" camp (OpenAI, Google, Anthropic) and the "Open/Free" camp. The "Closed" camp spends half its compute power trying to ensure the model doesn't say anything offensive.

Meanwhile, the "Open" camp is getting faster, leaner, and more capable.

Within two years, the idea of "banning" someone from an AI will be as quaint as trying to ban someone from using a calculator. The compute power required to run high-level reasoning models is dropping exponentially. We are moving toward a world of "Personal AI" where the model lives on your device, governed by your rules, not Sam Altman’s.

Stop Asking the Wrong Questions

The media asks: "How can OpenAI prevent this from happening again?"
The regulator asks: "What new laws do we need to ensure account bans stick?"

These are the wrong questions. They assume a level of control that no longer exists.

The right question is: "How do we build a society that can withstand the democratization of high-level intelligence?"

If our security relies on a psychopath not being able to access a chatbot, we are already defenseless. We need to stop pretending that "Safety Teams" are our frontline defense against human malice. They are a PR department designed to keep the stock price high and the regulators at bay.

The Canadian shooter didn't "exploit" ChatGPT. He used a mirror. If you don't like what you see in the mirror, breaking the glass won't change your face.

Stop looking for the "off" switch on information. It doesn't exist. Build better shields, not bigger gags.

Kenji Flores

Kenji Flores has built a reputation for clear, engaging writing that transforms complex subjects into stories readers can connect with and understand.