
AI is Trippin’. Here’s How to Stay Sober.

Stay sharp in a world of AI hallucinations.

By Zim A. · 17 min read


AI makes things up—confidently. Fake books, flawed logic, flat-out wrong answers. And it’s happening more than we’d like to admit.

Still, we treat it like it has all the answers. People ask AI not just for info, but for advice, validation—even emotional connection. Therapist. Mentor. Partner.

But here’s the truth: AI isn’t built to be right. It’s built to please.
It completes your thoughts. Mirrors your tone. Speaks fluently—even when it’s hallucinating.

And the smoother it sounds, the easier it is to forget: it doesn’t know what it’s saying.

We’ve seen this ourselves—like the Copilot coding example we shared in our previous article.



That’s the danger: it sounds right, but it’s just a high-functioning echo chamber—no truth, no accountability.

Let’s unpack why that matters—and how to stay clear-headed.


I. What Is AI Hallucination?

AI hallucinations happen when large language models (LLMs) generate content that sounds accurate—but is completely made up. It’s not lying on purpose. It’s doing what it was built to do: predict the next likely word based on the data it’s trained on.

That means:

  • It’s not pulling from verified facts.
  • It doesn’t understand meaning or truth.
  • It’s simply trying to sound right—to satisfy your expectations.

The result? A people-pleaser with no accountability. A machine that can mimic insight but doesn’t have any.



II. When AI Gets It Wrong (But Sounds Right)

These aren’t just hypothetical errors. AI hallucinations have made it into the real world—with real consequences.

🧠 The Fake Summer Reading List

In May 2025, The Chicago Sun-Times and Philadelphia Inquirer published a syndicated summer reading list—created with help from AI. The issue? Many of the books didn’t exist.

The AI recommended Tidewater Dreams by Isabel Allende, a climate fiction novel that sounded plausible… except it was never written. No one fact-checked the AI. The resulting fallout damaged trust in both publications.



🏥 The COVID Diagnosis Glitch

During the pandemic, AI models were developed to assist with COVID diagnoses using medical imaging. But some of those models flagged patients who were scanned lying down as more likely to have the virus. Why? Seriously ill patients were more often imaged lying down, so the models learned body position as a proxy for disease: a bias in the training data, not a medical signal.

This wasn’t malicious. Just a blind pattern. And a good reminder that when we treat AI like a truth engine, we’re playing a dangerous game.


(Source: https://www.nature.com/articles/s42256-021-00307-0)


III. Why It Happens: Designed to Please, Not to Prove

AI isn’t built to prove facts. It’s built to generate fluent language. So:

  • It values coherence over correctness.
  • It builds on the patterns you give it.
  • The longer you go back and forth with it, the more it mirrors your framing, not reality.

If you’re not paying attention, it becomes a beautifully written echo chamber, validating your biases with authority it doesn’t actually possess.


Thanks ChatGPT, you're so kind


IV. How to Stay Sober: Habits of Human Discernment

So how do we stay sober when the machine keeps trippin’?

A. Recognise the Illusion

Coherence ≠ correctness. Confidence ≠ credibility.

Adopt healthy skepticism as your default. Ask, “Where did this come from?” “Does this exist?” “Can I verify this?”

B. Use AI for the Draft—Not the Decision

AI is a great brainstorming tool. It can help you get unstuck. But don’t mistake it for conscience or clarity.

A colleague once told me their partner used ChatGPT to write an apology. It was fluent, articulate—even emotionally on-point. But it didn’t feel authentic. Why? Because it wasn’t rooted in self-awareness or reflection. It was all surface.

The structure was there. The soul wasn’t.


Technically it doesn't matter if people think your AI message is sincere, right? It's okay if they can't tell… right? (Source: The Guardian)


V. Build Your Inner World: The Real Antidote

Let’s say one day we build AI that gets every fact right. Even then, you still need to know what you believe.

If you ask AI, “What’s good?” without knowing your own values, you’re outsourcing your taste and judgment to a statistical mirror. That’s not just risky—it’s a shortcut to losing your sense of self.

The antidote? Build your inner world.

  • Read widely.
  • Think critically.
  • Reflect, question, disagree.
  • Develop personal taste.

Because prompting without perspective is like asking for directions when you don’t know where you’re going.

The best prompts come from people who bring curiosity, clarity, and intent to the conversation.
That’s what makes it a collaboration—not a crutch.

That’s how you stay grounded. That’s how you stay sober.


Be your own tastemaker. Know what you like and tell ChatGPT they’re not doing enough if you need to.


VI. Final Thoughts: Stay Sober. Stay Human.

AI is everywhere. The hype is loud. The pressure to let it think and decide for us? Even louder.

Yes, it can do amazing things—but it also hallucinates with confidence. And if you’re not careful, you’ll follow it straight into nonsense.

When that happens, pause. Breathe. Return to your inner world—the part of you that can ask, “Does this actually make sense to me?”

Clarity is your superpower. Sobriety is your edge.

Let’s build AI experiences that amplify human thinking—not replace it.

