AI Myths

By Roland Carn

If you spend enough time around a new tool—or a new idea—you start to hear the same misunderstandings repeated, like loose floorboards that creak in familiar places. AI is no exception. It’s in our headlines, our homes, and increasingly our conversations, yet it’s often cloaked in confusion.

Let’s take a look at a few of the more common myths, not to scold or lecture, but to clear a bit of space for better questions.

1. “AI is Conscious or Sentient”

This one comes up a lot, especially with tools like ChatGPT. The language flows smoothly. The replies seem thoughtful. You ask a question, and something answers.

But here’s the truth: it’s all illusion. There’s no mind behind the words, no inner voice, no flicker of awareness. What you’re seeing is mathematics—complex pattern recognition, not understanding.

I’ve worked with machines long enough to know that functionality can look a lot like intelligence. But a lathe that cuts to a thousandth of a millimetre doesn’t know it’s making furniture. And ChatGPT, clever as it is, doesn’t know it’s having a conversation.

It’s tempting to forget that. But forgetting it can lead us into dangerous territory.

2. “AI Will Replace Humans”

Another old fear in new clothing. And yes, jobs are changing. Tasks once done by people—sorting forms, writing summaries, translating menus—are now done, in part, by machines.

But replacement isn’t the whole story. If history teaches us anything, it’s that automation changes the shape of work more often than it erases it. New roles emerge. Old ones adapt. The wrench doesn’t replace the mechanic; it becomes part of the trade.

The real challenge lies in transition. Retraining, rethinking, and revaluing work that can’t be easily automated—work involving empathy, judgement, and human presence.

3. “All AI Is Like ChatGPT”

Since ChatGPT became part of everyday conversation, it’s easy to think all AI talks—or writes, or makes jokes about ducks. But language models are only one branch on a much larger tree.

Some AIs sort images. Some power robots. Others optimise logistics, detect fraud, recommend music, or run search engines. Each one is built for a specific domain, with its own strengths and blind spots.

Thinking all AI is conversational is a bit like thinking all machines are radios. Useful metaphor, but not the whole picture.

4. “AI Is Objective and Neutral”

This one has an air of authority to it—machines aren’t emotional, so surely they’re fairer? Unfortunately, that doesn’t hold up.

AI learns from us. And we, as a species, are not always neutral. Data carries the weight of history, culture, assumption, and omission. If the data is biased, the system can amplify those biases—quietly, efficiently, and at scale.

I’ve seen how easy it is to assume that precision equals fairness. But fairness requires something deeper: reflection, correction, and oversight. Machines can’t do that on their own.

5. “AI Knows Everything”

Here’s a common trap: the machine speaks with confidence, so we assume it’s right. But confidence isn’t knowledge.

Most AI systems—ChatGPT included—don’t browse the internet in real time. They’re trained on data up to a certain point, and they don’t know what’s changed since. They don’t sense the world, remember your last conversation, or cross-check what they say against lived experience.

They can sound wise, but they can also be wrong—and sometimes spectacularly so. That’s why human judgement still matters. The tool offers a draft, not a verdict.

6. “AI Is Inherently Dangerous”

And finally, the myth of the rogue machine—the evil robot, the self-aware overlord, the cold logic that turns against its makers.

Now, I’m not dismissing long-term risks. They deserve serious thought. But most of the real dangers today come not from AI itself, but from how we use it—or fail to regulate it. Surveillance, disinformation, discrimination—these aren’t science fiction. They’re already here, embedded in systems designed by fallible humans.

Fear has its place, but so does discernment. If we treat AI as a monster, we might miss the real problems hiding in plain sight: bad design, weak oversight, and misplaced trust.

AI isn’t magic. And it isn’t malevolent. It’s a tool—astonishing in its capability, flawed in its foundations, and shaped by the choices we make.

The myths are understandable. But the reality, in many ways, is more interesting—and more within our power to influence.