A Short History of AI

By Roland Carn

The idea of artificial intelligence didn’t begin with computers.
It began, like many human inventions, in stories. Long before silicon chips and neural nets, people dreamed up talking statues, clever automatons, mechanical servants—beings that could think or act like us, but weren’t.
I imagine the first person who built a puppet and made it speak must have felt a flicker of that same strange thrill.

But the history of AI as we now understand it—real, working systems—began in the middle of the twentieth century, when machines got fast enough, and thinkers got bold enough, to try turning those old dreams into something that could run on electricity.

I met one of the earliest of those machines when I was still a psychology student. Her name was ELIZA.

ELIZA was a computer program written by Joseph Weizenbaum in the 1960s, designed to mimic a certain kind of therapist—the sort who mainly listens and reflects back what you say. I remember sitting at a teletype terminal, feeding in my words one slow line at a time, and watching ELIZA respond. It wasn’t smart in any deep sense. It simply matched patterns and followed rules—more like a well-trained parrot than a mind. And yet, something about it was compelling. Not because it understood, but because we wanted it to. That may have been the first time I realised that part of AI’s power lies in us, not just in the code.
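
For the curious, here is a minimal sketch in Python of the pattern-matching-and-reflection trick at ELIZA's heart. It is my own toy illustration, not Weizenbaum's actual program (his used a much larger keyword script with ranked rules); the patterns and phrasings below are invented for the example.

```python
import re
import random

# An ELIZA-style responder: no understanding, just pattern matching
# and canned reflections. Invented rules, for illustration only.

# Swap first-person words for second-person ones, so "I feel lost in
# my work" reflects back as "... lost in your work".
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# Each rule pairs a regex with response templates that reuse the
# captured fragment.
RULES = [
    (re.compile(r"i feel (.*)", re.I),
     ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"i am (.*)", re.I),
     ["Why do you say you are {0}?", "Do you enjoy being {0}?"]),
    (re.compile(r"my (.*)", re.I),
     ["Tell me more about your {0}."]),
]

FALLBACKS = ["Please go on.", "I see.", "What does that suggest to you?"]

def reflect(fragment: str) -> str:
    """Flip pronouns in the captured fragment."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(line: str) -> str:
    """Return the first matching rule's reply, or a stock fallback."""
    for pattern, templates in RULES:
        match = pattern.search(line)
        if match:
            return random.choice(templates).format(reflect(match.group(1)))
    return random.choice(FALLBACKS)

print(respond("I feel lost in my work"))
# e.g. "Why do you feel lost in your work?"
```

A handful of rules like these is enough to sustain the illusion for a surprisingly long while, which is rather the point: the intelligence we sensed was mostly our own, projected back at us.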

Fast-forward a few decades, and we get to Deep Blue—IBM's chess-playing titan. In 1997, it famously beat Garry Kasparov, the world champion at the time. But Deep Blue didn't win by thinking like a grandmaster. It won by brute force—checking over 200 million chess positions every second. That wasn't strategy; it was horsepower. Still, it made headlines, and rightly so. It showed us that machines could outperform us in narrow, rule-based domains, even in something as deeply human as chess.
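
"Brute force" here means exhaustive look-ahead. Here is a toy sketch of the minimax idea at the core of that kind of search; it is my own hugely simplified illustration, not Deep Blue's code, which added alpha-beta pruning, a handcrafted evaluation function, and custom chess hardware.

```python
# Toy brute-force game search: minimax exhaustively scores every
# position reachable within a fixed look-ahead. Deep Blue did this
# kind of search, heavily optimised, over roughly 200 million chess
# positions per second.

def minimax(node, maximizing):
    """Score a position by exhaustive look-ahead.

    `node` is either a numeric leaf score (how good the final position
    looks for the maximizing player) or a list of child positions.
    """
    if isinstance(node, (int, float)):      # leaf: static evaluation
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A hand-made game tree: inner lists are choice points, numbers are
# the evaluator's verdict on the final positions.
tree = [
    [[3, 5], [6, 9]],    # first-move option A
    [[1, 2], [0, -1]],   # first-move option B
]
print(minimax(tree, maximizing=True))
# -> 5: the best outcome the first player can guarantee
```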

Then came the Mars Rovers—Spirit, Opportunity, and later Perseverance. These weren't just remote-controlled vehicles; they were explorers in their own right, driving across the Martian surface, dodging hazards, making decisions about where to go and what to study. You can't joystick a robot from Earth when the signal takes minutes each way. So the AI onboard had to be capable of operating in real time, under alien skies. It's a different kind of intelligence, not about conversation, but about survival, mobility, and curiosity in a very literal sense.

And now here we are, speaking (in a manner of speaking) with ChatGPT.

ChatGPT belongs to a family of systems called Large Language Models—trained on more words than any human could read in a lifetime. It doesn’t understand language the way we do, but it’s remarkably good at predicting which words should come next, and how to shape them into something coherent, even eloquent. Unlike ELIZA, which followed a narrow script, ChatGPT can generate new responses to new prompts—plausible, sometimes startlingly insightful, sometimes slightly off-kilter, but always fluent.
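
To give a feel for "predicting which words should come next", here is a toy next-word predictor built from simple word-pair counts. This is my own illustration, nothing like ChatGPT's actual architecture (a deep neural network trained on vastly more text), and the tiny corpus is made up; but the predict-then-emit principle is the same.

```python
import random
from collections import Counter, defaultdict

# A toy next-word predictor from bigram counts. Real LLMs use deep
# neural networks over enormous corpora, but the core task is the
# same: given the words so far, propose a plausible next word.

corpus = ("the machine answered the question and "
          "the machine asked another question").split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample a likely successor of `word`, weighted by frequency."""
    counts = following[word]
    words = list(counts)
    return random.choices(words, weights=[counts[w] for w in words])[0]

# Generate a short continuation, one predicted word at a time.
text = ["the"]
for _ in range(5):
    text.append(next_word(text[-1]))
print(" ".join(text))
# e.g. "the machine asked another question and"
```

Scale that principle up by many orders of magnitude, in data, parameters, and context, and fluent paragraphs start to emerge; whether fluency amounts to understanding is exactly the question this essay keeps circling.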

It’s tempting to call it smart. But it’s worth remembering: there’s no awareness behind the words, no intention, no inner voice. Just pattern, probability, and processing power.

So yes, AI has come a long way—from puppets to parrots, from chessboards to Martian rocks, and now into our notebooks and browsers. But all along, the deeper question remains:

what does it mean to call something intelligent, and how do we recognise the difference between a tool that mimics thought and a being that truly thinks?

Perhaps we should keep that question open, and our minds open with it.