Types of AI

By Roland Carn

It helps, sometimes, to sort things into boxes—not to limit them, but to understand their shape. Over the years, I’ve kept many kinds of boxes: toolboxes for woodworking, storage boxes for bits of code I couldn’t bear to throw away, even old biscuit tins full of spare fuses and forgotten screws. So when people talk about types of AI, I find it useful to imagine different kinds of boxes—or perhaps better, different drawers in the same workbench. Each holds something quite distinct.

The first drawer contains what’s called narrow AI—and this is where most of today’s technology lives. These are systems built to do one thing well. A spam filter that keeps junk out of your inbox. A voice assistant that sets your timer or plays your favourite song. A translator that turns French into English, more or less. Even something as grand as Deep Blue, the chess machine, belongs here. It could beat Kasparov, but hand it a shoelace and it wouldn’t know what it was looking at, let alone what to do with it. Like a specialised chisel, it’s excellent for one job—and hopeless at the rest.

Then there’s general AI, sometimes called AGI or strong AI. This would be the kind of system that could turn its hand to almost anything—reason, plan, learn, adapt—just as we humans do, often without even noticing. It would be able to switch domains, solve new problems, maybe even understand what problems are. But at present, it’s still a vision. No machine today can match the flexibility or subtlety of a human brain, let alone our ability to draw on emotion, instinct, or memory in the blink of an eye. Our minds work on a platform of messy electrochemistry—sparks and hormones and years of lived experience. AI, for now, runs on code and silicon.

And then there’s the deep drawer at the bottom—the one that’s locked, or perhaps still empty. That’s superintelligent AI, a concept that sits somewhere between serious speculation and science fiction. It refers to machines that might someday surpass us in every cognitive domain—not just faster, but smarter, more creative, more insightful, maybe even more ethical. Some see it as our greatest hope. Others, our downfall. I see it, for now, as a story still unwritten.

There’s another way to organise the drawer labels, based not on ambition but on behaviour. Some AI systems are reactive—they respond, but don’t learn. Deep Blue was like this. It had no memory, just an ability to calculate possibilities in the present moment.

Others, like self-driving cars, have limited memory. They learn from past experience, adjust to road conditions, and make decisions based on accumulated data. It’s not wisdom, but it’s more than reflex.

A more speculative kind of AI—still in the realm of research—is often called theory of mind. These systems would understand not just facts or commands, but beliefs, intentions, emotions. In other words, they would relate. But as of now, this remains theoretical. No machine knows what it feels like to be misunderstood.

And at the furthest edge, we find self-aware AI—systems that would not only perceive but possess some inner sense of identity. These are not just unbuilt; they may not even be buildable. We don’t yet know how to define consciousness, let alone replicate it.

As for ChatGPT? It fits neatly into the narrow AI drawer and, by behaviour, into the limited-memory kind—though how much it retains depends on how it’s configured. It can write a poem, answer a question, even help brainstorm your next chapter—but it doesn’t remember your last conversation, and it certainly doesn’t know you, at least not in the way a person would. It’s like a very articulate carpenter’s clamp—responsive, useful, and strangely companionable, but without hands, heart, or history.

Still, it’s remarkable what a tool can do in the right hands.