A critical approach to AI
By Roland Carn
AI isn’t like a new kettle or a more efficient engine. It doesn’t sit neatly in one corner of life, doing one job. It slips into the gaps between things—between people and services, between questions and answers, between decisions and consequences.
And because of that, we can’t afford to talk about it only in technical terms, or leave it to the experts. We need conversations that are wider, deeper, and slower. We need, in short, a public reckoning—not driven by fear or hype, but by thoughtfulness.
1. Learning to Read the Tool
The first step is to build a kind of literacy—an ability to read, question, and understand what AI really is.
This isn’t just about knowing what a neural network does. It’s about helping people understand how these systems shape their experiences. Schools, libraries, newsrooms, and neighbourhood centres can all play a part in this. So can parents at the dinner table, or friends over coffee.
We already teach children to ask, “Is this source reliable?” We now need to teach them to ask, “Is this output trustworthy?” The goal isn’t to turn everyone into a programmer—but to equip them to stay curious, cautious, and engaged.
2. Cutting Through the Jargon
One of the great barriers to good conversation is bad language—and AI, unfortunately, has plenty of it. Terms like “large language model” or “algorithmic fairness” may mean something to developers, but they can feel opaque or alienating to everyone else.
But these ideas aren’t beyond reach. They can be explained. They can be demystified. A neural network finds patterns in data much the way we spot faces in clouds. Fairness in algorithms can be discussed just as we’d talk about fairness on a football pitch.
Joseph Weizenbaum, who created ELIZA, warned long ago that people might start treating machines as if they understood. His advice? Use plain language. Stay grounded. Keep asking what’s at stake.
3. Making Room for More Voices
AI systems shape society. They shape opportunities, access, and freedoms—and sometimes their denial. That means society must help shape AI in return.
This can’t just be a conversation among tech firms and policymakers. We need spaces—town halls, citizen juries, online forums—where people from all walks of life can weigh in. Especially those who are often left out: people from marginalised communities, those on the receiving end of automated decisions, those whose lives are most at risk of being shaped without their input.
Participation isn’t a courtesy. It’s a necessity.
4. Watching the Watchers
No tool should be beyond scrutiny. And AI, given its scale and complexity, demands independent oversight—by researchers, journalists, civil society groups, and ordinary users who ask inconvenient questions.
We need transparency reports that aren’t buried in legalese. Audits that are real, not performative. Protections for whistleblowers who raise the alarm. A culture that doesn’t just celebrate innovation, but also welcomes reflection and correction.
Trust, after all, isn’t something you engineer. It’s something you earn—and maintain.
5. Honouring Different Ways of Knowing
AI is often framed as neutral, but it never is. It reflects the values of its makers, the assumptions in its training data, and the priorities of those who fund it. And these don’t always align across cultures, or even across streets.
A hiring algorithm built in one country may fail to account for the credentials, customs, or life rhythms of another. A mental health chatbot may miss the nuances of grief, joy, or politeness in a different language or faith.
We need to make room for these differences—not just technically, but respectfully. Not every solution will travel well. And not every question has a single answer.
6. Asking the Harder Questions
Finally, we must learn to ask not just “Can this be done?” but “Should it?” And not just “What’s the benefit?” but “Who benefits?”
It’s tempting to treat AI as a force of nature—inevitable, unstoppable, outside our control. But it’s not. It’s a human endeavour. Which means we get to shape it.
That starts with conversation. With humility. With curiosity. With the courage to slow down, listen harder, and leave space for doubt.
AI doesn’t just belong to scientists or CEOs. It belongs to all of us. And if we want it to reflect the best of our shared humanity, we’ll need to meet it—not with blind trust or blanket fear—but with informed, ongoing, deeply human discussion.