AI: The Pros and Cons
By Roland Carn
The Benefits of AI
For all the anxiety that sometimes surrounds AI, I find it helpful to pause and take stock of what it actually does well. Like any tool I’ve worked with—whether a plane, a spreadsheet, or a soldering iron—its value lies not just in its power, but in how thoughtfully it’s applied. And in the right setting, AI can be remarkably helpful.
Efficiency and Speed
If you’ve ever watched a skilled craftsperson work—someone who knows their tools so well they barely seem to think about them—you’ll understand the value of efficiency. AI brings that kind of fluency to tasks that involve huge volumes of data, tight deadlines, or both.
A well-tuned AI system can scan a stack of loan applications faster than you can boil a kettle. It can comb through research papers, sort legal contracts, or flag anomalies in financial records—all without losing focus or needing a coffee break.
In hospitals, this kind of speed can quite literally save lives. A timely scan analysis, an early warning of a deteriorating condition—AI here acts like an assistant with perfect recall and infinite stamina. In transport and logistics, it shuffles routes and timetables in real time. It’s not glamorous, but it gets things where they need to be, when they need to be there.
Accuracy and Precision
AI is particularly good at spotting patterns—especially ones we might miss. Whether it’s detecting a tiny irregularity in a medical scan, or noticing a suspicious spending pattern that hints at fraud, machines trained on the right data can often outperform us in accuracy.
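To make the pattern-spotting idea concrete, here is a deliberately tiny sketch of the simplest possible anomaly check: flag any transaction that sits far from the typical spending level. Real fraud systems are vastly more sophisticated; the function name, the sample data, and the two-standard-deviation cut-off are all illustrative assumptions, not anyone's production method.

```python
from statistics import mean, stdev

def flag_anomalies(amounts, threshold=2.0):
    """Toy anomaly check: flag amounts more than `threshold`
    standard deviations from the mean of the batch."""
    mu = mean(amounts)
    sigma = stdev(amounts)
    return [a for a in amounts if abs(a - mu) > threshold * sigma]

# Ordinary spending with one out-of-character purchase:
spending = [12.5, 9.9, 14.0, 11.2, 10.8, 13.1, 950.0]
print(flag_anomalies(spending))  # [950.0]
```

Even this crude rule picks out the odd transaction; the machine's advantage is doing this tirelessly, across millions of records, with far richer notions of "odd" than a single average.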
In my own work restoring furniture, precision made the difference between a clean repair and a ruined joint. It’s the same with AI: when applied correctly, it reduces errors, spots defects, and helps maintain consistency at scale. In translation, too, the results are steadily improving—turning once-clunky guesses into smoother, more natural phrases.
Accessibility and Inclusion
One of the quieter but more profound contributions of AI is the way it opens doors. Voice assistants let people with limited mobility control their environment. Automatic subtitles help the hearing-impaired access video content. Real-time translation tools help us bridge language barriers.
Used thoughtfully, these tools can broaden participation in education, work, and society. They don’t just make things faster or cheaper—they help make life more equal. And that, to me, is one of the most hopeful aspects of the technology.
Discovery and Innovation
If you’ve ever had to sift through thousands of pages to find one useful piece of information, you’ll appreciate this one. AI can take on the heavy lifting of data analysis, leaving the human mind free to do what it does best: imagine, connect, create.
In science, this means new insights—from protein structures to climate models to potential planets. In pharmaceuticals, it speeds up the long road from lab to trial. It’s not doing the dreaming, but it’s clearing space for those who do.
Personalisation and Experience
We all like to feel seen—and AI, in its own way, can make that happen. A learning app that adjusts to your pace. A film recommendation that fits your mood. A shopping site that suggests something you genuinely need, rather than something it wants to sell you.
Personalisation, when done with care, can make digital life feel a little more human. Like walking into a familiar shop where the person behind the counter knows your name—not because you told them, but because they remembered.
Safety and Risk Reduction
Finally, there’s the question of where we send people—and where we might be better off sending machines. AI now powers systems that operate in deep seas, inside nuclear reactors, on other planets. It keeps watch on infrastructure, looking for cracks in bridges or weak spots in pipelines before they turn into disasters.
In cars, driver-assist systems reduce collisions and ease decision-making. In factories, predictive maintenance can stop a small fault from becoming a dangerous failure. In these cases, AI is a kind of sentinel—quiet, constant, and surprisingly effective.
These aren’t just conveniences. In many cases, they’re quiet revolutions—nudging us toward a world that is faster, safer, more inclusive, and a little more responsive to the needs of those within it.
Of course, every tool has its edge. But it’s worth noting, now and then, just how many of these benefits are already with us—not as theory, but as lived experience.
So let’s look at the other side of the ledger.
AI: Challenges and Concerns
It’s tempting, when a new tool starts working wonders, to focus only on the gains. I’ve done it myself—stood back from a freshly restored piece of furniture and admired the surface without noticing the hidden crack beneath. But with something as powerful as AI, we can’t afford to ignore the underside.
For all its promise, AI comes with challenges—some practical, some philosophical, and many already knocking on the door.
Bias and Fairness
One of the first things you learn when working with materials—wood, data, people—is that nothing arrives in perfect condition. AI systems learn from patterns in data, and that data often carries the scars of old habits and hidden prejudices. If you feed an algorithm the hiring records of a company that’s long preferred men over women, it may quietly learn to do the same. Not because it’s malicious, but because it mirrors what it sees.
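The mirroring is mechanical, and a toy example makes it plain. The records, the majority-vote "model", and the field names below are all hypothetical, invented purely to show how a system trained on skewed outcomes reproduces them without any malice in the code.

```python
from collections import Counter

# Hypothetical hiring records: (gender, qualified, hired).
# This firm hired qualified men but passed over equally qualified women.
history = [
    ("M", True, True), ("M", True, True), ("M", False, False),
    ("F", True, False), ("F", True, False), ("F", False, False),
]

def train(records):
    """Toy 'model': for each (gender, qualified) pair, predict the
    majority hiring outcome observed in the training data."""
    votes = {}
    for gender, qualified, hired in records:
        votes.setdefault((gender, qualified), Counter())[hired] += 1
    return {key: counts.most_common(1)[0][0] for key, counts in votes.items()}

model = train(history)
print(model[("M", True)])  # True  -- qualified man: hire
print(model[("F", True)])  # False -- equally qualified woman: reject
```

Nothing in that code mentions gender as a rule; the discrimination arrives entirely through the data. That is why fixing bias means fixing what we feed these systems, not just the systems themselves.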
Facial recognition systems have been less accurate for people with darker skin. Predictive policing tools sometimes reinforce the very patterns they’re meant to address. These aren’t just technical quirks—they’re reflections of the world we’ve built.
And fixing them isn’t just about tweaking the code. It means involving more diverse voices in the development process, setting clear ethical guardrails, and building systems that are open to scrutiny.
Transparency and Accountability
One of the great frustrations of modern AI is that it often works like a black box: you give it an input, it gives you an output, and what happens in between is... unclear. That’s fine if it’s recommending a film. It’s less fine if it’s approving a mortgage or diagnosing a tumour.
If a decision goes wrong, who do you ask? Who’s responsible? The developer who wrote the model? The company that deployed it? The user who relied on it?
Our laws haven’t quite caught up. And until they do, accountability may feel like vapour—present one moment, gone the next.
Job Displacement
Whenever a new machine enters the workshop, there’s a ripple. Some tasks become easier. Others disappear. AI, by its very nature, excels at routine and repetition—the sort of work many people still rely on for a living.
Jobs in manufacturing, customer service, and even transport are already shifting. At the same time, new roles will emerge—just as they did with the printing press or the computer. But they won’t always appear in the same places or suit the same workers.
The challenge isn’t just economic—it’s human. We need to help people move from one kind of work to another, with dignity and support, not slogans and handshakes.
Surveillance and Privacy
AI has made it possible to track people in ways that would have seemed unthinkable not long ago. From facial recognition to behavioural profiling, systems now exist that can watch, record, and infer with chilling accuracy.
Some of these tools are used to keep people safe. Others are used to control, discriminate, or intimidate. The line between protection and intrusion is thin—and often crossed without warning.
We need clear laws, public awareness, and genuine democratic oversight to keep this power from running ahead of our values. Because once the infrastructure is in place, it’s very hard to dismantle.
Misinformation and Manipulation
I used to think of disinformation as a human problem—people twisting the truth for gain. But now, machines can do it at scale. They can write persuasive lies, generate fake videos, and impersonate real voices. And they can do it so well that we may struggle to tell what’s real.
AI doesn’t have motives, but people do—and the tools are now within reach of anyone with a bit of money and intent. We’ve seen social media flooded with falsehoods, bots pretending to be citizens, and deepfakes that blur reality’s edges.
The answer isn’t censorship, but vigilance. Better education, smarter platforms, and a deeper sense of responsibility about how we share and trust what we see.
Dependence and Deskilling
I once met a young apprentice who could sand a chair but had never sharpened a blade. It wasn’t his fault—the tools had changed. But something vital had been lost.
As AI takes on more tasks, there’s a risk we’ll lean too heavily on it. Students may use it to write their essays without understanding the content. Drivers may rely on autopilot until they forget how to react in a crisis. It’s not just about losing skills—it’s about losing judgement.
We need to design systems that support our intelligence, not substitute for it. That complement our strengths and leave space for learning.
Existential and Long-Term Risks
And then there’s the horizon—the deep, speculative questions about what happens if AI becomes not just useful, but autonomous in ways we can’t predict. Could a superintelligent system act in ways that are harmful? Could it misunderstand or override human values?
These sound like science fiction, and maybe they are—for now. But serious researchers are already working on what’s called alignment: making sure that AI, as it grows more capable, continues to act in ways that reflect what we care about.
It’s not paranoia. It’s preparation.
So yes, there are real concerns. But concerns aren’t reasons to stop—they’re reasons to pay attention. Like any powerful tool, AI demands care in its design, courage in its governance, and humility in its use.
If we want to keep it aligned with our better instincts, we’ll need to bring those instincts to the table—now, not later.