Misuses and abuses of AI

By Roland Carn

Misuses of AI

Any seasoned craftsperson will tell you: a tool is only as good as the hand that wields it. A chisel in a rush can ruin a dovetail. A saw used without care can cut where it shouldn’t. AI is no different.

It isn’t inherently good or bad. It doesn’t scheme or sabotage. But when poorly designed, hastily deployed, or used without proper care, it can cause real harm—and often in ways that are hard to spot until it’s too late.

Here are some of the more common ways AI goes astray—not from malice, necessarily, but from neglect, misunderstanding, or misplaced trust.

1. Over-Reliance on Automation

There’s a natural human tendency to delegate. And when a machine seems efficient—faster, cheaper, tireless—we’re tempted to let it take the reins.

But not every task is suited to automation. Take resume scanners. These systems can quickly sift through thousands of applications—but they may also reject qualified candidates based on rigid criteria or missing keywords. In the justice system, risk assessment tools are used to inform sentencing—but if they’re flawed, they can entrench unfairness rather than correct it.

The danger isn’t the tool itself. It’s the blind faith in its output. We must question, interpret, and challenge what the system tells us—especially when the outcomes shape lives.

2. Lack of Transparency

Many modern AI systems work as so-called “black boxes.” You feed in data, get a result—but the reasoning in between is hard to trace. That may be acceptable in a music recommendation app, but it’s far less acceptable in medicine, finance, or public policy.

If an algorithm denies a loan, diagnoses a condition, or affects a parole decision, people deserve to know why. If no one—not even the developer—can explain how the conclusion was reached, then accountability starts to slip through our fingers.

In these cases, opacity doesn’t just frustrate—it erodes trust.

3. Inadequate Testing and Oversight

I’ve seen what happens when a piece of furniture leaves the workshop before it’s ready—wobbly legs, ill-fitting joints, a polish that hasn’t cured. The same goes for AI.

Rushed development can lead to systems that look good in controlled tests but fall apart in the wild. Maybe the training data was incomplete. Maybe edge cases weren’t considered. Maybe no one asked, “What happens when this fails?”

When AI is deployed without careful testing and ongoing oversight, mistakes aren’t just probable—they’re inevitable. And often, the cost is borne not by the system, but by the person depending on it.

4. Reinforcement of Bias

AI doesn’t invent bias—but it can amplify it with chilling efficiency. If a model learns from a world that already contains inequality, it may replicate that inequality in decisions about hiring, education, housing, or healthcare.

An algorithm might recommend fewer advanced courses for students from under-resourced schools—not because it’s prejudiced, but because it reflects the past without questioning it. The result feels objective, even scientific—but beneath the surface, old patterns repeat themselves.
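A toy sketch makes the mechanism concrete (the data here is invented): a model that simply recommends whatever was most common for a group in the past will faithfully reproduce whatever imbalance that past contained.

```python
from collections import Counter

# Invented historical placements by school funding level.
# The imbalance reflects resources, not student ability.
history = [
    ("well-funded", "advanced"), ("well-funded", "advanced"),
    ("well-funded", "standard"),
    ("under-resourced", "standard"), ("under-resourced", "standard"),
    ("under-resourced", "advanced"),
]

def recommend(school_type: str) -> str:
    """Recommend the most common past outcome for this group."""
    outcomes = Counter(track for school, track in history
                       if school == school_type)
    return outcomes.most_common(1)[0][0]

# The "objective" model simply echoes the historical pattern:
assert recommend("well-funded") == "advanced"
assert recommend("under-resourced") == "standard"
```

No line of this code mentions prejudice, yet the output encodes it; that is precisely how reflection of the past passes for neutrality.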

That’s the danger: injustice dressed up as impartiality.

5. Data Mismanagement

AI feeds on data—and lots of it. But how that data is gathered, stored, and used matters deeply.

When personal information is collected without consent, or repurposed in ways people didn’t agree to, trust begins to unravel. If sensitive data is stored insecurely, the consequences can be both personal and systemic. And if no one’s keeping track of how it’s being used, even well-intentioned projects can veer into misuse.

The result isn’t just privacy loss. It’s the erosion of confidence in the technology itself—and in the institutions that wield it.

None of these failures are inevitable. They aren’t signs that AI is malevolent. They’re signs that we, as a society, haven’t always handled it with enough care.

Good craftsmanship takes patience, testing, correction, and feedback. So does good AI. And when we get it wrong, we owe it to ourselves—and each other—to name the harm and put it right.

Abuses of AI

Misuse often begins with good intentions and poor judgement. But abuse—that’s something else. Abuse happens when a tool is turned, quite deliberately, toward harm. It happens when people exploit what AI can do not by accident, but by design.

And while AI doesn’t choose to be used this way, its scale, speed, and opacity can make it an ideal instrument for control, manipulation, and exploitation—especially when paired with existing systems of inequality.

1. Surveillance and Repression

There was a time when watching a population meant recruiting informants and peering through binoculars. Now, it takes an algorithm.

Authoritarian governments—and increasingly, some democracies—are using AI to monitor citizens in ways that were once the stuff of dystopian fiction. Facial recognition, gait analysis, and behavioural prediction tools are being deployed to track movements, identify protestors, or flag “unusual” activity. The effect is chilling: a society where silence feels safer than speech.

The danger here isn’t just the technology—it’s the invisibility of it. Surveillance becomes ambient, automatic. And when paired with vague laws and weak oversight, it can turn tools meant to protect into tools of repression.

2. Political Manipulation

AI is also reshaping how we engage with truth—and with one another.

On social media, algorithms curate what we see. Bots simulate human engagement. Targeted ads whisper different messages to different voters. Deepfakes blend real and false until the line blurs. The goal isn’t just persuasion—it’s confusion. Division. Exhaustion.

This isn’t about AI having an agenda. It’s about people using AI to rig the rules of public discourse—to amplify outrage, suppress dissent, and make it harder to find common ground.

3. Technological Colonialism

In many parts of the world, AI arrives as an import—designed elsewhere, trained on unfamiliar data, dropped into unfamiliar contexts. These systems often fail to account for cultural difference, local needs, or social consequences.

They mislabel faces. Misinterpret language. Misrepresent lives.

Worse still, some deployments extract value—whether data, labour, or dependency—without meaningful consent or benefit for those affected. It’s a new form of colonialism, digital this time, but built on old assumptions: that what works in one place should govern another. That efficiency justifies exclusion.

4. Corporate Exploitation

Some companies don’t abuse AI out of malice—but out of habit, or pressure, or profit.

Recommendation algorithms are tweaked to keep us scrolling, even if the content makes us anxious or angry. Scheduling systems are used to manage gig workers with near-total opacity, rewarding availability while erasing stability. Data collection becomes so granular, so relentless, that privacy fades into something abstract—something we barely notice we’ve lost.

These practices aren’t accidental. They emerge from systems that reward short-term gain over long-term trust. And unless checked, they quietly shift the balance of power from users to platforms—from people to code.

5. The Normalisation of Harm

Perhaps the most insidious abuse of all is the quiet kind—the one that becomes routine.

When flawed AI systems are adopted into hiring, policing, or public services, they don’t need to be dramatic to do damage. They just need to operate unchallenged. Over time, their outputs become the new normal: who gets an interview, who gets a loan, who gets watched.

People may not even realise they’ve been filtered out—let alone how or why. And because the system seems objective, their complaints may go unheard.

This isn’t about villainy. It’s about invisibility. It’s about harm hidden beneath metrics, in processes we no longer fully understand—or question.

Abuse of AI doesn’t start with evil code. It starts with power, and with choices. It happens when we build without reflection, deploy without accountability, or profit without conscience.

And it reminds us that if we want AI to serve humanity, we have to ask—which humanity, whose values, and to what end?