How dare you (not) use AI
This post is part of a series for #WeblogPoMo2025. Read the introduction here.
The single biggest reason that I'm writing this series of blogs is that the reaction to AI has been so remarkably polarised.
Every single day I'm sent marketing emails from professional services companies who tout that their SaaS platform or market research tool or text analysis tool has an AI feature, or is AI-centric, or somehow IS AI. A good third of those companies have rebranded to have a terrible AI pun in their name.
It's a gold rush.
As a scrappy forty-something I remember the excitement of home PCs, then the web, then smartphones, then apps. Each unlocked something new, enabled something different, was incredibly cool for a moment in time.
I'm not feeling the excitement, at least not yet. For me the overwhelming feeling is concern.
New technologies always create contrarians: the person who will only ever get their news from a newspaper, or refuses to move away from their BlackBerry, or will never purchase something on their mobile phone or use a contactless credit card.
People don't like change. Change is more acutely felt the older you become. Hence Douglas Adams' oft-quoted set of rules in The Salmon Of Doubt:
> Anything that is in the world when you’re born is normal and ordinary and is just a natural part of the way the world works.
>
> Anything that's invented between when you’re fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.
>
> Anything invented after you're thirty-five is against the natural order of things.
I am over thirty-five. I suspect the people I read and admire online predominantly are as well. AI doesn't feel incredibly cool; it feels somehow dangerous.
There are so many ethical and practical concerns around AI:
- Doesn't it use loads of power so we're needlessly burning fuel to get to an answer that's far cheaper to google?
- Isn't it trained on stolen data?
- Worse, isn't it effectively just mimicking the stolen data?
- Doesn't that mean those authors and artists are losing their IP and their skillset while simultaneously losing opportunities to create paid work?
- Aren't some of the big AI models and services owned, funded, or run by some absolute monsters?
- Doesn't AI lie?
- Isn't AI constantly and confidently wrong?
- Doesn't AI present a false impression of being intelligent when really it's just guessing how to make complete sentences?
- Isn't there something here incredibly dangerous to how humans think and feel and interact and understand what intelligence is?
My current - uninformed - answer to every one of those questions is:
Yes.
I've seen posts and comments expressing these ideas for a few years now and am pretty entrenched in the belief that generally speaking AI = bad.
But I have two core beliefs:
- Nothing is ever a binary right-or-wrong
- Always question your opinions
Since about 2003 I have maintained a boycott of Nestle[^1]. Though I've never knowingly purchased or consumed a Nestle product since that time, I remain open to the idea that one day that boycott will end[^2].
That boycott is not my identity and does not define me.
I try to be thoughtful in my approach with anything: the clothes I wear, the banks I use, the places I live, the screens or books or devices that I stare at for hours every day. I consider, and I make a balanced choice.
The internet all but destroyed the record shops I love to visit where I chat to the staff and find music I'd never have discovered otherwise. But I still use the internet.
Amazon aggressively undercut other retailers, forcing them out of business and becoming a de facto monopoly. I do not use Amazon.
I've made these choices. I consider them often. My opinions change, and I hope they change for the best.
Over the weekend Michael Burkhardt published an article about finding a 'middle way' with generative AI:
> Like many people, I’m still trying to work out a consistent, rational position on Gen AI. This post is not that by any stretch, just some thoughts.
Michael then goes on to think about a particular topic: AI as theft:
> I’m not sure about the term “theft” here. It may be true sometimes but it fails as a blanket characterization of Gen AI.
After outlining some hypothetical situations, he concludes:
> There seems to be a continuum ranging from “obviously acceptable” to “obviously unacceptable” in the scenarios I contrived above. But there also seems to be a great big, fuzzy gray area in the middle. And **finding our way through that vast middle is where the struggle will be**.

(Emphasis mine)
I unequivocally don't want to take a middle-of-the-road, "hey guys, can't we all just get along" position on AI. Neither does Michael. I like his pragmatism, his ability to say we need to be moving forward.
I don't yet know whether we should kill our darlings, and I don't yet know when (or if) generative AI is good. But I'm putting in the work and remaining open-minded.
Positives and negatives.
[^1]: This Wikipedia article lists many of the reasons why.
[^2]: I'm optimistic but at this point not hopeful. I really miss Kit Kat Chunkies.