I wrote this the old-fashioned way. Partly to conjure some new neural pathways into being. Partly so I can get my lines right with my kids.
Anthropic researchers just published a study showing that developers who use AI assistance score 17% lower on understanding the code they just wrote. They finished the task slightly faster. They apparently learned almost nothing.
Researchers watched how people worked. Some handed everything to the AI and breezed through. Others asked the AI questions, then wrote the code themselves. The first group was quickest, but only the second group understood what they’d built.
The Anthropic research is not the first to make this point. A study in Nature last year concluded: “Human-generative AI collaboration enhances task performance but undermines human’s intrinsic motivation”.
This is not a small thing.
A Need for Speed?
Sometimes speed is the game. A trader has milliseconds. A pilot in an emergency has seconds. An A&E doctor triaging patients needs answers now. In these contexts, AI-supported decision making is already proving invaluable. Faster is better.
But many (most) other forms of cognitive work aren’t like that.
Arguably most thought work requiring judgement benefits from sitting with a problem for a time. Turning it over. Feeling the friction of competing considerations.
Often what good work really needs isn’t speed, but the slower work of judgment forming. Judgment doesn’t form when you skip to the answer.
The Formation Problem
Neural pathways are formed by reps and struggle. You don’t learn to write and articulate thought only by reading. You learn by writing badly, repeatedly, until you write less badly. You don’t learn to diagnose problems by watching someone else diagnose them. You learn by being wrong, in painful, memorable ways.
We don’t always intuitively feel good about struggle - it feels like hard work. We’d prefer to skip the part where we’re bad at things. Generative AI offers that skip. And the skip feels great, right up until you need some skill you never built.
The pilots who hand-fly the least are the worst at hand-flying when the automation fails. The doctors most reliant on diagnostic algorithms are slowest to catch what the algorithm misses. The engineers who outsource debugging to AI can’t debug when the AI is confused. We’ve known this pattern for decades. Lisanne Bainbridge called it the “ironies of automation” back in 1983: the more advanced the system, the more crucial the human contribution when things go sideways, and the less capable the human is of making it.
But there’s something else here that bothers me more than the competence question.
A person who has wrestled with hard problems is more likely to have interesting opinions about them. They’ve developed instincts, heuristics, preferences. They can surprise you with an angle you hadn’t considered. They’re engaging to talk to about the thing they’ve struggled with.
A person who’s always had the answer handed to them …well, isn’t. They can tell you what the AI told them. That’s it.
We’re not just risking a less capable workforce. We’re risking less capable and interesting people. Less useful collaborators. Less able to hold a surprising opinion because they’ve never had to form one.
Why Bother?
I paint. Not very well. Oils, mostly. I like the smell of turps. I also make music, badly, for my own amusement.
I have noticed that since Suno came out I lost some motivation to make music - in fact I all but stopped doing it. Not because I started using Suno. I have no interest in Suno as a creative tool, that’s not the point. The point is Suno devalued the musical creative process to zero, and that had the effect of making the effort of music creation feel pointless. It isn’t pointless, and I need to talk myself back into it again (just not with Suno).
Someone is probably working on Suno for oil painting. Please stop if you are, and go use your (no-doubt prodigious) talents to build something else. But actually, here it is clearer for me. I don’t paint to have paintings. I paint because the process is engrossing. The frustration when the colour isn’t right. The moment where you do something unplanned and it looks much better than you intended. The way hyper-focus kicks in and time disappears. I used to get the same feeling from making crappy electronic music, but fortunately the tactile real-ness-in-physical-space of an oil painting is more resilient (for now) to the cheapening process of gen AI. Long may that last.
The point is: none of this good struggle happens when you type a prompt. The output might look similar. The experiences are nothing alike.
Humans have made art since we had hands and walls. Cave paintings from 36,000 years ago. Not because our ancestors needed interior decorating. Because there’s something in us that wants to make things. Wants to struggle with materials. Wants to leave a mark that says I was here, and I made this, it was hard.
The difficulty is the point.
Not difficulty for its own sake. Difficulty as the texture of engagement. The thing that makes the breakthrough feel like a breakthrough. The resistance that gives your effort meaning.
What happens when we commoditise the output?
When the thing you can make in 30 seconds with a prompt is indistinguishable from the thing I sweated over for 30 hours, something negative happens to the value of both. Not the market value. The felt value. The meaning.
I’m not worried about AI putting creative people out of work. I’m worried about AI putting creative people out of the experience of being creative. Making the struggle feel pointless.
There is a Choice
None of this is an argument against AI. I use it constantly. I’m running an AI sidekick right now that helps me think, I have an agent doing research, another writing code, and an assistant managing aspects of my work. They make me better at my job in ways I wouldn’t want to give up.
But we can choose to be deliberate about which parts of our work we hand over.
The stuff that’s mechanical, repetitive, or where speed genuinely matters? Take it. The stuff where the struggle is the learning? Use it more carefully. Use it in every way that helps you learn, just don’t use it to give you the answer.
This might sound like an individual choice, a matter of personal discipline. But it’s not just that. It’s a design choice. A leadership choice. A parenting choice.
If you run a company, you’re making decisions every day about which cognitive work your people do themselves and which they outsource. Get this wrong and you’ll have a team that executes fast and thinks slow. A team that can ship but can’t adapt. That looks productive right up until the world changes and nobody knows what to do.
If you’re raising kids, you’re deciding what struggles to protect. Your child can use ChatGPT to write their essay. The essay might be good. But if they never learn to structure an argument, to sit with the discomfort of not knowing what to say, to push through the part where writing is hard… what have you taught them?
That the point of writing is to have written?
A Struggle Manifesto
I don’t think there’s one right answer. Different people, different teams, different cultures will draw the line in different places. But I think we need to draw it consciously, rather than letting the logic of efficiency draw it for us.
For me it is something like this:
Offer to AI: Research, summarisation, first drafts I’m going to heavily rewrite, scheduling, formatting, the mechanical parts of coding, anything where I already know what I think and just need it executed.
Take from AI: challenge, thought partnership, different perspectives.
Keep for myself: Arriving at the answer. Critical thinking. The creative work I do for joy. The writing where I’m trying to figure out what I believe. The reps that build the skills I want to have.
Protect for my kids: The experience of not knowing something and having to figure it out. The satisfaction of making something through hard work. The understanding that difficulty is often the texture of meaning, not an obstacle to it.