softwarecoachnick.

Why AI Is Not Delivering (Yet)

Work is done by humans, not tools.

The AI conversation in software engineering is only becoming more polarised. Depending on who you listen to, AI tools are either:

  • the death of craft and the end of human thought; or
  • a magical infinite productivity machine that will displace everyone.

Neither story really matches what we’re objectively seeing. AI coding tools are not eroding the craft of software engineering, although they are changing the face of it. But they’re not magic, and they are certainly not genuinely improving productivity outcomes in the aggregate - yet, at least.

The biggest challenges are not technical, and they’re not just engineers refusing to adapt. The limits are organisational, and these new tools make those limits even more painful.

Worse, the way people are talking about these tools - especially the fear-based “you’re being left behind” narrative - is making the transition harder than it needs to be.

Writing Code Was Never the Job

Many engineers get into this field because they enjoy writing code. That makes sense. It’s satisfying work to see your elegant solution finally come together.

But writing code was never really the job. The job is, and always has been, getting computers to do useful things for people. Code is just the interface we use to tell computers what to do.

I like to think of engineering as building a bridge. You can enjoy pouring the concrete and welding the beams. Many engineers do. You can carve beautiful art right into the pillars of that bridge and take great pleasure in the fruit of your labour.

But the engineering comes down to whether that bridge carries people from one place to another - the details of its construction only matter insofar as it remains able to do that task for as long as people need to cross that river.

The value of our job as software engineers is, and always has been:

  • Users getting value
  • Products working
  • Problems solved

AI coding tools don’t change that. And while this transition is stressful, happening fast, and mired in fear-based rhetoric, you can rest assured that your craft is not dead, and human thought has not ended.

We’re still building bridges - all that’s changed is how much concrete we have to pour by hand.

The English Compiler

Right now AI tools are often presented as simply a new layer of abstraction, like the assembler and the compiler before them - no more hand-written machine code! In our case it’s an “English compiler”, one that turns natural language into Java or Rust.

However, this analogy doesn’t hold - when I compile Java, I don’t review the bytecode that’s produced. When I write C or Rust, I don’t read the resultant binary. I trust the compiler.

With AI, we’re not there yet. The “English compiler” cannot be trusted. And you can imagine that if we had to review every line of assembler produced by gcc, the productivity gains we could expect from this magical new productivity machine would be somewhere between negligible and negative.

The models can produce large amounts of code, but humans still have to review it carefully because sometimes it’s just plain wrong.

So we end up in an awkward situation:

  • the machine writes more code
  • but humans still have to verify all of it

While this remains true, we will never see the 100-1000x productivity gains promised by Team Hype. Claude and his mates will remain in the realm of very fancy autocomplete, rather than a full step change.

Still useful! Just not quite the revolution LinkedIn promised you.

Coding Was Never the Bottleneck

Even if AI produced perfect code instantly, most large organisations still wouldn’t move dramatically faster, because coding is often not the bottleneck.

Large software systems are held together by layers of constraints:

  • architecture decisions
  • integration dependencies
  • security and privacy requirements
  • accessibility standards
  • compliance frameworks
  • deployment processes
  • operational risk

These layers exist because large systems are complicated and expensive to break - frustrating as they might be to ambitious “business people”, they exist for a reason, and they’re not negotiable at a certain scale. They often cause problems long before the physical act of writing code ever does.

The really hard parts of software engineering in large organisations are everything around the code, like:

  • understanding the problem
  • clarifying requirements
  • designing systems
  • verifying behaviour
  • integrating safely
  • operating reliably

AI can make local progress very fast, but large organisations need to care about global progress - and in this light, AI is simply not performing. Yet.

By the same line of reasoning, I don’t think large organisations need to worry about genuinely being outcompeted by smaller, more agile units. The possibility that small competitors might take a bite out of larger businesses is very real - it already was - but the simple fact is that any organisation operating at scale becomes burdened by these same constraints. It comes with the territory. So I don’t think we’ll see any FAANGs falling over in a hurry - at least not thanks to AI.

The “10x Productivity” Org Problem

Imagine for a moment we are seeing this 10x productivity boost for engineers. You will effectively have turned your 100-person engineering organisation into the equivalent of 1,000 engineers - or worse, from 1,000 into 10,000 - without hiring anyone.

And all the coordination problems grow with it.

More code means:

  • more architecture decisions
  • more PR reviews
  • more integration risk
  • more operational complexity
  • more governance overhead

The systems that worked for a 1,000-person organisation will not work for a 10,000-person one.

So even if AI were able to deliver the productivity it promises, it doesn’t just make people faster - it fundamentally changes the scale your organisation is operating at. Anybody who’s been through “hypergrowth” in VC-funded orgs can imagine how horrifying this thought really is - to 10x your org overnight? Crazy.
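One way to make the coordination point concrete is the classic pairwise-channels model, in which potential communication paths grow quadratically with headcount. This is a back-of-the-envelope illustration, not a claim about any specific organisation:

```python
def channels(n: int) -> int:
    """Potential pairwise communication channels among n people:
    n * (n - 1) / 2 (the classic Brooks-style coordination model)."""
    return n * (n - 1) // 2

# 10x the headcount means roughly 100x the coordination surface:
print(channels(1_000))   # 499,500 potential channels
print(channels(10_000))  # 49,995,000 - about 100x more for 10x the people
```

The exact model matters less than the shape of the curve: multiply effective headcount by ten and the coordination surface grows by roughly a hundred.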

I don’t see many people talking about this, and so I suspect many companies aren’t really thinking about these consequences. Yet.

What they’re probably thinking is I could cut 90% of my engineering staff and get the same result - but this would be a monumental mistake. Firstly because humans don’t math, and secondly because of how concentrated the knowledge and context required to operate your organisation would become - remember, in this 10x world, losing one key staff member is equivalent to losing ten.

Imagine losing your 10 most senior, most tenured, influential and impactful staff overnight! Crazy.

Why Some People Look Like AI Wizards

There’s a key misleading signal in the AI conversation, and it’s those who appear to benefit the most.

If you pay attention, you’ll notice the most impressive examples almost always come from very senior engineers, and that’s not surprising.

Senior engineers have advantages that AI amplifies:

  • deep system knowledge
  • strong business context
  • authority to make architectural decisions
  • freedom to define their own work

They can point the tool at meaningful problems and integrate the results effectively.

Many engineers, especially earlier in their careers, operate inside much tighter constraints:

  • existing architecture
  • narrowly scoped tickets
  • heavy review processes
  • dependencies on multiple teams

So when leadership compares everyone to the output of highly autonomous senior engineers using AI tools, it sets unrealistic expectations and creates a lot of unnecessary anxiety.

A year ago it would have been insane to compare the output of an engineer with 20+ years’ experience to one with just one - or even ten. Guess what… it’s still insane. But people are doing it. Important people are doing it.

The reality is there’s a lot of money flowing into AI right now, often from the same groups that are invested in software companies. When those companies adopt AI tools, it reinforces the value of those broader investments. Viewed through that lens, the strong push we’re seeing across the industry makes more sense - there are real incentives driving it, they’re just not ours. That can sometimes lead to messaging or decisions from leadership that feel confusing or overly aggressive from the outside, even if they’re internally consistent.

At the same time, I think many engineers are quick to reach for the word “bubble” as a way to make sense of how fast all of this is moving. That reaction is understandable - this is a significant shift, and uncertainty is uncomfortable. But it’s worth being careful not to use that framing as a way to dismiss the tools entirely. There is something real here, even if we’re still figuring out exactly what that looks like in practice.

What It Looks Like When It Works

I’ve proven to myself that these tools can work really well. I’ve been using a spec-driven approach with AI that’s actually delivering the “10x” people like to talk about.

The workflow looks roughly like this:

  • start with a clear problem and intended outcome
  • refine the spec through structured back-and-forth (Planning Mode + Interview tool in Claude Code)
  • let the tool surface blind spots and missing context
  • once the spec is solid, run a build loop (simple Ralph loop)
  • review the output, fix what matters, generate follow-up tasks
  • run the loop again

It’s not magic, it’s just good engineering that I always would have done - I just don’t write the code any more.

But this is not work done at work - this is work done at home, where I have none of the aforementioned constraints. No boss, no stakeholders, no design or brand guidelines, no security or privacy reviews, just me and the endless stream of noise spewing from my prefrontal cortex.

When it works well, it’s extremely effective:

  • time to delivery drops significantly
  • iteration cycles are faster
  • a lot of the mechanical work disappears
  • and because parts of the loop can run unattended, it fits around real life in a way traditional coding often doesn’t

This is the part that is genuinely exciting, and showing great promise. But it’s not happening at work. Yet.

When It Doesn’t Work (And Why)

I recently tried using an AI agent to help migrate a chunk of legacy code to a new system. On paper, this is exactly the kind of task these tools should be great at.

And technically, it worked! I had a plan, I started Claude working on it, and the output looked good - but I never shipped it.

I very quickly realised all I was doing was creating a tonne of work for everyone else:

  • large PRs for a team that’s already at capacity
  • key architectural decisions forced forward
  • integration risks that needed careful coordination
  • review overhead that outweighed the implementation savings

So instead of helping, it would actually have slowed the system down.

AI can make local progress very fast - but in a large organisation, local progress only matters if it fits into the global system of work.

Otherwise you’re just generating high-quality work that nobody has the capacity to absorb. Our backlog is still there, unshipped - it’s just higher fidelity now.

The Real Risk: A Culture of Fear

The part of this conversation that worries me most isn’t the technology, it’s the messaging.

A lot of communication about AI tools sounds like this:

“Engineers who don’t adopt these tools will be left behind.”

The intention might be encouragement. It might be a legitimately well-intentioned warning based on signals people are seeing in the industry. But what many engineers hear is:

“Your career is at risk.”

Senior engineers tend to shrug that off, roll their eyes, and go “call me when it makes sense”. Junior engineers often don’t: they internalise that fear and that threat - and fear is one of the worst possible environments for learning.

If companies genuinely want engineers to adopt AI tools, the message should be much simpler:

  • here’s what these tools are good at
  • here’s where they struggle
  • here are (relevant!) examples of how people are using them
  • here are opportunities to experiment and learn

Curiosity beats fear every time. But it matters greatly how relevant the demonstrations are - watching a principal engineer connect their cerebellum directly to an agent swarm hive mind is just far too alien, unrealistic, and not applicable to “rank and file” engineers.

What I’ve seen work is when team mates show each other cool things they’ve learned, that are working in their codebases, in their working environment. That’s what spreads adoption and sees genuine productivity changes - not holding up the bleeding edge and shouting “you are being left behind!”

In a culture shaped by fear, the people with the most options are often the first to leave, while others stay out of necessity. Over time, that creates a real organisational risk: you lose experienced, high-agency engineers and retain those who have - or at least feel they have - fewer alternatives, which gradually weakens the organisation. That eventually taints its reputation, making it hard to attract talent back. This kind of change is difficult, if not impossible, to reverse, and will kill your organisation’s ability to innovate and compete in the long run.

Remember: humans do work - not tools.

Why I’m Optimistic

Despite all of this, I’m very optimistic about AI coding tools - and not because they will make us productivity gods, but because they push engineering toward the work that always mattered most:

  • understanding problems
  • designing systems
  • verifying behaviour
  • delivering outcomes for users

We’re still building bridges - the AI just helps pour the concrete faster.

Like most meaningful changes in engineering, though, figuring out how to use these tools well won’t come from hype cycles or executive mandates - it will come from engineers, like you.

So experiment, learn, share, and discover what actually works. These toys are confusing and frustrating but they’re fun to play with.

And ultimately isn’t it exactly that kind of tinkering that got most of us started coding, anyway?
