The Bottleneck Is Not the Tool

A developer spends an hour tracing through logs and reading error messages, tracking down a bug the way they always have. The AI tools are open on their screen. It does not occur to them to paste the error message in and ask. Not because they distrust the tool. Not because they think it would not work. Because this is how debugging works.

I keep seeing this. I have done this myself. Not resistance. Not refusal. Just habit. The familiar path is not chosen. It is followed by default.

There is an important conversation happening right now about how fast AI capability is advancing. The benchmarks are tumbling; the task horizons are doubling every few months. The capability curve is well documented. But it is not the story I find most urgent.

The story I find most urgent is what happens at human speed - in the team, in the culture, in the daily work. The tools are ready. The people are still catching up. And the rate at which they catch up has very little to do with the tools themselves.

Three speeds

I see three things changing at three different rates.

The tools are the fastest. AI capability is compounding on a remarkably consistent curve. METR’s task horizon data shows AI doubling the length of work it can handle autonomously roughly every few months - from seconds in 2019 to hours in early 2026. The tools improve whether we are ready or not.

People are the middle speed. Developers can learn and adapt, but it takes time, practice, and the right conditions. This is not a technology problem. It is a learning problem.

Organizations are the slowest. Structures, processes, incentive systems, culture - these are the heaviest things to move. They were designed for a different pace and they do not update themselves.

The gaps between these three speeds are growing.

What I see in teams

People adopt AI tools in ways that look productive on the surface but avoid real change. The tools are present. The workflows look updated. But when you look closely, developers are working around the new capabilities rather than through them. The output looks AI-assisted. The thinking behind it has not changed.

When AI-assisted work goes wrong - a bug ships, a generated approach turns out to be fragile - people distance themselves from the failure rather than treating it as information. “The agent did that” becomes a way to avoid examining what went wrong in the collaboration, not just in the output.

And the debugging scene I opened with is the most common version: people do not experiment enough. Not because they are lazy or uncurious, but because the familiar path is what the culture has always rewarded. Ship what you know works. Minimize risk. Do what was encouraged before.

The problem is that “before” was three months ago, and the landscape has shifted.

Why urgency does not help

There is a temptation to respond to the pace of AI advancement with alarm. The capability curve is steep. The acceleration is real. The natural response is to say: move faster. Adapt now. You are falling behind.

I think this response makes the problem worse.

When people feel the ground disappearing beneath them, they do not experiment more. They retreat harder into the familiar. Alarm about capability is exactly the kind of signal that destroys the psychological safety people need to actually learn. It makes them perform adoption rather than practice it.

And the framing of humans versus AI misses something fundamental. AI has to be told what to do: what problems to solve, what constraints matter, what “good” looks like. As execution gets cheaper, the thinking about what to build and why becomes more important, not less. There is always a human in the game - not competing with AI, but directing it.

This means adaptation is not about keeping up with AI capability. It is about getting better at the parts that remain essentially human: problem framing, judgment, deciding what matters. The teams I see adapting fastest are the ones where it is safe to be slow, to try things, to not yet know.

The environment sets the pace

I do not think slow adaptation is an individual failure. People respond to the incentives and signals around them. If the culture rewards predictability and penalizes mistakes, developers will not experiment - no matter how much faster the tools would let them move.

Psychological safety has been discussed for years. But the shape of it has changed.

The old framing was about being safe to raise concerns, push back on decisions, or admit mistakes. The new pressure is different.

It is about being safe to not yet know how to do your job the way you did it last month. That is a more personal kind of vulnerability. Your core skill - the thing you were hired for, the thing you built your professional identity around - is being reshaped in real time. Acknowledging that you are still learning how to work in this new way requires a kind of safety that most organizations have not explicitly built.

The organizational structures reinforce the problem. Review processes, planning cadences, decision-making channels - these were designed for a different pace and have not caught up.

The people responsible for updating these structures - managers, directors, organizational leaders - are working in a domain that has not accelerated the same way. Leaders use AI for communication and analysis. But the core of leadership work - building trust, reading team dynamics, redesigning how teams operate - remains essentially human-speed work.

What I notice but cannot yet explain

A few observations that feel connected, though I have not found the thread:

Teams where leadership actively uses AI tools - even imperfectly - seem to adapt faster than teams where leadership delegates AI adoption to the engineering team. Is this because leaders who experience the tools firsthand understand the pace of change viscerally? Or is it selection bias - leaders willing to experiment tend to create more adaptive cultures regardless?

The developers who adapt fastest are not always the most technically skilled. Often they are the ones most comfortable with ambiguity and change. If the bottleneck is disposition rather than technical learning, can the right environment cultivate that? Or does it mostly self-select?

Where this leads

The capability conversation matters. But capability without adaptation is just potential. A country full of geniuses is only useful if the people around them can figure out how to work with them. And someone still has to decide what problems the geniuses should work on.

If adaptation is shaped by environment more than by individual effort, the lever is culture, incentives, and structures - and the people who shape those are leaders. But leaders face the same mismatch: they are being asked to redesign processes and create new cultural norms at a pace their own domain has not accelerated to match. This is not a failure of leadership. It is a structural tension that does not have an obvious resolution yet.

What I am fairly sure of: the teams that are adapting fastest are not the ones with the best tools or the most alarmed leadership. They are the ones where the environment makes it safe and rewarding to learn in public.

What I am less sure of: how to build that environment deliberately, rather than stumbling into it.


— Gerriet Backer, ellamind