The Pragmatic CTO
The Pragmatic CTO Podcast
Audio: The Tommyknocker Developer

AI tools have exploded developer productivity, but they're also driving a dangerous gap between capability and understanding. We’re shipping more code than ever—code we often don’t fully grasp—and that disconnect is a ticking time bomb for maintenance, security, and long-term success.

Stephen King's The Tommyknockers imagined a town transformed by alien tech that made people wildly inventive without any true understanding of their inventions. They built gadgets that worked, but they didn't know why. That's software development in 2025. Nearly 60% of developers now ship AI-generated code they don't understand, and 42% of AI-produced code fails silently—passing tests but producing wrong results. We're the canaries in the coal mine, but this isn't limited to engineering. Lawyers submit briefs with hallucinated citations they can't verify, consultants deliver AI-generated decks their analysts can't defend, and doctors lose diagnostic skills after relying on AI. The tools work, but the understanding doesn't follow.

The capability explosion is real and undeniable. GitHub Copilot generates nearly half the code its users write. Studies show a 4x increase in coding velocity, and entire startups are building codebases that are 95% AI-generated. AI accelerates prototyping, boilerplate, and exploration—what I call Edison mode development: generate, test, iterate at machine speed. At LiORA, AI tools are part of the workflow. They speed up real work. But capability and understanding are not the same. You can build a house without understanding load-bearing walls, but that house won't stand when you need to modify it. Before AI, writing code was how you developed understanding. Now, production can happen without comprehension. That decoupling is the core challenge.

The numbers tell a troubling story. Over half of developers use AI code they don't understand, and many don't review it at all because review would slow them down. Architectural flaws and security vulnerabilities are skyrocketing. Addy Osmani calls this the 70% problem: AI tools get you about 70% of the way fast, but the last 30%—the hard edge cases, design, security, maintainability—requires real understanding. Junior devs often skip this and get stuck; seniors use AI to accelerate what they already know. And this extends beyond code. Lawyers have faced sanctions for AI-generated fake citations; endoscopists' skills degraded rapidly when AI assistance was removed; consultants deliver decks they can't defend. This isn't just a software issue; it's a knowledge-worker problem, and software is the canary showing us the consequences.

Peter Naur foresaw this in 1985 when he called programming "theory building." The code is just a lossy artifact of the mental model held by its creators. When people who lack this theory make changes, the system decays until it’s unmaintainable. AI-assisted development is creating that scenario at scale: the AI has no memory or understanding of what it produces. Forty years ago, Naur predicted this failure mode. We’re living it now.

There are two ways to develop: Edison’s trial-and-error empirical approach and Einstein’s theoretical, principled reasoning. AI supercharges Edison mode—generate, test, regenerate, speed through volume. That’s great for prototyping but insufficient when you need to understand why something works, to maintain, debug, or explain design decisions. That’s Einstein mode, and AI tools don’t help there. The trap is relying entirely on Edison mode. You build fast but create systems nobody understands. Simon Willison calls this "vibe coding"—building without review or comprehension. The gap between prototype and production is where teams get stuck. Edison mode gets you 70%; Einstein mode is the rest, and if you never built that understanding, you can’t move forward. You end up with unmaintainable systems.

Stephen King's Haven goes through three phases: euphoria, dependency, and finally consumption, where the technology destroys the people who depended on it. We're in phase two—dependency—now. Skills atrophy as teams rely on AI. This has parallels in other domains: GPS users lose spatial memory, pilots lose manual flying skills, and knowledge workers cede problem-solving to AI. The difference is that AI-generated failures are silent. Fake legal briefs look real. AI code passes tests but produces wrong results. You don't know the understanding is gone until you need it most. That silence is the danger.

At LiORA, we’re conservative with AI. We use it for code reviews, research, and content, but not code generation. The output is too sloppy for our context. For personal projects, I experiment with aggressive agentic coding but only with strict guardrails: detailed context files and tests outside AI’s reach. The agent must pass these independent checks to prove understanding. My bet is that these guardrails are the Einstein layer—enforcing understanding even if not every developer holds the full theory. We’ll see if that holds.
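To make the guardrail idea concrete, here is a minimal sketch of what "tests outside AI's reach" can look like. Everything here is illustrative: `agent_written_slugify` stands in for whatever code the agent produced in its workspace, and the check cases live in a file the agent is never shown and cannot edit. The agent's output only ships if it passes checks it never saw.

```python
# Hypothetical guardrail sketch: the agent writes code in its workspace;
# these checks live outside that workspace and act as an independent gate.

def agent_written_slugify(title: str) -> str:
    # Stand-in for a function the agent generated from a spec like
    # "lowercase the title and join words with hyphens".
    return "-".join(title.lower().split())

# Independent check cases, kept where the agent cannot read or modify them.
INDEPENDENT_CHECKS = [
    ("Hello World", "hello-world"),
    ("  spaced   out  ", "spaced-out"),
    ("Already-slugged", "already-slugged"),
]

def gate(fn) -> bool:
    """Return True only if the candidate passes every independent check."""
    return all(fn(given) == expected for given, expected in INDEPENDENT_CHECKS)

if __name__ == "__main__":
    # Ship only if the gate passes; otherwise send the agent back to work.
    print("PASS" if gate(agent_written_slugify) else "FAIL")
```

The design point is the separation, not the checks themselves: because the agent can't see the gate, it can't pattern-match its way through, so passing is weak but real evidence that the code does what the spec says rather than what the tests say.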

If you’re a CTO, ask yourself: How many engineers can explain why the code they shipped works the way it does, not just what it does? If AI tools stopped tomorrow, could your team maintain their systems? When was the last time a dev traced a production issue without AI? Are you hiring for velocity or understanding? AI can give you velocity. Understanding is the human part.

This trajectory isn’t unique to engineering; every knowledge function is breathing the same gas. The Tommyknockers didn’t know what they were building or why it worked—they just knew it was powerful. Haven paid the price for that gap. The question for every CTO is whether your team is building understanding or just building.

You can read the full article—with all the data and sources—on ThePragmaticCTO Substack.

