Death of a Craftsman: What We Lose When AI Writes the Code
Part I
A few weeks ago, I stumbled across a comment in a discussion thread about agentic coding. The article itself was forgettable. The comment was not:
"I've spent over a decade learning an elegant language that allows me to instruct a computer—and the computer does exactly what I tell it. It's a miracle! I don't want to abandon this language. I don't want to describe things to the computer in English, then stare at a spinner for three minutes while the computer tries to churn out code. I never knew there was an entire subclass of people in my field who don't want to write code."
That last sentence is the one that stuck. Not because it's surprising; because it's overdue. A lot of programmers are going through this realization right now. The thing they spent a decade mastering—the craft itself—was never the point for a large portion of their colleagues.
This is Part I of a two-part series. This one is about what we stand to lose.
The Two Tribes
For years, I assumed everyone who became a programmer did so for roughly the same reasons I did. We were all drawn to the elegance of it—the satisfaction of writing something clean, the puzzle-solving, the craft.
Naive assumption.
Programming has always attracted two distinct types of people. There are those who fell in love with the act of programming—the syntax, the patterns, the satisfaction of a well-structured codebase. And there are those who fell in love with building things—the code was just the fastest path to get there. For the first group, programming is the destination. For the second, it's always been transportation.
Neither camp is wrong; both have shipped software that changed industries. But agentic coding is exposing a fault line that was always there. For the first time, the second group has a real alternative. They can build without speaking our language.
And a lot of us are realizing we were always the minority. We just didn't know it because everyone had to pretend to care about the craft to get anything done.
The Craft That Built the Tools
If agentic coding becomes the norm—and 79% of organizations already report some level of AI agent adoption—what happens to the tools built for craftsmen?
Programming languages are designed artifacts. Real people made deliberate choices about what to optimize for; more often than not, they optimized for the programmer, not the machine:
Python: Guido van Rossum made it explicit: "Programmer time is more valuable than computer time." That wasn't a compromise. It was a value statement.
Rust: Graydon Hoare built memory safety without a garbage collector—but he also cared about ergonomics. Rust's error messages are famously helpful; the borrow checker is strict, but the language explains why your code won't compile.
Ruby: Yukihiro Matsumoto designed it to be "natural, not simple." He wanted a language that felt intuitive to humans, even if it meant more complexity under the hood.
Elixir: José Valim built it on the Erlang VM for reliability and concurrency, but he also wanted a language that was fun to write. Elixir's syntax is designed to be expressive and enjoyable.
These languages embedded their creators' values in every design decision—the syntax, the tooling, the documentation. Every choice reflects a belief about how humans should interact with code.
But what happens when humans stop being the primary readers? The features programmers fought hardest for—the ones that made languages more developer-friendly—become irrelevant. Dynamic typing? AI doesn't care about cognitive load. Syntactic sugar? AI parses abstract syntax trees directly. Readable error messages? AI doesn't get frustrated.
What does matter to AI? Type systems. Not because they make code more readable; because they create faster feedback loops. Static types let an agent iterate until the compiler stops complaining. Strict type systems become verification rails for machines that can't reason about intent.
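The feedback loop is the point, not the notation. A minimal Python sketch of the idea, where a toy verifier stands in for a compiler or type checker and a list of strings stands in for successive model outputs (every name here is illustrative, not any real agent framework):

```python
# Hedged sketch: an agent-style generate-and-check loop. The verifier
# stands in for a compiler/type checker; the candidates stand in for
# successive model outputs. All names are hypothetical.

def verifier(src: str) -> bool:
    """Reject any candidate that doesn't define a working add(a, b)."""
    ns: dict = {}
    try:
        exec(src, ns)                   # "compile" the candidate
        return ns["add"](2, 3) == 5     # machine-checkable contract
    except Exception:
        return False

candidates = [
    "def add(a, b): return a - b",      # plausible-looking, but wrong
    "def add(a, b) return a + b",       # doesn't even parse
    "def add(a, b): return a + b",      # finally passes the check
]

accepted = None
for attempt, src in enumerate(candidates, start=1):
    if verifier(src):                   # iterate until the checker stops complaining
        accepted = src
        break

print(f"accepted on attempt {attempt}")
```

The stricter the verifier, the more wrong candidates it rejects automatically, and the less the loop depends on anyone understanding the output. That is the whole appeal of static types to a machine.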
The irony isn't lost on me. Craftsmen who championed type safety and functional purity were right about rigor; they just didn't anticipate the beneficiary. Rust, Haskell, Scala 3—languages that were "hard for humans"—turn out to be ideal for AI verification. The rigor craftsmen demanded is now machine rigor, not human discipline.
The languages of the future—if humans stop being the primary audience—might look nothing like what we have now. Verbose, explicit, heavily annotated with types and contracts. Ugly by human standards. Because beauty was never for the machine.
Beauty was for us.
The Feedback Loop That Starves Innovation
AI doesn't just use existing languages. It reinforces their dominance and kills alternatives.
The languages with the most training data, such as Python, JavaScript, and Java, are the ones AI generates most fluently. Fluent generation drives more adoption; more adoption produces more training data; more training data makes AI even better at those languages. A self-reinforcing cycle.
Niche, elegant, or experimental languages can't break in. They lack the training corpus. This is a chicken-and-egg problem with no obvious solution; the ecosystem rewards what already exists and starves what doesn't. As Michael Lones has argued, we may be living through the last era of human-first general-purpose languages—the period where languages are still designed primarily for human consumption before AI systems take over code generation entirely.
Think about what that means. Functional programming's long tail—Haskell, OCaml, Elm—depends on communities of craftsmen who value those languages' design philosophies. If AI can't generate them well, adoption stalls. If adoption stalls, the community shrinks. If the community shrinks, the language dies. Not because it was poorly designed; because the feedback loop selected against it.
The entire concept of language innovation is at risk. Nobody designs a new programming language to be read by a machine. They design it to express ideas in ways that existing languages can't. Remove the human reader, and you remove the reason to innovate.
The Rise of Vibe Coding
There's a term floating around: vibe coding. You describe what you want in natural language, let the AI generate the code, and if the output seems to work, you ship it. No deep understanding required. If it vibes, it ships.
When I first heard the term I thought it was satire. It sounds like something a critic would invent to mock our industry's worst impulses. But it's not satire; it's becoming standard practice. The people doing it aren't stupid—they're responding to incentives. They've convinced themselves of something I think is dangerously wrong: that for most software projects, understanding the code is optional.
Vibe coding isn't new. We've always had developers who copy-paste from Stack Overflow without understanding what they're copying. We've always had cargo-cult codebases. The difference now is velocity and volume; agentic tools can generate thousands of lines in minutes. The ratio of code written to code understood is collapsing.
Everyone talks about this like it's working. Like the code being generated is good enough. I'm not convinced. What I see is a lot of demos, a lot of hype, and a lot of people shipping things that haven't been tested by time, by scale, or by hostile users. Research analyzing 567 GitHub pull requests generated using Claude Code found that 83.8% were accepted and merged. Sounds impressive—until you ask: merged into what? Production systems handling millions of users, or side projects where "good enough" has a low bar?
We're measuring success by whether code compiles and passes the tests we thought to write. That's not the same as measuring whether it works.
The Abstraction Debt
Every abstraction in software carries a hidden cost. When you use a framework, you're trading understanding for velocity; when you import a library, you're trusting someone else's implementation. This has always been true. We've always stood on the shoulders of other people's code.
But there's a difference between using an abstraction you could understand if needed and using code you can't understand because you didn't write it and lack the skills to read it.
I call this abstraction debt. It's related to technical debt, but more insidious. Technical debt is code you know is messy—you made a conscious trade-off and can pay it down later. Abstraction debt is code you don't even know is messy, because you never had the ability to evaluate it in the first place.
You can't pay down debt you can't see.
Alexandru Nedelcu has been warning about this. He calls it "comprehension debt" and puts it bluntly: if no one on the team understands the inner workings of a project, you're screwed—and AI agents won't help, because their context windows are limited and vanish between sessions. You won't keep the dialog transcripts, and even if you do, they make for poor documentation that's easily misinterpreted.
This isn't a new insight dressed up in new language. Peter Naur wrote about it in 1985: changes made by people who do not understand the original design concept almost always cause the structure of a program to degrade. Forty years later, we're building systems where nobody understands the original design concept—because there was no designer. There was a model that pattern-matched its way to something that compiles.
When it breaks in production at 3 AM, there's no author to call. There's no intent to reverse-engineer. There's just code that exists because it was statistically plausible, not because someone designed it.
And we're building companies on this foundation.
The Debugging Problem
Debugging is harder than writing code. Always has been. Writing code is creative; debugging is forensic. You're reverse-engineering someone else's logic—even if that someone else was you six months ago—and it requires a mental model of the system: how the pieces fit together, what the original author intended, where the edge cases hide.
Now imagine debugging code you didn't write, don't understand, and that was generated by a process you can't interrogate. The AI can't explain its reasoning because it doesn't have reasoning. It can generate a new solution, sure; but that new solution might introduce new bugs. You're not fixing the system. You're playing whack-a-mole with generated patches, hoping the next iteration happens to work.
Fast Company reported on what senior engineers are calling "development hell"—the cleanup that follows when teams ship AI-generated vibe code into production. The code works until it doesn't; the cleanup lands on the craftsmen who still know how to trace a bug to its root cause.
This is where the "vibe" breaks down. You can vibe your way through feature development. You cannot vibe your way through a race condition. You cannot vibe your way through a security vulnerability. You cannot vibe your way through a memory leak that only manifests under load.
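To make that concrete, here is a minimal Python sketch of the classic lost-update race. A real race depends on scheduler timing, so this sketch forces the unlucky interleaving deterministically with events; the names are illustrative:

```python
import threading

# Hedged sketch: the classic lost-update race, with the unlucky
# interleaving forced via events so the bug reproduces every time.
counter = 0
a_has_read = threading.Event()
b_has_written = threading.Event()

def writer_a():
    global counter
    stale = counter          # A reads 0...
    a_has_read.set()
    b_has_written.wait()     # ...B increments to 1 in the meantime...
    counter = stale + 1      # ...and A clobbers it: B's update is lost

def writer_b():
    global counter
    a_has_read.wait()
    counter += 1             # lands between A's read and A's write
    b_has_written.set()

a = threading.Thread(target=writer_a)
b = threading.Thread(target=writer_b)
a.start(); b.start()
a.join(); b.join()

print(counter)               # 1, not the 2 both writers "intended"
```

With a lock around the read-modify-write, both increments land and the counter reads 2. Without one, the code passes in most interleavings and silently loses data in the rest, which is exactly the kind of failure a regenerate-and-retry loop papers over rather than fixes.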
The people most enthusiastic about vibe coding are, almost universally, people who haven't had to debug something broken in production. They'll learn. The question is how much damage accumulates before they do.
The Junior Pipeline Collapse
Craftsmen weren't born. They were made.
Every senior engineer I know came up through the same pipeline: junior role, mentorship, code review, gradually increasing responsibility, years of accumulated failure modes burned into muscle memory. That pipeline is breaking.
A Stanford study analyzing payroll records from ADP—millions of workers, tens of thousands of companies—found that employment for software developers aged 22-25 declined nearly 20% from its peak in late 2022. Not a dip. A structural contraction. The study calls these workers "canaries in the coal mine" for the broader labor market impacts of generative AI.
The industry signals are consistent. Salesforce CEO Marc Benioff announced the company would hire no new software engineers in 2025, citing 30% productivity gains from AI. Coinbase's CEO fired engineers who didn't adopt AI tools within a week of his mandate. The message to junior developers is clear: learn to prompt, or don't bother applying.
But prompting isn't programming. AI accelerates output; it doesn't teach judgment. We're creating a generation that can generate code but can't evaluate it—prompt-only engineers who can direct a model but have no framework for knowing when the model is wrong. And the model is wrong often enough that the Stack Overflow 2025 Developer Survey found more developers now distrust AI output (46%) than trust it (33%).
The pipeline that creates craftsmen depends on junior engineers making mistakes, getting feedback, and building mental models of how systems work. Remove the entry point, and the pipeline starves. The senior developer shortage of 2029-2032 is already locked in.
The Knowledge Gap
Programming knowledge compounds. Strip away the compounding, and you don't just lose skills; you lose the ability to rebuild them.
Every bug you fix, every edge case you encounter, every system you rebuild from scratch—it all accumulates into intuition. That intuition separates a senior developer from a junior one; it's not syntax, it's failure modes. Agentic coding short-circuits that accumulation.
If you can ship features without understanding the code, why invest the time to learn it? This isn't laziness; it's rational behavior given broken incentives. Companies reward velocity, not comprehension. Managers can't tell the difference between code that works and code that merely appears to work. If the AI can produce something that passes review faster than you can learn to produce it yourself, why learn?
GitClear's analysis of 211 million changed lines of code found a 4x increase in code cloning since AI tools became prevalent—the first time in the history of their dataset that copy/paste exceeded code reuse. Refactoring dropped from 25% of changed lines to under 10%. The ability to consolidate previous work into reusable modules—one of the essential advantages human programmers have over machines—is eroding.
The first generation of vibe coders will probably survive. They'll have colleagues who learned the old way; who can step in when things go sideways. But the second generation? Developers trained entirely on AI-assisted workflows, who never debugged without hints, who never built a mental model of how systems work? And the third generation, trained by the second?
Twenty-five percent of Y Combinator's Winter 2025 batch has codebases that are 95% AI-generated. YC insists these are technically capable founders who chose not to write the code themselves. Fair enough. But who maintains those codebases in three years? Who debugs them when the founder has moved on and the context windows have vanished?
At some point, you have a codebase that nobody fully understands, maintained by people who were never taught how to understand it, supported by AI that doesn't understand it, built for users who will find every edge case you didn't think to test.
That's not a technical problem. That's an organizational time bomb. The fuse is already lit.
The Quality Illusion
AI-generated code, we're told, is "good enough."
I don't buy it.
What I see is survivorship bias at scale. The demos that go viral are the ones that work. The projects that get written about are the ones that shipped. Nobody's writing blog posts about the AI-generated code that introduced subtle bugs caught only in production; nobody's tracking the long-term maintenance burden of systems built by vibe coding; nobody's measuring the security vulnerabilities that slipped through because the person reviewing the code didn't understand it well enough to spot the problems.
"Good enough" is doing a lot of heavy lifting. Good enough for a demo? Sure. For a prototype? Probably. For a side project with three users? Fine. For a financial system processing millions of transactions? For medical software? For anything where failure has consequences?
Not convinced.
And the data supports the skepticism. The Stack Overflow 2025 survey found that positive sentiment toward AI tools dropped from over 70% in 2023-2024 to 60% in 2025; developers are spending more time fixing AI-generated code than writing their own. People are using tools they don't trust because the market demands it. That's not adoption. That's capitulation.
Take the most recent example: the C compiler written by Anthropic's Opus 4.6. The model generated what was billed as a working C compiler, an impressive technical achievement that made headlines. But when developers actually tried to use it, it couldn't compile a simple "Hello, World!" program. Anthropic's announcement highlighted the achievement; independent testing keeps revealing the gap between demo performance and real-world reliability.
The merge rate statistics everyone cites don't measure what matters. They measure whether code passed human review. But the humans reviewing it are subject to the same limitations—reviewing code they didn't write, under time pressure, often without deep understanding of the context. If the reviewer is also vibe-coding their way through life, what exactly is being validated?
That's not engineering. That's gambling with other people's money.
The Economic Reality
This is also about jobs. Nobody in AI leadership wants to say that part out loud; they prefer the euphemism about "freeing developers for more interesting problems."
The narrative goes like this: AI can generate code that's good enough for most purposes, so demand for traditional programming skills will drop. Developers will be freed to work on more interesting problems. Everyone wins.
That narrative is built on wishful thinking.
First, I'm skeptical that AI-generated code is good enough—for the reasons already laid out. Companies are perceiving it as good enough because they can't tell the difference. They're shipping faster; costs are down; the problems haven't materialized yet. But "haven't materialized yet" is not the same as "won't materialize." We're in the honeymoon phase. The real test comes in two or three years, when these systems need to be maintained, extended, debugged, and secured.
Second, the "more interesting problems" argument is wishful thinking layered on wishful thinking. Most software work isn't pushing boundaries. It's building another CRUD app, another internal tool, another e-commerce site. That's exactly the work AI is supposedly good at. If AI takes over the 80% of work that's routine, what's left is a much smaller market; "work on more interesting problems" means "compete for fewer jobs."
Third—and this is the part nobody wants to say out loud—if AI becomes as capable as promised, it will eventually come for the interesting problems too. The argument that humans will always be needed for the hard stuff assumes AI capabilities plateau. That's a big assumption. Maybe it's true. Betting your career on it seems risky.
I don't have clean answers here. I think the profession is going to look different in five years. I think a lot of people celebrating AI coding tools are going to find themselves competing against those same tools. And I think the quality problems are going to surface in ways that aren't pretty.
The Second-Order Effects
The first-order effects of agentic coding are obvious: faster development, lower barriers to entry, reduced demand for traditional programming skills. It's the second-order effects—the downstream consequences that take years to manifest—that concern me more.
The erosion of systematic thinking. Programming teaches you to break complex problems into smaller ones; to think about edge cases, failure modes, how systems degrade under pressure. That mental framework doesn't come from describing what you want to an AI. It comes from struggling with the code yourself, from hitting walls and breaking through them. The struggle is the curriculum. Remove the struggle, and you remove the learning.
The filter effect. Agentic coding separates those who see programming as craft from those who see it as obstacle. The uncomfortable truth is that the obstacle-people might win. There are more of them; they're cheaper; they ship faster. And if the quality problems take years to manifest, by the time anyone notices, the craftsmen will already be gone. You can't rebuild institutional knowledge once it's lost. You can't hire senior developers who understand the fundamentals if nobody learned the fundamentals. The filter doesn't just separate the two groups—it might eliminate one of them.
The incentives are pushing hard in this direction. I don't see much pushing back.
Questions Worth Asking
Before you dismiss this as nostalgia, run an honest audit:
How many engineers on your team can debug a production incident without asking an AI for help?
When was the last time a junior developer on your team shipped code they wrote and understood line by line?
If your three most senior engineers left tomorrow, who on the team understands the architecture well enough to make sound decisions about it?
Are you hiring for the ability to evaluate code, or for the ability to generate it?
What's your plan for the codebase in three years, when the engineers who built it have moved on?
If those questions made you uncomfortable, that's the point.
What I'm Doing
While I still have plenty of skepticism about vibe coding and AI-first trends, in 2026 I'm running a few experiments: building micro-SaaS products solo, using teams of agents to handle building and operating each business end to end. You can follow them:
StructPR — Code review, reorganized
ShipLog — Feedback board, changelog, and embeddable widget for solo SaaS founders
AuroraGRC — Compliance management for Canadian regulations (partially)
This isn't a contradiction. It's a test. If agentic coding can build and operate real businesses that serve real users, I want to see it work—or fail—with my own codebase. Maybe the quality problems will materialize. Maybe they won't. But I'd rather find out by building than by speculating.
I keep coming back to that comment: "I never knew there was an entire subclass of people in my field who don't want to write code." The implicit assumption was that wanting to write code was universal among programmers. It wasn't. We just didn't have a way to separate the two groups until now.
Industries change. Skills become obsolete. That part is normal. What isn't normal is the speed; we're compressing a generational transition into years, and the institutional knowledge that should carry us through it is the first thing we're discarding.
The market has decided. But markets have been wrong before.
In the next post, I'll look at the other side of this—because the craftsman's guild also failed a lot of people. The 3-12 month IT backlogs. The gatekeeping. The "learn to code or you don't get to build" barrier. There's real loss in what's happening now; there's also something worth examining in what the builders inherit.
But that's Part 2.
For now, I'm going to go write some code by hand. While I still can.