There are two kinds of programmers: those who fell in love with the craft—the elegance, the syntax, the satisfaction of writing clean code—and those who just want to build things. Agentic AI coding tools are exposing a divide that’s always been there. For the first time, the second group can build without knowing the craft, and it turns out they’ve been the majority all along.
Programming languages were designed for humans. Their creators optimized for human time and understanding: Python emphasizes programmer productivity, Rust invests in clear, helpful error messages, Ruby aims for naturalness, and Elixir is built for joy in coding. But when AI writes the code, these human-friendly features lose their value. AI doesn't care about readability or elegant syntax; it cares about rigor, type systems, and verifiable correctness. The languages that once seemed "hard for humans" are perfect for machines. Future languages may be verbose, ugly, and heavily typed, designed not for us, but for AI.
And it gets worse. AI tools mostly generate code in popular languages like Python and JavaScript because they have the most training data. This creates a feedback loop that crushes niche or innovative languages that rely on smaller communities. The very ecosystem that fuels language innovation—human readers and craftsmen who care about design—faces extinction. Without humans to appreciate new designs, language evolution stalls.
There’s also a new phenomenon called “vibe coding”: you describe what you want, let AI spit out code, and if it “feels right,” you ship it. This isn’t some fringe satire; it’s becoming standard. The problem? Understanding the code is optional. We’ve always had copy-paste programmers, but now AI can generate thousands of lines in minutes. The ratio of code written to code understood is collapsing. Success is measured by whether the code compiles and passes tests, not whether it works under real-world conditions.
This leads to what I call abstraction debt: the hidden cost of depending on code you can't understand or evaluate. Unlike technical debt, which you take on knowingly and can plan to repay, abstraction debt accumulates silently. As Peter Naur warned decades ago, changes made without understanding degrade a program's structure. Today, entire codebases are generated by models that don't understand intent and maintained by people who never learned the craft. When these systems fail in production, there's no author to call and no design to consult, just fragile code assembled statistically, not thoughtfully.
Debugging this AI-generated code is a nightmare. It’s forensic work requiring a mental model of the system. But AI can’t explain its reasoning because it doesn’t reason. It can generate patches, but those patches often introduce new bugs. Senior engineers call this “development hell.” You can vibe your way through features, but you can’t vibe your way through race conditions, security flaws, or memory leaks that only appear under load. The people most enthusiastic about vibe coding are usually those who haven’t had to debug a real, broken system yet.
The pipeline that creates craftsmen—junior roles, mentorship, code review, gradual growth—is collapsing. Studies show software developer employment among 22-to-25-year-olds dropped 20% since late 2022. Companies like Salesforce say they won’t hire new engineers because AI boosts productivity; others fire employees who don’t adopt AI quickly. The message is clear: learn to prompt or don’t bother. But prompting isn’t programming. It doesn’t teach judgment, only code generation. We’re creating a generation that can generate code but can’t evaluate it. The senior developer shortage in the late 2020s is baked in.
Programming knowledge compounds through experience: fixing bugs, understanding edge cases, rebuilding systems. AI short-circuits that. If you can ship features without understanding them, why invest the effort? The incentives reward velocity, not comprehension. Data shows a 4x increase in code cloning since AI assistants became common, while refactoring has dropped sharply: duplication is up, and the discipline of reuse, the heart of craftsmanship, is eroding. Some startups now run codebases that are 95% AI-generated. Who will maintain those in three years? Who can debug them? The time bomb is already lit.
There’s an illusion that AI-generated code is “good enough.” I don’t buy it. The demos that go viral are the exceptions, not the rule. Positive sentiment toward AI tools is declining; developers spend more time fixing AI code than writing their own. Even impressive achievements, like an AI-generated C compiler, fail on basic tests in practice. Merge rates measure human acceptance, but reviewers are under pressure and often don’t understand the code either. This isn’t engineering; it’s gambling with other people’s money.
This also impacts jobs. The rosy narrative that AI frees developers for interesting problems ignores that most software work is routine. AI taking over 80% of routine work shrinks the market. And if AI progresses, it will encroach on “hard” problems too. Betting your career on AI capabilities plateauing is a risky gamble.
The deeper effects are alarming. Programming teaches systematic thinking: breaking down complexity, anticipating failure modes. That learning requires struggle; remove the struggle and you lose the curriculum. Agentic coding filters out craftsmen, leaving only those who see programming as an obstacle to get past. The quality problems may take years to surface, but by then the craftsmen will be gone and the institutional knowledge lost with them.
Ask yourself: how many on your team can debug production issues without AI? When did a junior developer last ship code they wrote and understood? If your senior engineers left tomorrow, who understands your architecture? Are you hiring for code evaluation or just code generation? What’s your plan when the builders move on?
I’m skeptical but not dismissive. In 2026, I’m launching experiments building micro-SaaS products entirely with AI teams to test these ideas firsthand. The future might work—or fail spectacularly. I’d rather find out by building than by guessing.
That comment I read stuck with me: “I never knew there was an entire subclass of people in my field who don’t want to write code.” We were naive to think the craft was universal. The market has chosen its path, but markets have been wrong before.
You can read the full article—with all the data and sources—on ThePragmaticCTO Substack.