The Pragmatic CTO
The Pragmatic CTO Podcast
Audio: AI Wrote the Code. Who Gets the Tax Credit?

AI is transforming software development, but it’s also reshaping how we qualify for R&D tax credits. The big question: if AI wrote the code, who gets the credit?

Two top SR&ED consultants look at the same developer using AI tools and come to opposite conclusions. One says calling an AI API or prompting it is routine implementation, not eligible for tax credits, because the uncertainty was resolved by the AI's training, not by the developer. The other says AI doesn't disqualify the work as long as the developer leads the experimentation. Same developer, same tool, same code, yet different framing, different documentation, and different outcomes. The difference isn't technical; it's about how you document the human's role in the process. And that distinction can be worth millions in refundable tax credits.

No government has issued clear guidance, and the existing tests were written with human researchers in mind, not AI collaborators. If your engineering team uses AI tools and you claim R&D credits, your tax position hinges on your documentation. It might work out, or it might not.

Here’s the catch: R&D tax credits used to be a finance problem, but now they’re an engineering problem. Canada doubled its SR&ED expenditure limit to six million dollars and launched AI-powered claim reviews, yet still hasn’t clarified how AI affects eligibility. The US recently restored immediate expensing for R&D and requires more granular disclosure, while deploying AI tools to select audits. Neither the Canadian nor US statutes specify that research must be done by humans, but that silence leaves everything up to interpretation and documentation. Governments are accelerating AI adoption but ignoring the tax credit implications, leaving companies and CTOs to navigate a regulatory vacuum.

And it gets worse. AI boosts developer productivity by automating parts of the work, but in doing so it reduces the hours spent on qualifying R&D activities. Since the credits are tied to wages allocated to research, AI use mechanically shrinks your credit. For example, a Canadian developer who spent half their time on eligible R&D might have generated over forty thousand dollars in tax credits; if AI cuts that qualifying time to 20 percent, the credit drops by 60 percent. In the US it's even trickier because of the "substantially all" rule: if a developer spends less than 80 percent of their time on qualified research, only a proportional share of their wages counts, not 100 percent. AI can push developers below that threshold, slashing credits. So the same AI that improves your team's output can erode your tax benefits. The only way out is reframing what counts as qualifying activity, and that lives or dies in your documentation.
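The arithmetic above can be sketched in a few lines. The salary, proxy overhead, and credit rate below are illustrative assumptions, not official figures; the exact rules depend on your jurisdiction and advisor:

```python
# Hedged sketch: how AI-driven productivity can shrink wage-based R&D credits.
# All rates and salaries are illustrative assumptions, not official figures.

def sred_credit(salary: float, rd_fraction: float,
                proxy_overhead: float = 0.55, itc_rate: float = 0.35) -> float:
    """Rough SR&ED-style credit: qualifying wages, grossed up by an assumed
    proxy overhead amount, times an assumed refundable credit rate."""
    qualifying_wages = salary * rd_fraction
    return qualifying_wages * (1 + proxy_overhead) * itc_rate

def us_qualified_wages(salary: float, rd_fraction: float) -> float:
    """US 'substantially all' rule as described in the text: at >= 80%
    qualified time the full wage counts; below that, only the pro-rata share."""
    if rd_fraction >= 0.80:
        return salary
    return salary * rd_fraction

before = sred_credit(120_000, 0.50)  # 50% of time on eligible R&D
after = sred_credit(120_000, 0.20)   # AI cuts qualifying time to 20%
drop = 1 - after / before            # 60% smaller credit, whatever the rates
```

Because the credit is linear in qualifying time, the 60 percent drop holds regardless of the assumed rates; only the absolute dollar figures depend on them.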

Which brings me to the real point: with AI writing more code, traditional evidence like commit histories and code comments no longer prove human-led R&D. The code might be identical, but the process behind it is different. Documentation isn’t just a record anymore; it is the R&D. You have to prove a human drove the investigation: defined the uncertainty before prompting AI, formed a hypothesis, ran experiments by iterating with AI, evaluated results including failures, and advanced knowledge—not just delivered working code. This means developers need to write down what they don’t know, why they chose a particular approach, what prompts and iterations they tried, what didn’t work, and what was learned. Even informal notes in Jira tickets, Slack threads, or PR descriptions can be crucial. AI logs can help, too, since they show the cycle of hypothesis and evaluation. Without this, the tax authorities will see AI-assisted coding as routine implementation.
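As a rough illustration of what "documentation as the R&D" might capture, here is a minimal log-entry structure. The schema, field names, and the sample values are hypothetical, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class RnDLogEntry:
    """One entry in a developer's investigation log (hypothetical schema).
    It mirrors the human-led cycle described above: uncertainty first,
    then hypothesis, AI-assisted experiments, failures, and what was learned."""
    uncertainty: str                                  # what we didn't know before prompting the AI
    hypothesis: str                                   # the approach chosen, and why
    experiments: list = field(default_factory=list)   # prompts and iterations tried
    failures: list = field(default_factory=list)      # what didn't work
    learned: str = ""                                 # knowledge gained beyond shipping code

# Illustrative entry (invented numbers and feature, for shape only)
entry = RnDLogEntry(
    uncertainty="Unclear whether a streaming parser can stay under 50 ms p99",
    hypothesis="Chunked SAX-style parsing should avoid full-DOM allocation",
)
entry.experiments.append("Prompt v1: naive DOM parse -> 210 ms p99")
entry.failures.append("DOM approach exceeded the latency budget")
entry.learned = "Allocation, not parsing, dominated latency"
```

Whether this lives in Jira, a PR description, or a dedicated log matters less than capturing each field while the work happens, not after the fact.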

The key principle all major tax consultancies agree on is that the developer must be the dominant actor. If a developer uses AI as a tool to test hypotheses and systematically investigate a problem—documenting that process—they can claim credits. But if they just ask AI to build a feature, tweak the output, and ship without documenting uncertainty or experimentation, that’s routine work, and no credit. The code alone won’t save you. Auditors want to see that the human drove the research, not just the AI.

I’m not a tax attorney, but I’ve worked on many R&D claims and seen how easily companies trip up. You need to have this conversation with your SR&ED or R&D credit advisor. Ask how AI use affects your eligibility, how to track developer time on qualifying work, and how to document that investigation effectively.

Here’s what you should do now: talk to your tax consultant about AI in your engineering process if you haven’t already. Make sure your developers document their systematic investigation as it happens, not after the fact. Know how much of their time qualifies for R&D credits and whether AI has shifted that balance. Think about the last sprint: if someone prompted an AI tool, iterated until it worked, and shipped, can they prove that was research or just routine implementation? If not, your credit is at risk.

You can read the full article—with all the data and sources—on ThePragmaticCTO Substack.

