<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[The Pragmatic CTO: Quick Takes]]></title><description><![CDATA[Quick thoughts on tech trends and decisions. Less formal than the main newsletter, more useful than most industry commentary.]]></description><link>https://www.thepragmaticcto.com/s/quick-takes</link><image><url>https://substackcdn.com/image/fetch/$s_!uX8m!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb359157b-2590-4841-a110-c8319040470b_500x500.png</url><title>The Pragmatic CTO: Quick Takes</title><link>https://www.thepragmaticcto.com/s/quick-takes</link></image><generator>Substack</generator><lastBuildDate>Sat, 18 Apr 2026 13:11:59 GMT</lastBuildDate><atom:link href="https://www.thepragmaticcto.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Allan MacGregor 🇨🇦]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[thepragmaticcto@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[thepragmaticcto@substack.com]]></itunes:email><itunes:name><![CDATA[Allan MacGregor 🇨🇦]]></itunes:name></itunes:owner><itunes:author><![CDATA[Allan MacGregor 🇨🇦]]></itunes:author><googleplay:owner><![CDATA[thepragmaticcto@substack.com]]></googleplay:owner><googleplay:email><![CDATA[thepragmaticcto@substack.com]]></googleplay:email><googleplay:author><![CDATA[Allan MacGregor 🇨🇦]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Maybe OpenClaw Needed This]]></title><description><![CDATA[The OpenAI acqui-hire of OpenClaw is getting predictable reactions from two camps: "open source capture" from one side, "security nightmare 
validation" from the other.]]></description><link>https://www.thepragmaticcto.com/p/maybe-openclaw-needed-this</link><guid isPermaLink="false">https://www.thepragmaticcto.com/p/maybe-openclaw-needed-this</guid><dc:creator><![CDATA[Allan MacGregor 🇨🇦]]></dc:creator><pubDate>Mon, 16 Feb 2026 15:18:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/093783e9-8f05-4148-8bea-098a1cac0671_1536x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The OpenAI acqui-hire of OpenClaw is getting predictable reactions from two camps: "open source capture" from one side, "security nightmare validation" from the other. What's missing from both takes: this might be exactly what OpenClaw needed. Viral hype, one developer burning $10-20K monthly, 1.5 million deployed agents with real security problems that a solo project couldn't solve. Sometimes Big Tech acquisition is the right answer.</p><p>Consider what OpenClaw achieved and what it cost Steinberger to maintain it &#8212; 180,000 GitHub stars in three months, the fastest-growing open-source project in GitHub history, 1.5 million agents deployed in the wild. He built the first prototype in an hour, then found himself maintaining viral-scale infrastructure while bleeding five figures every month. The security establishment raised legitimate concerns: twenty percent of the skills marketplace was malicious, secrets were stored in plaintext, and the permission model broke every traditional security assumption about least-privilege access. One talented developer wasn't going to solve enterprise security architecture, build sustainable infrastructure, and maintain community velocity at the same time.</p><p>Steinberger's own framing matters here: "What I want is to change the world, not build a large company, and teaming up with OpenAI is the fastest way to bring this to everyone." 
He insisted on the foundation model specifically &#8212; OpenClaw stays open source, the community continues building, but he gets the resources to architect what comes next.</p><p>Compare the alternatives he had on the table. Meta's pitch was to turn OpenClaw proprietary, layer it on their infrastructure, and build agentic commerce on top of three billion users. OpenAI's pitch: keep it open, establish the foundation, bring Steinberger in to design the next generation with actual engineering resources behind him. For someone who built PSPDFKit to a 100 million euro outcome and understands open-source sustainability economics, the choice tracks.</p><p>The security problems were real and growing faster than one person could address them. Twenty percent malicious skills in the marketplace; plaintext credential storage in home directories; permission models that Cisco, CrowdStrike, and Sophos correctly identified as fundamentally broken for autonomous agents. OpenClaw needed dedicated security engineering, infrastructure designed for scale, and governance frameworks that could actually constrain agent behavior &#8212; not just more GitHub issues and community PRs from well-meaning contributors.</p><p>The foundation model directly addresses the "capture" concern that has everyone worried. Steinberger could have taken Meta's offer, gone fully proprietary with a massive user base built in, and secured a significant exit. Instead: open source continues, OpenAI commits to support the foundation, and the community maintains access to the project that went viral. It's the Chrome/Chromium playbook, which deserves its criticisms around governance and influence, but it's categorically different from "promising startup gets acquired and shut down."</p><p>Not every open-source project needs to stay solo to stay pure; some ideas hit a scale where they need institutional backing to reach their potential without collapsing. 
OpenClaw hit viral velocity before it had infrastructure that could support that velocity, and Steinberger was funding the gap personally while the security problems multiplied. The real question wasn't "acquire or stay independent" &#8212; it was "which acquisition structure preserves what made this valuable while solving the sustainability and security crisis."</p><p>The real test is what happens in the next six months. Does the foundation maintain actual independence, or does it become a rubber stamp for whatever OpenAI wants? Does OpenAI's internal agent work stay aligned with the open-source version, or do they diverge into proprietary territory? Does the security architecture get rebuilt with proper engineering resources, or does it get ignored because shipping agents is more important than securing them?
We'll know soon enough.</p>]]></content:encoded></item><item><title><![CDATA[Your AI Tools Aren't Making You More Productive]]></title><description><![CDATA[But they could be, if you take care of the fundamentals first]]></description><link>https://www.thepragmaticcto.com/p/your-ai-tools-arent-making-you-more</link><guid isPermaLink="false">https://www.thepragmaticcto.com/p/your-ai-tools-arent-making-you-more</guid><dc:creator><![CDATA[Allan MacGregor 🇨🇦]]></dc:creator><pubDate>Fri, 06 Jun 2025 17:01:49 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/6e50c7f8-3186-41c4-8db9-44c42b978244_3999x2666.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>It seems everyone today is talking about AI and betting hard on it delivering significant productivity gains. I've heard so many versions: it's making engineers 10x more productive, 100x more productive; it'll replace entire software development teams; it's revolutionizing the work of content managers, product managers, and project managers&#8212;you name it.</p><p>There are even companies drinking the Kool-Aid in one gulp, declaring themselves "AI First" without ever defining what that actually means.</p><p>Here's what I suspect is actually happening: your AI tools aren't making you more productive. They're making you busier.</p><p>I think teams are likely spending more time on training, technical debt cleanup, and integration overhead than they're saving on code generation. Here are the key areas where I see problems arising:</p><ul><li><p><strong>The training tax seems inevitable.</strong> Senior developers likely need weeks to learn effective prompting. Junior developers risk becoming dependent on AI suggestions without understanding underlying patterns.
Code review processes probably become more complex when you're debugging both human logic and AI-generated assumptions.</p></li><li><p><strong>Then there's the data problem.</strong> Teams often assume their codebase is clean enough for AI to understand. Based on my experience with legacy systems, I suspect it's not. Teams probably spend months cleaning up documentation and refactoring code so AI tools can provide useful suggestions. That's not productivity&#8212;that's paying off technical debt you should have addressed years ago.</p></li><li><p><strong>The security implications worry me most.</strong> Based on my experience in regulated industries, adding AI tools means adding new attack vectors. Your threat model expands to include prompt injection, data leakage through AI APIs, and the challenge of auditing AI-generated code for compliance.</p></li></ul><p>I suspect many teams will trade immediate coding speed for long-term maintenance complexity; worse, they might not even realize they're making that trade.</p><p><strong>Do I think there are no productivity gains to be had from AI?</strong> No, I think the productivity gains can be real&#8212;just not for most teams yet. The companies actually seeing value from AI aren't the ones with the flashiest implementations.
<strong>They're the ones who did the boring foundational work first.</strong></p><p>The fundamentals that actually matter:</p><ul><li><p><strong>Clean data</strong> - AI can't fix messy, inconsistent data</p></li><li><p><strong>Clear documentation</strong> - If your documentation is scattered or outdated, AI tools will just hallucinate more confidently</p></li><li><p><strong>System understanding</strong> - If your team can't understand your codebase, AI won't magically fix that</p></li><li><p><strong>Quality processes</strong> - Teams need review and validation practices to check AI output, not just accept it</p></li></ul><p>The productivity gains are real, but they come after you solve the integration problems no one wants to talk about.</p><div><hr></div><p><em>What's your experience with AI tool adoption? Are you seeing the promised productivity gains, or spending more time on the integration overhead?</em></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.thepragmaticcto.com/p/your-ai-tools-arent-making-you-more/comments&quot;,&quot;text&quot;:&quot;Leave a comment&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.thepragmaticcto.com/p/your-ai-tools-arent-making-you-more/comments"><span>Leave a comment</span></a></p>]]></content:encoded></item></channel></rss>