Without taking a position on the whole issue:
As someone married to a professor of veterinary medicine, it seems to me that comparing the results of AI coding to the results of evolution ought to make you nervous. https://wiki.oddly-influenced.dev/view/welcome-visitors/view/your-body-is-a-gross-kludge
The lesson of medicine is that bodies are not designed for understandability or non-planned-for maintenance. You're essentially conceding that humans working on AI code will be as expensive and failure-prone as physicians working on bodies.
It's worth considering how bad humans are at predicting nonlinear effects (Dörner's /The Logic of Failure/ is good on this: https://www.hachettebookgroup.com/titles/dietrich-dorner/the-logic-of-failure/9780201479485/?lens=basic-books). That's the root of the oft-quoted Hemingway bit:
“How did you go bankrupt?” Bill asked.
“Two ways,” Mike said. “Gradually and then suddenly.”
That's an effect of how profoundly bad we are at understanding exponential growth.
If it were my money at stake, I'd want some assurance that if AI hits a wall, I won't be left with the equivalent of a nasty autoimmune disease. That is: What are the risks? How will you monitor them? What's your disaster recovery plan?
The problem with a ZIRP is that those questions are b-o-r-i-n-g and you can't compete with those who skip them. You're out of business before they crash. ("The market can remain irrational longer than you can remain solvent.")
Similarly, there's a collective action problem. Our society is structured such that when the optimists' predictions go wrong, they don't pay for their mistakes; society as a whole does. See housing derivatives in 2008, the Asian financial crisis of the late '90s, etc. ZIRP makes it cheaper to be an optimist, but someone else pays the bill for failure (Silicon Valley Bank, the Savings and Loan crisis).
It's weird to see ZIRP touted as a model, given the incredible overspending that took place, which had to be clawed back once ZIRP went away. (Most notably in tech layoffs, but I'm more concerned about all the small companies that were crushed because of financials, not because of the merit of their products.)
Please extend your analogy to the end of the AI ZIRP environment. Or will line go up forever?
Thank you for this comment. I definitely didn't think through the long-term implications of all of this; I was mostly reacting to the first-order effects I'm noticing as I explore these tools. I'll spend some more time thinking it through to the end and see if I can come up with something interesting to say about it.
It's hard to predict, though, because we're still in the very early stages of the AI ZIRP environment (and because non-linear effects are hard to predict!). The models don't show signs of slowing down, and we've only found a tiny fraction of the capabilities of even gpt-4-class models. Things are likely to get more extreme than I outlined in this post before we need to start doing anything about it.
To your point about medicine and the human body... I look to science fiction for a comparison here. A few stories have explored the idea of a technology that can solve health problems and heal people instantly (nanobots, a wand you point at a wound to seal it up, etc.). Most explore it being used expertly (which is the lens I'm looking at this technology through for software), but Futurama (among others, I imagine) explores what it looks like in the hands of a non-expert, with Dr. Zoidberg delivering lines like "Fry, remind me - disemboweling your species: fatal or nonfatal?"
I think there's still a lot to learn and there are going to be a lot of mistakes that will be tough for companies and products to recover from (which has always been the case!). But used expertly I suspect a lot of our long-held beliefs about what constitutes "good software" are going to turn out to be wrong when you have AI writing it for you.
Why would you think it would be hard to predict, though?
I've been on many projects that experienced "accelerated" growth. The tech debt they incur eventually catches up with them, no matter how many experts they employ. There's no reason to think that code generated by today's "AI" won't exhibit the same problems; the AI isn't any better at avoiding race conditions, thundering herds, noisy neighbors, or any of the other "at scale" problems we've encountered over the last decade or so in "growth at all costs" startups.
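To make that concrete, here's a minimal sketch (hypothetical names, not from the original post) of the kind of "at scale" problem I mean: a cache read that looks perfectly reasonable in isolation but stampedes the database the moment a hot key expires under real traffic. Nothing in the local context flags it, so an LLM will happily generate it.

```python
import time

# Naive in-process cache: correct-looking in isolation, a thundering herd at scale.
CACHE: dict[str, tuple[float, str]] = {}
TTL_SECONDS = 60

def expensive_db_query(user_id: str) -> str:
    # Stand-in for a slow query; in production this is where the pile-up lands.
    time.sleep(0.5)
    return f"profile:{user_id}"

def get_profile(user_id: str) -> str:
    entry = CACHE.get(user_id)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]
    # When a hot key expires, every concurrent request falls through to this
    # line at once and hammers the database simultaneously.
    value = expensive_db_query(user_id)
    CACHE[user_id] = (time.time(), value)
    return value
```

The fixes (per-key locking, jittered TTLs, request coalescing) are well known, but someone still has to recognize that they're needed here.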
None of those projects were ever improved (on technical merits) by writing more code faster.
The only thing we know _for sure_ about the "experts" using "AI" is that the "AI" is _not actually helping_, but people still _imagine_ that it somehow is: https://metr.org/blog/2025-07-10-early-2025-ai-experienced-os-dev-study/
This is basically still the ELIZA effect (https://en.wikipedia.org/wiki/ELIZA_effect) and we've known about it for ~60 years.
"AI can untangle any mess" - not even close. Your general point is valid, but only applies in situations where the operator is competent and experienced and already knows how to manage technical debt. Current gen LLMs can't identify when something needs to be refactored and will let debt and indirection pile up until they're incapable of making progress.
Yeah, that part ("Current gen LLMs can't identify when something needs to be refactored") is what keeps me optimistic about the job prospects of software engineers. Though the work is much different from what we're used to: it's much faster to run more of a "bulk and cut" process with LLMs (with a competent and experienced operator!) than to try to grow the software with a perfect design up front.
I like the cool text in terminal style screenshots you use. Which tool is that?
I think there are a bunch out there; the one I'm using is Code Image (https://codeimage.dev/) because it's open source (https://github.com/riccardoperra/codeimage), which made it possible for me to customize it a bit with Claude Code for these posts :)
Your writing style is great. Has some similarities to the writing style of Shaan Puri.
I found your Substack through Hacker News.
Genuinely thought provoking. One of the more novel takes I’ve read on the topic, so kudos! Every bit of intuition I have tells me this is wrong, but what if it’s right? Great read.
> ZIRP taught us that when money is free, the winners are those who borrow the most
I love this take, but… Who would you say the biggest winners were in ZIRP? Number 1 is Adam Neumann, right?