The seed library metaphor is the best framing I've seen for this. Especially the ending — the librarian can see the gardens are better but can't see what's in them.
I've been running into a version of this at the practice level, not code. When you build a governed AI workflow — decision logs, cross-project handoffs, standing policies — the same fork-and-customize dynamic kicks in. Everyone builds their own version. The gardens are thriving. But nobody can see what's working across them.
Your librarian's corkboard is the right instinct. I keep wondering whether the missing piece isn't a platform for sharing artifacts but one for making the patterns legible — what changed, why, and what it revealed.
That’s really interesting! I’ve been having conversations to try to figure out what that platform might be and I’ve been struggling. Can you share a bit more about what you have in mind for making the patterns legible? Would love to hear more!
That's the question I keep circling back to. The short version: I think legibility comes from persistence, not observation. You can't make patterns visible by watching for them in real-time — you need a structure that accumulates decisions over time so the patterns reveal themselves.
What I've been building is essentially a file-based governance layer that sits between the operator and the AI. Not a prompt library — more like institutional memory. Every decision gets logged, constraints compound across sessions, and the system starts reflecting the operator's judgment back at them. The patterns become legible because they're recorded, not because someone's looking for them.
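Concretely, the log can be as simple as an append-only JSONL file. This is just a sketch of the idea, with invented file names and fields, not the actual system:

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_PATH = Path("decisions.jsonl")  # hypothetical location for the decision log

def log_decision(decision: str, rationale: str, constraints=None, path: Path = LOG_PATH):
    """Append one decision as a JSON line; the log is only ever extended, never rewritten."""
    entry = {
        "ts": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "rationale": rationale,
        "constraints": constraints or [],
    }
    with path.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry

def active_constraints(path: Path = LOG_PATH):
    """Replay the whole log, so constraints compound across sessions."""
    constraints = []
    if path.exists():
        for line in path.read_text(encoding="utf-8").splitlines():
            constraints.extend(json.loads(line)["constraints"])
    return constraints
```

The point is the replay step: the patterns surface because every decision is persisted and re-read, not because anything is watching in real time.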
I actually wrote up the longer version of this thinking yesterday if you're curious:
https://theintelligenceengine.substack.com/p/the-workspace-layer-what-sits-between
Would love to hear whether it maps to what you've been exploring. Your architecture piece on complexity and material hit a similar nerve from a different angle.
Here's to the map! Also, I like the story and the interleaved format; very nice metaphor, and it complements the point.
Here’s to the map!
And appreciate the note, I went back and forth on whether to keep the story, glad you enjoyed it :)
It seems like fork-and-forget is the problem. If the original repo were able to keep a list of forks and check on them periodically, perhaps agentically, then that might help build a kind of map, or graph, with forks as the edges and potential merge candidates as the nodes.
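A rough version of that map is buildable today, since GitHub's REST API exposes a repo's direct forks via `GET /repos/{owner}/{repo}/forks`. A sketch, with the graph-building kept separate from the one network call:

```python
import json
import urllib.request

def fetch_forks(owner: str, repo: str):
    """List direct forks via GitHub's public REST API (unauthenticated, so rate-limited)."""
    url = f"https://api.github.com/repos/{owner}/{repo}/forks?per_page=100"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def fork_graph(root: str, forks):
    """Build the map: repos (potential merge candidates) as nodes, fork relationships as edges."""
    nodes = {root} | {f["full_name"] for f in forks}
    edges = [(root, f["full_name"]) for f in forks]
    return nodes, edges
```

Walking forks-of-forks would mean repeating `fetch_forks` on each fork, which is where doing it agentically (and respecting rate limits) would matter.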
Great piece - it is interesting to think about how to capture all the edge case knowledge which is so valuable and now going to be so fragmented.
I wonder if there is a world where a free coding agent somehow creates a distributed feedback loop - use the agent for free, in exchange your micro-app goes on the map (as do any updates)
As a non developer, exploring how to think about lots of AI use cases, this really made me think - thanks.
Huh. If your prophecy comes to pass, GitHub itself would be best placed to adapt. It would probably look something like 'use AI agents to monitor forks of a repo, and forks-of-forks (etc) and analyse code changes, commit messages, and PRs to understand the idea behind changes'.
Could that be the foundation of a meaningfully useful premium service? Further accelerate agentic development by having GitHub's agent expose existing changes in the ecosystem to coding agents. Track the development of variants, which might gradually become 'dominant' as the power of natural (ish!) selection kicks in (literal survival of the fittest code), or monitor 'speciation' as codebases evolve in different directions to fit different niches.
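One crude way to operationalize "dominant" variants: rank forks by fitness signals the API already returns, like stars and recency of pushes (`stargazers_count` and `pushed_at` are real fields on GitHub fork objects; the weighting below is invented, purely illustrative):

```python
from datetime import datetime, timezone

def fitness(fork: dict, now=None) -> float:
    """Crude 'survival of the fittest' score: popularity minus staleness.

    The 0.1-per-day penalty is an arbitrary placeholder, not a validated metric.
    """
    now = now or datetime.now(timezone.utc)
    pushed = datetime.fromisoformat(fork["pushed_at"].replace("Z", "+00:00"))
    days_stale = max((now - pushed).days, 0)
    return fork.get("stargazers_count", 0) - 0.1 * days_stale

def dominant_variants(forks, top_n=5):
    """Return the forks most likely to be emerging as the dominant lineage."""
    return sorted(forks, key=fitness, reverse=True)[:top_n]
```

Speciation would show up as the opposite signal: several variants staying fit while their diffs diverge.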
Yeah I agree GitHub would probably feel like the natural place something like this would live. I don’t know much about their corporate strategy, but from my experience in big tech and from what it looks like on the outside with them trying to compete to be the agentic development solution and giving up on “social coding”… it’s probably something that will need to find traction outside and then get acquired rather than grown internally.
I wonder how possible it is to crawl GitHub and watch for forks to build this from the outside…maybe you’d need people to opt in to watch their public forks and grow from there?
you can just fork things
This hits. Bullseye.
Things really are changing fast all of a sudden, aren’t they.