15 Comments
Habib Kakakhel:

This article indirectly means that knowledge workers who are already skilled in a field will have an edge. Because they can act as judges and deciders due to their deeper understanding of the system. Experience matters a lot now.

Meta Minsky:

Thanks for writing this.

Don't you think that the kind of low-level value judgments you talk about – maintainability, relevance, scope fit of PRs – are the sorts of things an AI will easily automate next?

IMO, the true bottleneck will not be this kind of low-level direction; it will increasingly be global direction: carving out a vision for the project as a whole, situated within the world, and setting high-level strategic priorities.

AI agents will then eagerly turn these into hundreds of low-level decisions and execute them, meaning the bottleneck will only exist at the top.

Literally published a post on this yesterday (open.substack.com/pub/metaminsky/p/optimize-for-directionality-not-skill), would love to hear your thoughts.

Scott Werner:

I actually think we're talking about the same thing, though using different terms and looking at it from different levels.

Definitely agree with you that Directionality and getting human systems to execute the goals are going to be the challenges, but what does it look like to "define worthy goals" in practice? How do you guarantee the AI is staying on task for the worthy goal and hasn't gone slightly off course? If someone is initiating, representing, or bearing risk, won't they want to sign off on the risks they're taking on?

In one of my demos from a few weeks ago, I showed using Aider to make 3 PRs for the same story, where 2 of them passed CI and all three had slightly different implementations. You're going to want to review the PRs and choose the one that takes you furthest toward the goals that you and your organization have decided on, right? That's what I mean by maintainability, relevance, and scope.

Meta Minsky:

I agree, it seems we only have different intuitions about the necessary granularity of control.

I expect AIs to be able to follow larger and larger scale directions coherently, so that you rapidly go from looking at PRs to daily check-ins, to weekly status reports, to quarterly feedback, to more and more total autonomy – within the scope of the initial goals.

Only the initial goals, the global direction itself, will presumably remain a bottleneck, at least for a while.

Scott Werner:

Ah yeah, maybe this will change with future models, but for multi-step activities I've noticed they're very path-dependent. At each step you need to watch for new things that have been introduced in the code, or subtle deviations from the goal, because on future steps those get added to the context and "prompt" and frequently take it off track.

I'm sure it's mentioned elsewhere, but Goldratt talks a lot about the importance of moving quality checks as early as possible. The issue (and core useful feature!) of using LLMs is the hallucinations and non-determinism, so I haven't been able to find any earlier spot to put the quality check than immediately after the LLM output of each smallest step.
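The check-after-each-smallest-step loop described here could be sketched roughly like this; `generate_step` and `passes_check` are hypothetical stand-ins for the LLM call and the quality gate, not anything from the original demo:

```python
# Sketch: validate each LLM step's output before it enters the context,
# so hallucinations and drift don't compound on later steps.

def run_with_step_checks(steps, generate_step, passes_check, max_retries=3):
    """generate_step(step, context) -> str; passes_check(output) -> bool.
    Both are hypothetical stand-ins for the LLM call and the quality gate."""
    context = []
    for step in steps:
        for _attempt in range(max_retries):
            output = generate_step(step, context)
            if passes_check(output):
                # Only vetted output is allowed to feed later steps'
                # context/"prompt", catching deviations at the earliest point.
                context.append(output)
                break
        else:
            raise RuntimeError(f"step {step!r} failed quality check")
    return context
```

The point of the structure is that the gate sits immediately after each generation, rather than once at the end of a multi-step run, matching the Goldratt-style "move quality checks earlier" idea in the comment.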

MachineCoach:

How will we develop judgment, if we don’t do the challenging work of building manually? In software the common advice is to “learn by doing”. How will we learn if AI is “doing all the doing”?

Habib Kakakhel:

I think people with bachelor's, master's, and PhD degrees in the field will have "learnt by doing", or by seeing it done. But your insight is spot on: real practice, real learning by doing, is dying.

Meta Minsky:

Many or even most will not learn it anymore.

Right now, it is still possible to learn, and it will be for a while, until full human redundancy.

Florida:

I really like how you frame the problem but don’t then dive into “ten hacks for making AI work for you!”

Simon Torrance:

We estimate that about 50% of knowledge work will be disrupted (i.e., require transformation), of which 15-20% will be made redundant by AI agents. That's a big shift. More here: www.ai-risk.co

Jonas Braadbaart:

Love this! Food for thought, thanks for sharing!

Graham Rowe:

Great piece. As answers/outputs get cheaper, the value of (and pressure to generate) better questions and better decisions goes up. Big shift in work for all of us.

Graham Rowe:

I also see this happening at the level of the firm. Say you are a founder with a services business and a 10-person team (accounting firm, digital agency, etc.). You automate everything and free up your team. Now what?

Typically the move is to try to invent and go to market with a product built off your boutique service insight (e.g. a super-niche accounting services automation). But for that you need loads of taste and judgement, not the project-level grinding you previously excelled at but which AI now owns. Tricky!

CMar:

Hey, just a heads-up: the MIT paper you cited was just revealed to be fraudulent. https://economics.mit.edu/news/assuring-accurate-research-record
