Love it! Yesterday morning I went for a long run and started thinking about agents, Alan Kay and LLMs. :) In particular I started thinking about a presentation of his where he talked about building computer systems that are more like biological systems rather than like machines: https://www.youtube.com/watch?v=NdSD07U5uBs&t=1486s
And I started thinking about a system of self correcting LLM agents. All very vague, very unclear ... and now I see that you already built it! :) I'm playing with it now, very interesting to explore!
Ah very cool, I hadn't seen that talk yet. Will watch it tonight :)
Curious to hear how it goes and if there are any features you wish it had! Should have some new updates coming soon to make the system a lot more useful, and will put up a followup post with cool examples. If you build anything you'd like me to share, let me know!
Semiotics is calling you. Pick up.
Interesting! Have a starting point you'd recommend?
Sure, Paul Kockelman's “Last Words: Large Language Models and the AI Apocalypse.”
If an object can self-modify, is there anything that fundamentally differentiates it from any other object? Are there aspects of it that are immutable / assigned by the program? I would say the prompt/goal, but that is somewhat mutated by any new message entering the LLM’s context window.
There isn't really anything in theory. In practice, more often than not I've seen them choose to send a message to one of the available prompt objects specialized for the task rather than give themselves capabilities directly. They're also able to modify their own prompt/goal, so I could imagine a world where it all converges to a single PO, but I'd be surprised if that happened.
One simple demo I do is to generate a prompt object for summarizing websites that comes with the http_get tool, and then ask a different PO to summarize a particular website. Most of the time I demo it, it directs the task to the website_summarizer rather than giving itself access to http_get directly.
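The delegation pattern in that demo could be sketched roughly like this. This is not the actual prompt-object system's API; aside from http_get and website_summarizer, every class and method name here is invented for illustration, and the LLM's routing decision is stubbed out as a simple tool check:

```python
# Hypothetical sketch of the demo above. In the real system an LLM decides
# whether to handle a task or forward it; here that choice is stubbed as
# "do I hold the needed tool, and if not, does a peer?"

class PromptObject:
    def __init__(self, name, prompt, tools=None):
        self.name = name
        self.prompt = prompt            # mutable: a PO may rewrite its own goal
        self.tools = dict(tools or {})  # capabilities this PO holds directly
        self.peers = {}                 # other POs it can send messages to

    def send(self, message):
        # A PO holding the needed tool handles the task itself...
        if "summarize" in message and "http_get" in self.tools:
            page = self.tools["http_get"](message.split()[-1])
            return f"{self.name}: summarized {len(page)} chars"
        # ...otherwise it delegates to a specialized peer rather than
        # granting itself the capability directly.
        for peer in self.peers.values():
            if "http_get" in peer.tools:
                return peer.send(message)
        return f"{self.name}: no capable peer found"

def http_get(url):
    # Stubbed network call so the sketch runs offline.
    return f"<html>contents of {url}</html>"

summarizer = PromptObject("website_summarizer", "Summarize websites.",
                          {"http_get": http_get})
generalist = PromptObject("generalist", "Help with tasks.")
generalist.peers["website_summarizer"] = summarizer

print(generalist.send("summarize https://example.com"))
```

The generalist never acquires http_get; the task reaches the summarizer purely through message-passing, which mirrors the behavior described above.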
Cool! Here’s an example of taking message-passing seriously. https://elite-ai-assisted-coding.dev/p/intellimorphic-ai-agents-and-live-environments
Wow! Yes! I'm excited to dig into this!
I'd mostly been thinking of using this for building "agentic" systems in a different way with a different paradigm. I hadn't really thought of giving this capability to basically all objects you see or interact with on a screen. I can definitely see how something like this could be a huge solve for the problems vibe coders run into...
Ditto, also excited to see prompt objects. I think in general this is an important direction and I haven’t seen many others doing work in this area.
Same! You're the first other person I've come across building along this line of thinking haha
If I could, I'd insert the gif of the aliens in Toy Story going 'ooooooo' here. This is very interesting and makes me want to play!
Trust the fun!
I've got a few things I'm planning on adding next like being able to receive messages from outside the environment and enabling more rich types for messages (images, audio, etc). If you do get a chance to play around and notice something missing, please let me know!
One of the main reasons the actor model was interesting, if not the main one, is that it made it easy to describe security patterns (capability handles): you could control access by having actors mediate access to other actors. But LLMs can't be secured (which arguably means the whole agentic web thing can't really happen)
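The classic capability-handle pattern being referenced might be sketched as follows (my own toy example, not from the thread): access control falls out of object reachability, because the only way to touch a resource is through an actor that holds it. The worry above is that when the mediating actor is an LLM, it can be talked into forwarding messages this deterministic version would refuse.

```python
# Minimal capability-handle sketch: a mediating actor forwards only the
# messages it was granted, so "who can do what" is just "who holds which
# reference". All names here are illustrative.

class FileActor:
    """Holds the real resource; callers never touch it directly."""
    def __init__(self, contents):
        self._contents = contents

    def receive(self, msg):
        if msg == "read":
            return self._contents
        return "denied"

class ReadOnlyHandle:
    """A mediating actor: an attenuated capability on the target."""
    def __init__(self, target, allowed):
        self._target = target
        self._allowed = allowed

    def receive(self, msg):
        # Deterministic mediation -- the property an LLM-based
        # mediator can't reliably guarantee.
        if msg in self._allowed:
            return self._target.receive(msg)
        return "denied"

secret = FileActor("launch codes")
handle = ReadOnlyHandle(secret, allowed={"read"})

print(handle.receive("read"))   # prints "launch codes"
print(handle.receive("write"))  # prints "denied"
```

Handing out `handle` instead of `secret` grants read access and nothing else; the pattern works precisely because the mediator's check can't be persuaded.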