Yeah, it generates new tasks at the end of each task execution based on the result, after deduping against the remaining task list + most relevant past tasks.
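The loop described above can be sketched in a few lines. This is a minimal illustration, not the project's actual code: `llm` stands in for a real completion call (e.g. to text-davinci-003), and the prompts are hypothetical.

```python
from collections import deque

def llm(prompt: str) -> str:
    """Placeholder for a real completion call (e.g. text-davinci-003)."""
    return "stub result"

def create_new_tasks(result: str, task: str, remaining: deque) -> list[str]:
    """Ask the model for follow-up tasks, then dedupe against the queue."""
    raw = llm(f"Result: {result}\nTask: {task}\nPropose follow-up tasks, one per line.")
    proposed = [t.strip() for t in raw.splitlines() if t.strip()]
    # Dedupe: drop any proposal already sitting in the remaining task list.
    return [t for t in proposed if t not in remaining]

def run(objective: str, first_task: str, max_steps: int = 5) -> list[str]:
    tasks = deque([first_task])
    completed = []
    for _ in range(max_steps):
        if not tasks:
            break
        task = tasks.popleft()
        # Execute the current task, then generate new tasks from its result.
        result = llm(f"Objective: {objective}\nTask: {task}")
        completed.append(task)
        tasks.extend(create_new_tasks(result, task, tasks))
    return completed
```

The deduplication step is what keeps the agent from re-queuing work it already has pending.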
AGI
-
AI System Cannot Comprehend Finality Concept
My system does not comprehend finality pic.twitter.com/MjyELEoUTi
— Yohei (@yoheinakajima) April 3, 2023
-
Balancing AI Context Window Size and Summarization Strategies
It’s a balance of figuring out how much past context you want to pull in. If you pull zero, it stays within limit easily but no prev context. If you pull 5, you get decent context but it can get large. So then you can consider adding a summary step. pic.twitter.com/1jN3J8DMQI
— Yohei (@yoheinakajima) April 3, 2023
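The tradeoff above can be made concrete in a small helper. This is a sketch under assumptions: `summarize` is a hypothetical callable (e.g. one extra LLM call), and the ~4-characters-per-token estimate is a rough heuristic, not an exact tokenizer.

```python
def build_context(past_tasks: list[str], n: int, token_limit: int, summarize) -> str:
    """Pull the last n past tasks; if the result is too long, summarize it.

    Pulling n=0 always fits but carries no prior context; larger n gives
    better context at the risk of blowing the window, hence the summary step.
    """
    recent = past_tasks[-n:] if n else []
    context = "\n".join(recent)
    # Crude token estimate: ~4 characters per token.
    if len(context) / 4 > token_limit:
        context = summarize(context)
    return context
```

With `n=0` the function returns an empty string; with larger `n` the summarization fallback keeps the prompt within the model's window.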
-
Autonomous AI Agents Tackle AI Alignment Problem
I asked my autonomous AI to solve AI alignment.
Fascinating to watch.
Agents:
– Task execution: text-davinci-003
– Task creation: text-davinci-003
– Task reprioritization: text-davinci-003 pic.twitter.com/rmOnKIgaAz
— Yohei (@yoheinakajima) April 3, 2023
-
Relevant Context Management for Autonomous Agents
The thread broke, but here's a simple example of how "relevant context" (most relevant tasks from past) can be provided to current task. Allowing the autonomous agent to continue generating novel ideas and next steps – that wouldn't fit within a context window.
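One common way to select the "most relevant tasks from past" is embedding similarity. The sketch below assumes each past task has already been embedded (by whatever embedding model you use); `cosine` and `relevant_context` are illustrative names, not the project's API.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def relevant_context(query_vec: list[float], past: list[tuple], k: int = 5) -> list[str]:
    """past: (task_text, embedding) pairs; return the k most similar task texts."""
    ranked = sorted(past, key=lambda p: cosine(query_vec, p[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

Only the top-k matches are injected into the prompt, so the agent keeps a working memory far larger than any single context window.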
-
Balancing Context Length, Speed, and Cost in AI Systems
The decisions you need to make…
– how much context you provide
– where/what/how you summarize
– how you break up knowledge
– etc.
…are a constant balance of context length, model speed and cost, use case, latency, keeping up w latest tools, and more.
-
GPT-5 Safety Testing Strategy Through Controlled Chat Interface
Let’s imagine that GPT-5 is super-intelligent, but works only via a chat interface, its speed is within human abilities to observe in realtime, and OpenAI doesn’t release it to the public but subjects it to rigorous testing. Wouldn’t they learn valuable lessons while staying safe?
-
Karpathy Building JARVIS Follows Nakajima at OpenAI
@karpathy (Building JARVIS @OpenAI) is now following @yoheinakajima
-
Self-Modifying AI Agent Code: Recursive Implications
No, I mean: what if the code of this agent/script is being recursively modified by itself?
-
AutoGPTs Will Organize into Specialized Autonomous Organizations
All of that is just one agent/thread. People coalesce into organizations so they can specialize and parallelize work towards shared goals. Imo this is likely to happen to AutoGPTs and for the same reasons, strung into AutoOrgs, with AutoCEO, AutoCFO, AutoICs, etc.