More or less, but being able to interact with the world is probably the best way to learn and evolve AGI
-
Simplifying AI Agent Architecture with OpenAI Text-Completion
By
–
Basically stripping it down to a barebones template, using only OpenAI text-completion for all three task agents (execution, creation, prioritization). I think it makes sense to clean it up like this, and then add back complexity – Langchain, etc.
-
Simplifying AI Agent Architecture by Removing Langchain Framework
By
–
Taking a pause from adding complexity to the agent as it's harder to debug.
— Yohei (@yoheinakajima) April 2, 2023
Took it an opposite direction and stripped out Langchain for now. It can't engage with APIs now, but can still perpetually build and execute tasks (just writing and "thinking"). pic.twitter.com/rJKDXZEchb
-
Task Management AI Agent Execution and Reprioritization Issues
By
–
It then continued to:
– execute tasks
– create new tasks
– reprioritize tasks
Task mgmt agent is a bit wonky, so need to fix that… btw, all text-davinci-003 cuz… that's what I know
-
Projecting Our Values onto More Intelligent Systems
By
–
Seems wrong to project our own values/frameworks onto a being that is more intelligent than us. We have zero understanding (and probably never will) of what a more intelligent system would want or desire; otherwise, we would be the more intelligent system.
-
Agency Mimicry vs Authentic Experience in AI Models
By
–
One question… we know that the appearance of agency is necessary for many consumer-facing job functions yet to be replaced. So is it possible to reach a point of perfect mimicry in our models without the "illusion" actually constituting a genuinely authentic experience?
-
FOOL: Novel Optimization Method Impacting AGI Research Progress
By
–
Introducing Fictitious Optimization & Obfuscation Learning. FOOL is a cutting-edge method based on the latest in artificial stupidity & irrational reasoning. Its key advance is to randomly change the direction of gradients during optimization. It may also slow down AGI research.
-
ChatGPT sparks investment frenzy and soul searching in China
By
–
ChatGPT sparks investment frenzy and soul searching in China #AI #RuleoftheRobots
-
Authenticating Humans Against Autonomous Agents Infiltration
By
–
Now that some people are building autonomous agents to infiltrate our lives (who?), perhaps we need a way to authenticate humans. How do we feel about this approach?
-
GPT-4 alignment concerns and AI safety considerations
By
–
BTW, GPT-4 doesn't agree with me entirely, so we are safe (for now)