Auto-Regressive LLMs Limited Reasoning Cannot Be Fixed by Scaling
By
–
Anyone who thinks Auto-Regressive LLMs are getting close to human-level AI, or merely need to be scaled up to get there, *must* read this. AR-LLMs have very limited reasoning and planning abilities. This will not be fixed by making them bigger and training them on more data.
-
Tech Company Makes AI Look Scarier Than Necessary
By
–
They really made this thing look as frightening as possible
-
OpenAI Announces New Head of Roofing Role
By
–
Thrilled to start my new position as Head of Roofing @OpenAI! We will get to AGI, one shingle at a time.
-
AI Regulation Risk: Delaying Development Threatens Humanity
By
–
We must prevent AI risk alarmists from capturing the regulatory discussion. Artificially delayed or stopped AI development is an existential risk to humanity!
-
Semantic Knowledge and Language: Challenging the Propositional Picture
By
–
A new paper by @Jake_Browning00 and me that just appeared in Artificial Intelligence.
It discusses the (in)validity of the "propositional picture of semantic knowledge" according to which all knowledge is expressible in language. "Cognitive scientists and AI researchers now…" https://x.com/Jake_Browning00/status/1715079913978429914
-
Sakana AI presents nature-inspired intelligence at NTT R&D Forum
By
–
We’re excited and honoured to be invited to give a talk at NTT R&D Forum 2023 about @SakanaAILabs, and discuss our views on nature-inspired intelligence and a new paradigm for foundation models! More info → https://rd.ntt/forum/2023/lecture.html
-
Betting Against LeCun on AI Risk Probability
By
–
I just bet M$10 on 4%->8%; I think LeCun is obviously wrong, but not 25:1 worth of wrong.
-
Commerce Department explores keeping frontier AI models out of Beijing's reach
By
–
“Over the past few months, members of Commerce have met with experts to hash out what controlling frontier models could look like and whether it would be feasible to keep them out of reach of Beijing.”
-
Will AGI/ASI Make Humans Superfluous by NeurIPS 2023?
By
–
Is anyone going to #NeurIPS2023, or will we already have AGI/ASI by then, thus rendering humans superfluous?
-
Balancing immediate and existential AI risk concerns
By
–
Interesting, thanks for sharing. I think one issue is that many people who care about immediate risks also worry about existential ones, just not as much as the letter sort of implied.