Yuandong is one of several folks who have been working on planning at FAIR.
He explains the difference in applicability between A* (search for the shortest path in a known graph) and MCTS (search in an exponentially growing tree).
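The A* side of that contrast can be sketched in a few lines. This is a minimal illustration on a made-up toy graph, not Yuandong's code: A* expands a fixed, fully known graph in best-first order using f = g + h, where h is a heuristic estimate of remaining cost.

```python
import heapq

def a_star(graph, start, goal, h):
    """A*: best-first search over a known graph, guided by f = g + h.

    graph: dict mapping node -> list of (neighbor, edge_cost)
    h: heuristic h(node) estimating cost-to-goal; if h never
       overestimates (admissible), the returned cost is optimal.
    """
    frontier = [(h(start), 0, start, [start])]  # (f, g, node, path)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr, path + [nbr]))
    return None

# Tiny illustrative graph, invented for this sketch:
graph = {
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 1), ("D", 5)],
    "C": [("D", 1)],
}
cost, path = a_star(graph, "A", "D", h=lambda n: 0)  # h=0 degrades to Dijkstra
print(cost, path)  # 3 ['A', 'B', 'C', 'D']
```

MCTS, by contrast, is built for settings where the tree is far too large to enumerate up front, so it grows the tree incrementally and estimates node values by sampled rollouts rather than by an explicit per-node heuristic.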
-
A* vs MCTS: Planning Algorithms Explained by FAIR Researcher
-
Yuandong’s Planning Research Approaches at FAIR
Exactly.
Yuandong has been working on various approaches to planning at FAIR.
-
Building and Training World Models for AI Systems
Surely. The question is how to build and train this world model.
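One minimal reading of that question can be sketched in code. The example below is my own illustrative assumption (synthetic linear dynamics, nothing specific to FAIR's models): treat the world model as a one-step dynamics predictor s_{t+1} ≈ f(s_t, a_t) and fit it from logged transitions.

```python
import numpy as np

# Sketch: fit a one-step dynamics ("world") model s_{t+1} ≈ A s_t + B a_t
# from logged (state, action, next_state) transitions by least squares.
# The true dynamics below are made up purely for this illustration.
rng = np.random.default_rng(0)
A_true = np.array([[0.9, 0.1], [0.0, 0.8]])
B_true = np.array([[0.0], [1.0]])

S = rng.normal(size=(500, 2))          # states s_t
U = rng.normal(size=(500, 1))          # actions a_t
S_next = S @ A_true.T + U @ B_true.T   # next states (noise-free for clarity)

# Solve for [A B] jointly: s_{t+1} = [s_t, a_t] @ W, with W = [A.T; B.T]
X = np.hstack([S, U])
W, *_ = np.linalg.lstsq(X, S_next, rcond=None)
A_hat, B_hat = W[:2].T, W[2:].T

print(np.allclose(A_hat, A_true, atol=1e-6))  # recovers the dynamics
```

Real world models are of course learned with far richer function classes and from high-dimensional observations; the linear least-squares fit is just the smallest instance of "train a model that predicts the consequences of actions."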
-
Brian Greene Interview, AGI Debate, and AI Safety at the World Science Festival
of my public interview with Brian Greene at the World Science Festival in NYC a few weeks back.
It is followed by a debate about "AGI" with @SebastienBubeck. It ends with a debate about AI safety with Sébastien and Tristan Harris.
I find it strange that Tristan uses the
-
Questioning AI Progress Claims: Closer Than Believed?
How could I convince you that the answer to (a) is "near, perhaps, but not as near as you might have been led to believe", and that the answer to (b) is "no"?
-
AGI Timeline Predictions: 2015 Optimism Proven Wrong
The self-delusion was to claim in 2015 that AGI was just around the corner and that a non-profit organization was going to reach it alone before anyone else.
That all turned out to be wrong.
And we're still some ways away from human-level AI.
Generation after generation of AI
-
Superhuman AI Safety: Blueprint and Historical Parallels
But then again, if we don't have a blueprint for superhuman AI, discussing how to make them safe is like discussing the sex of angels.
Or rather, like discussing how to make turbojets safe and reliable circa 1920.
Turbojets are unbelievably safe and reliable.
It was a difficult
-
AI Doom and Hype: A Symbiotic Relationship
It's a symbiotic relationship: AI doomers need AI hype in order to be taken seriously and be part of the discourse, and AI hypers need AI doom narratives for AI to look momentous and world-altering.
-
Should AI Weight Known False Evidence in Reasoning?
Do you give 10% weight to a proof that you know is wrong?