LLMs. It will revolutionize everything, not just an aspect of life.
AGI
-
Human Intelligence Approaching Artificial Intelligence Capabilities
By
–
Wow, is this a joke?
— Satya Mallick (@LearnOpenCV) November 7, 2023
Is human intelligence approaching artificial intelligence? https://t.co/fS83ULTek0
-
Defining AGI: Frameworks for Understanding Artificial General Intelligence
By
–
So much talk about AGI, but very little agreement on exactly what it is or how to know when it’s achieved. Hopefully this paper gives frameworks and clarity to the conversations.
-
Beyond Auto-Regressive: Future AI Architecture Requirements
By
–
Ever.
Whatever system will be able to do this will not be an Auto-Regressive LLM. -
Why Hard Tech Breakthroughs Always Hit Walls First
By
–
The phrase "whole career" is particularly accurate. 1. Pick a difficult technology area. Anything: Flying cars, quantum computing, interstellar travel, nuclear fusion, hypersonic flight, AI, whatever.
2. Complain that "current approaches are hitting a wall and will never work." -
Tacit Knowledge: The Unwritten Foundation of Human Animal Learning
By
–
Everything we learn through apprenticeship or experience is essentially not written down.
That's most of human knowledge and all of animal knowledge. -
HAI Vodcast Explores What We Really Want From AI
By
–
ICYMI: In the first of a three-part HAI vodcast, @gewang and @vanessaparli touched on several big ideas, including what we really want from AI. https://t.co/iEQj2WH0OQ pic.twitter.com/zfyMiBDUla
— Stanford HAI (@StanfordHAI) November 6, 2023
-
Human and Animal Knowledge Beyond Written Language
By
–
I have argued that most of human knowledge and essentially all of animal knowledge isn't written down anywhere, nor even expressible in human language.
-
Billionaire Funding for AI Extinction Risk Research Causes Unintended Damage
By
–
There is some "academic money", but fairly small amounts.
Most of the money comes from a few billionaires worried about extinction risks.
Much of it goes to private non-profits, not universities.
They have good intentions but are unwittingly causing a lot of damage.