AI Today Podcast: AGI, Strong AI, Narrow AI Definitions
In this @Cognilytica #AIToday AI Glossary Series #podcast episode 'Artificial General Intelligence (AGI), Strong AI, Weak AI, Narrow AI' hosts @rschmelzer & @kath0134 share the definitions for AGI/Strong AI and Narrow/Weak AI. Full episode: https://cognilytica.com/2022/11/11/ai-today-podcast-ai-glossary-series-artificial-general-intelligence-agi-strong-ai-weak-ai-narrow-ai/?utm_source=dlvr.it&utm_medium=twitter #tech #AGI #AI
-
Lex Fridman: Top AI Podcast with Major Industry Guests
Lex Fridman: One of the top AI podcasts on YouTube. Almost every big name in AI has been on this podcast. Again, one of my favourites. Check this out @lexfridman
-
When We Realize We’re Just Stochastic Parrots
what happens when we realize we were just stochastic parrots all along?
-
Increased proliferation risk estimate and fast takeoff concerns
I have increased my estimate of the proliferation risk, which does indirectly increase the risk of fast takeoff, but my constant factor for the danger is still quite low.
-
AGI Parameter Count vs Brain Synapses: Human-Level AGI Could Run in a Box, Not a Data Center
A common view is that human-level AGI will require a parameter count on the order of magnitude of the brain’s 100 trillion synapses. The large language models and image generators are only about 1/1000 of that, but they already contain more information than a single human could ever possibly know. It is at least plausible that human-level AGI might initially run in a box instead of an entire data center. Some still hope for quantum magic in the neurons; I think it more likely that they are actually kind of crappy computational elements.
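For a rough sense of the scale gap described above, a back-of-envelope check in Python (both figures are the quote's own round numbers, not measurements):

    # Back-of-envelope comparison using the quote's round numbers.
    brain_synapses = 100e12  # ~100 trillion synapses in a human brain
    model_params = brain_synapses / 1000  # "about 1/1000 of that" -> ~100 billion
    print(f"model parameters: {model_params:.0e}")  # -> 1e+11
    print(f"gap to synapse count: {brain_synapses / model_params:.0f}x")  # -> 1000x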
-
Detecting Major AI Breakthroughs Before They Happen
a thing about research i didn't get before openai: frequently, before a big idea gets figured out, multiple teams can sort of detect it on radar through the fog. you get an idea of where it's going to be and the rough shape far before anyone actually lays eyes on it.
-
Inspiration from Ajeya Cotra’s Sandwiching Concept in Research
This paper was heavily inspired by prior work, especially Ajeya Cotra's 'sandwiching' concept:
-
Scalable Oversight: Supervising AI Systems Beyond Human Capabilities
To ensure that AI systems remain safe as they start to exceed human capabilities, we’ll need to develop techniques for scalable oversight: the problem of supervising systems’ behavior without assuming that the overseer understands the task better than the system being trained.
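To make the sandwiching idea above concrete, a toy sketch in Python (an entirely hypothetical setup with made-up error rates, not from the quoted paper): a non-expert overseer accepts or rejects a stronger model's answers, and we measure how often that verdict matches expert ground truth, with and without model assistance.

    import random
    random.seed(0)

    def overseer_verdict(answer_is_correct, assisted):
        # Hypothetical error rates: unassisted non-experts misjudge 30%
        # of answers; with model-generated critiques, only 10%.
        error_rate = 0.10 if assisted else 0.30
        flipped = random.random() < error_rate
        return answer_is_correct != flipped

    answers = [random.random() < 0.7 for _ in range(10_000)]  # model right ~70% of the time
    for assisted in (False, True):
        agree = sum(overseer_verdict(ok, assisted) == ok for ok in answers)
        print(f"assisted={assisted}: verdict matches ground truth {agree / len(answers):.0%}")

If assistance closes the gap between the non-expert's judgments and the expert's, that is evidence the oversight technique scales to tasks the overseer cannot evaluate alone.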