The human touch: ‘Artificial General Intelligence’ is next phase of AI https://c4isrnet.com/cyber/2022/11/11/the-human-touch-artificial-general-intelligence-is-next-phase-of-ai/
… @C4ISRNET
AGI
-
Artificial General Intelligence: The Next Phase of AI Evolution
By
–
-
When We Realize We’re Just Stochastic Parrots
By
–
What happens when we realize we were just stochastic parrots all along?
-
Increased proliferation risk estimate and fast takeoff concerns
By
–
I have increased my estimate of the proliferation risk, which does indirectly increase the risk of fast takeoff, but my constant factor for the danger is still quite low.
-
AGI Parameter Count vs Brain Synapses: Current Models Scale
By
–
A common view is that human-level AGI will require a parameter count on the order of the brain’s 100 trillion synapses. The large language models and image generators are only about 1/1000 of that, but they already contain more information than a single human could ever possibly know. It is at least plausible that human-level AGI might initially run in a box instead of an entire data center. Some still hope for quantum magic in the neurons; I think it more likely that they are actually kind of crappy computational elements.
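The 1/1000 figure above can be sanity-checked with simple arithmetic. This sketch assumes ~100 trillion synapses for the brain (from the post) and ~100 billion parameters as a rough size for current large models; the model size is an illustrative assumption, not from the source.

```python
# Back-of-envelope check of the ~1/1000 ratio mentioned above.
# Assumed figures: ~100 trillion brain synapses (from the post) and
# ~100 billion model parameters (an assumption for illustration).
brain_synapses = 100e12   # ~1e14 synapses
model_params = 100e9      # ~1e11 parameters (assumed)
ratio = model_params / brain_synapses
print(f"model/brain parameter ratio: {ratio}")  # 0.001, i.e. about 1/1000
```

Under these assumed figures, current models sit roughly three orders of magnitude below the brain's synapse count.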
-
Detecting Major AI Breakthroughs Before They Happen
By
–
A thing about research I didn't get before OpenAI: frequently, before a big idea gets figured out, multiple teams can sort of detect it on radar through the fog. You get an idea of where it's going to be and its rough shape far before anyone actually lays eyes on it.
-
Inspiration from Ajeya Cotra’s Sandwiching Concept in Research
By
–
This paper was heavily inspired by prior work, especially Ajeya Cotra's 'sandwiching' concept:
-
Scalable Oversight: Supervising AI Systems Beyond Human Capabilities
By
–
To ensure that AI systems remain safe as they start to exceed human capabilities, we’ll need to develop techniques for scalable oversight: the problem of supervising systems’ behavior without assuming that the overseer understands the task better than the system being trained.
-
How to Make AI Safe: Key Solutions and Safeguards
By
–
How could you make it safe? With a great answer to that, we'd definitely be open