could ever possibly know. It is at least plausible that human level AGI might initially run in a box instead of an entire data center. Some still hope for quantum magic in the neurons; I think it more likely that they are actually kind of crappy computational elements.
AGI
-
Detecting Major AI Breakthroughs Before They Happen
By
–
a thing about research i didn't get before openai: frequently, before a big idea gets figured out, multiple teams can sort of detect it on radar through the fog. you get an idea of where it's going to be, and its rough shape, far before anyone actually lays eyes on it.
-
Inspiration from Ajeya Cotra’s Sandwiching Concept in Research
By
–
This paper was heavily inspired by prior work, especially Ajeya Cotra's 'sandwiching' concept:
-
Scalable Oversight: Supervising AI Systems Beyond Human Capabilities
By
–
To ensure that AI systems remain safe as they start to exceed human capabilities, we’ll need to develop techniques for scalable oversight: the problem of supervising systems’ behavior without assuming that the overseer understands the task better than the system being trained.
-
How to Make AI Safe: Key Solutions and Safeguards
By
–
how could you make it safe? with a great answer to that, we'd definitely be open.
-
AGI focus: unlocking massive value for startup ecosystem
By
–
we will remain very focused on AGI, but we think there will be a _gigantic_ amount of value unlocked for the world along the way, and we want to enable startups to go after it.
-
The Future of AI Turned Out Differently Than Expected
By
–
The future turned out differently than expected. // #AI #deeplearning #turingtest #machinelearning