On top of this: super-intelligent AI can help solve extremism and stupidity – but I doubt that extremism and stupidity can solve super-intelligent AI. Choose your priorities!
AGI
-
Which AI-Related Threat Poses Greater Risk to Humanity?
By
–
When you look at the current state of the world, which of these two poses a greater existential threat to humanity?
-
Why Current Auto-Regressive LLMs Will Remain Fundamentally Limited
By
–
One thing we know is that if future AI systems are built on the same blueprint as current Auto-Regressive LLMs, they may become highly knowledgeable but they will still be dumb.
They will still hallucinate, they will still be difficult to control, and they will still merely -
Catastrophic Event Tonight: Survivors Face Connectivity Crisis
By
–
How will we wake up tomorrow – or as soon as the survivors manage to connect clandestinely – when we discover what happened this evening and during the night…
-
Scaling LLM Prompts for Complex Professional Knowledge Integration
By
–
I agree. But wouldn’t this just expand the scope? You will want to solve larger problems, and you will need larger prompts. For example: imagine teaching an LLM a new profession, worth five years of study, full of very narrow specializations.
-
OpenAI Launches AGI Preparedness Team Led by Madry
By
–
Preparedness team, led by @aleks_madry, will focus on evaluation of, and protection against, catastrophic risks that might be triggered by AGI-level capability, including cybersecurity, bioweapon threats, persuasion and more. Come join us – https://openai.com/careers/search?c=preparedness … -
Dynamic Memory LLMs Will Obsolete Prompt Engineering
By
–
Now we have LLMs with a fixed-size context. Imagine LLMs with dynamic, expandable long-term memory. You'll align them to yourself through iterative conversations. This will render prompt engineering obsolete. The LLM will anticipate your needs. The challenge will be: if the
-
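The idea above can be made concrete with a minimal sketch. This is a hypothetical illustration, not any real LLM API: a store of past exchanges that grows without a fixed limit, with relevant memories recalled by simple word overlap and prepended to the model's context instead of a hand-engineered prompt. The class and method names are assumptions for illustration only.

```python
# Hypothetical sketch of dynamic, expandable long-term memory for an LLM.
# Not a real API; retrieval here is naive keyword overlap, where a real
# system would use embeddings.

class LongTermMemory:
    """Append-only store of past exchanges; retrieval by word overlap."""

    def __init__(self):
        self.entries = []  # grows without a fixed size limit

    def remember(self, text):
        # Each conversation turn can be stored, aligning the model to the
        # user over time through iterative conversations.
        self.entries.append(text)

    def recall(self, query, top_k=2):
        # Score each stored entry by how many words it shares with the query.
        q = set(query.lower().split())
        scored = sorted(
            self.entries,
            key=lambda e: len(q & set(e.lower().split())),
            reverse=True,
        )
        return scored[:top_k]


memory = LongTermMemory()
memory.remember("User prefers concise answers with code examples")
memory.remember("User is working on a Rust networking project")
memory.remember("User dislikes emoji in replies")

# Instead of prompt engineering, relevant memories are recalled
# automatically and prepended to the context for the next turn.
context = memory.recall("How do I debug my Rust project")
```

In this sketch, the recall step is what would let the model "anticipate your needs": the Rust-related memory surfaces for a Rust-related question without the user restating it.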
New AI Preparedness Team Evaluates AGI Risks Quantitatively
By
–
We are building a new Preparedness team to evaluate, forecast, and protect against the risks of highly-capable AI—from today's models to AGI. Goal: a quantitative, evidence-based methodology, beyond what is accepted as possible:
-
Accelerated Timeline Debate in AI Development Strategy
By
–
Would love to see an accelerated timeline, but might be quite a bit longer if we assume they're going to bide their time as usual.