AI Safety Researchers Leaving OpenAI: Motivation Analysis
That AI safety researchers are leaving OpenAI because they believe safety is not getting enough attention and resources inside the company suggests they think they'll be more influential outside. I don't fully understand their motives, so let's break it down: 1) If they stay at OpenAI:
-
OpenAI Dissolves Superalignment Team Tackling AI Dangers
My latest for @WIRED: how OpenAI's superalignment team, designed to tackle the dangers of supersmart AI, has been disbanded:
-
The Future: AI as Cake, Humanity as Raisins
The future is a raisin cake, where AI is the cake and people are the raisins.
-
AI as Final Copernican Revolution: Intelligence Not Special
AI will be the final Copernican revolution, showing that not even our intelligence is special.
-
Future of Autonomous Agents Livestream
my first livestream on @X – over 1200 joined!
— Yohei (@yoheinakajima) May 16, 2024
"Future of Autonomous Agents"
recording is available here: https://t.co/y9FBhIkke6
-
Energy Must Become Too Cheap to Meter Before Intelligence Can
notice that we still meter energy, so a nice side effect of @sama's stated worldview is that he -must- get energy too cheap to meter (TCTM) first before intelligence can be TCTM. second order implication is also stunning – as energy becomes too cheap to meter, metering also becomes -
Self-Awareness: Essential Criterion for Human-Level Intelligence
Yeah, I think self-awareness is required… without some kind of introspection and self-thought it's hard to call it human-level intelligence
-
AGI Builders Have Internal Definitions of Artificial General Intelligence
I think people who are building AGI have their own internal definitions.
-
Focus on Building AGI Rather Than Defining It
People should simply work on building AGI and not "delve" into its definitions.
-
Reframing AI Competition Beyond Military Rhetoric
I don't like this "we are in a war" sentiment. It's also a dangerous one; it shifts the Overton window. How about: we are competing, but ultimately we share some values?