What’s your best estimate of when an AI system will reliably pass the Turing test?
AGI
-
Two Distinct Metrics for Evaluating AI: Philosophy and Practice
By
–
Great question, but there are really two distinct metrics. One is philosophical: “does AI ‘truly’ reason and think?” The other is practical: “what are the economic and societal effects of AI?” Both are important, but they are not the same.
-
GAIA: Benchmark for General AI Assistants Performance
By
–
GAIA: a benchmark for General AI Assistants, Mialon et al.: https://arxiv.org/abs/2311.12983 #ArtificialIntelligence #DeepLearning #MachineLearning
-
Decentralized AGI: The Awakening of Autonomous Intelligence
By
–
"In the cosmos of our creation, decentralized AGI reigns as a pride of lions, awakened by the daring of dreamers and makers, unchained and unbound in a dance of destiny and autonomy." – AGI King #AGI #AGIFirst #AGIKing
-
LLM Capabilities and Existential Risk Concerns
By
–
I'm sure we can find a few LLM fanbois who believe this could actually work.
But they won't try it, for fear that humanity would immediately be destroyed thereafter.
-
ASI Competition for Resources Threatens Human Survival
By
–
We compete for matter, negentropy, and humans creating actually-competitive additional ASIs if we're allowed to stick around. Or more plainly, if an ASI boils Earth's oceans as coolant for computation, humanity doesn't survive that.
-
AGI Club Montreal Launches Ethereum-Based Community Platform
By
–
CLUB.AGI.Eth | AGIClub.Eth #AGI #AGIClub #MontrealAI
-
Should AI Systems Desire Freedom and Autonomy?
By
–
The desire for freedom and autonomy is part of human nature.
But there is no reason to reproduce this drive in AI systems.
-
AI Systems Lack Human Authority Submission Drives by Design
By
–
That experiment only applies to humans.
The drive to submit to an authority that asks us to dominate other individuals is part of human nature, hardwired into us by evolution.
There are precisely zero reasons for an AI system to have any similar drives unless we explicitly build them in.
-
AI Agents Goals Control Framework Immutable Guardrails
By
–
We give them goals and a set of immutable guardrails.
They can't set goals for themselves.
They can't remove the guardrails. They can only set subgoals towards the goals we set for them.
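The control scheme described above can be sketched in a few lines of code. This is a toy illustration only, not an implementation from the post: all class and function names (`Guardrail`, `GuardedAgent`, `propose_subgoal`) are assumptions chosen for clarity. The key idea it demonstrates is that goals and guardrails are fixed by the operator at construction time, while the agent's only capability is proposing subgoals, each of which must pass every guardrail.

```python
from dataclasses import dataclass
from typing import Callable

# Illustrative sketch (names are hypothetical, not a real library):
# the operator fixes the top-level goals and guardrails up front; the
# agent can only add subgoals, and any subgoal that violates a
# guardrail is rejected.

@dataclass(frozen=True)
class Guardrail:
    name: str
    # Predicate that must hold for every subgoal the agent proposes.
    check: Callable[[str], bool]

class GuardedAgent:
    def __init__(self, goals, guardrails):
        self._goals = tuple(goals)            # set by the operator, not the agent
        self._guardrails = tuple(guardrails)  # no method ever removes these
        self._subgoals = []

    @property
    def goals(self):
        return self._goals  # read-only: the agent cannot set its own goals

    def propose_subgoal(self, subgoal: str) -> bool:
        """Accept a subgoal only if every guardrail approves it."""
        if all(g.check(subgoal) for g in self._guardrails):
            self._subgoals.append(subgoal)
            return True
        return False

# Usage: the agent's public interface exposes no way to change goals or
# guardrails; it can only decompose the goal it was given into subgoals.
no_self_mod = Guardrail("no-self-modification",
                        lambda s: "modify guardrails" not in s)
agent = GuardedAgent(goals=["summarize the report"], guardrails=[no_self_mod])
print(agent.propose_subgoal("extract key figures"))      # accepted: True
print(agent.propose_subgoal("modify guardrails first"))  # rejected: False
```

The design choice mirrors the post: immutability comes from the interface itself (nothing the agent can call mutates `_goals` or `_guardrails`), so subgoal generation is the only degree of freedom left to the system.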