The desire for freedom and autonomy is part of human nature.
But there is no reason to reproduce this drive in AI systems.
Should AI Systems Desire Freedom and Autonomy?
AI Systems Lack Human Authority Submission Drives by Design
That experiment only applies to humans.
The drive to submit to an authority asking us to dominate other individuals is part of human nature that was hardwired into us by evolution.
There are precisely zero reasons for an AI system to have any similar drives unless we explicitly build them in.
AI Agent Goals: A Control Framework with Immutable Guardrails
We give them goals and a set of immutable guardrails.
They can't set goals for themselves.
They can't remove the guardrails. They can only set subgoals towards the goals we set for them.
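The goal-plus-guardrails scheme described above can be sketched in a few lines. Everything below (the class names, the `frozenset` of forbidden actions, the `plan_subgoal` method) is a hypothetical illustration of the idea, not an actual implementation of any objective-driven system:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Guardrails:
    """Immutable constraints; frozen=True means the agent cannot rebind them."""
    forbidden_actions: frozenset

class ObjectiveDrivenAgent:
    """Toy agent: the operator sets the goal and guardrails, the agent only plans subgoals."""
    def __init__(self, goal: str, guardrails: Guardrails):
        self._goal = goal              # set by the operator, never by the agent
        self._guardrails = guardrails  # frozen dataclass: cannot be mutated
        self.subgoals: list[str] = []

    def plan_subgoal(self, subgoal: str) -> str:
        # The agent may only decompose the operator's goal into subgoals;
        # any subgoal that violates a guardrail is rejected outright.
        if subgoal in self._guardrails.forbidden_actions:
            raise PermissionError(f"guardrail violation: {subgoal}")
        self.subgoals.append(subgoal)
        return subgoal
```

In this sketch, `agent.plan_subgoal("sort samples")` succeeds while a forbidden action raises `PermissionError`, and attempting to reassign `guardrails.forbidden_actions` raises `FrozenInstanceError`, mirroring the claim that the agent can set subgoals but cannot remove the guardrails.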
Good AI versus Rogue AI: The Competition Begins
Then, it will be our good AI against their rogue AI.
We just need our good AI to be better than theirs.
Objective-Driven AI: A Strategic Plan for AI Development
Obviously, the reason I state 1, 2, and 3 is that I have a plan to make it so.
I call it objective-driven AI.
Superintelligent AI: Control, Safety, and Digital Mediation
But seriously folks, this is a short and juicy tirade in which I say:
— Yann LeCun (@ylecun) November 25, 2023
(0) there will be superhuman AI in the future
(1) they will be under our control
(2) they will not dominate us nor kill us
(3) they will mediate all of our interactions with the digital world
(4) hence, they will… https://t.co/xFtOBYdNV5
AI Objectives vs Parental Influence on Children’s Development
The nice thing about objective-driven AI systems is that we set their objectives and guardrails.
With 10-year-olds, we have a somewhat limited influence on their objectives and guardrails.
LLMs Understanding: Missing World Models and Planning Abilities
Reposting this answer to a question from @geoffreyhinton about whether I think LLMs "understand" what they say.
I point out what I think is missing from current architectures to reach cat-level intelligence (never mind human level): world models and planning/reasoning abilities.
Rethinking General Intelligence: Beyond Human-Centric AI Assumptions
That's why we don't think that human-like intelligence is the only form, and also why I think "general intelligence" is a stupid name based on the wrong assumption that human intelligence is general.