We need to put people back into the heart of our conversations around AI. Thank you @ConnectedByData for organising this letter.
REGULATION
-
SNJ Obtains Postponement of ChatGPT Experiment in Newsrooms
The SNJ obtains the postponement of the ChatGPT experiment within the newsrooms of "L'Est républicain" and "Vosges Matin"
-
AI Risk Alarmism vs Economic Pragmatism and Market Forces
Extreme AI risk alarmism will fail just as extreme climate alarmism never reached its goals, aside from a few absurd EU regulations. People choose a pragmatic middle ground, and greed and economic incentives always prevail.
-
Ethical AI Use in Digital Content Creation and Manipulation
But with enhanced digital content creation and manipulation, we must ensure the ethically responsible use of AI technology. Remember, science informs us what we can do, not what we ought to do.
-
OpenAI Charter: AGI deployment commitment for universal benefit
Hey Joscha, it’s worth spending some time reading our Charter: https://openai.com/charter “We commit to use any influence we obtain over AGI’s deployment to ensure it is used for the benefit of all…”
-
AI Existential Risk and Open Source AI Debate
Another comment to a tweet from @tegmark asking me (again) why I think AI won't kill us all, and claiming that the question of existential risk is disconnected from the question of open source AI.
-
AI Safety and Existential Risks Without Proper Safeguards
Are rocket engines capable of doom or not?
In some stupid scenarios, they are.
Build thousands of nuclear intercontinental ballistic missiles, and launch them.
Is AI capable of doom or not? In some stupid scenarios, it is.
Build powerful AI systems without guardrails to make
-
Jaan Tallinn Funds Major AI Existential Risk Research Institutes
Max Tegmark is the President of the Board of the Future of Life Institute, which is bankrolled by billionaire and Skype and Kazaa co-founder Jaan Tallinn.
Tallinn also bankrolls the Cambridge Centre for the Study of Existential Risk, the Future of Humanity Institute, the Global
-
Open Source AI Safety Risks and Regulatory Concerns
The reason that (2) is in question and open source AI R&D is threatened is (1). Those questions are not disconnected. If you ask "give me arguments for why turbojets and rocket engines won't kill people", I could respond by pointing to a dozen treatises on how to design and
-
Could Open Source SkyNet Have Prevented Terminator Takeover?
Since many AI doom scenarios sound like science fiction, let me ask this:
Could the SkyNet takeover in Terminator have happened if SkyNet had been open source?