Yeah, glad we have a player focused on privacy. Just a much harder path to results
REGULATION
-
Partnership on AI publishes foundation models safety guidelines
On This Day 6 years ago: the first general meeting of the Partnership on AI took place in Berlin. PAI funds studies and publishes guidelines on questions of AI ethics and safety.
It just published a set of guidelines for the safe deployment of foundation models. -
Frontier Model Forum Launches AI Safety Initiative with Tech Giants
The Frontier Model Forum is an industry body co-founded with @AnthropicAI, @Google and @Microsoft. We will focus on three key areas within AI safety over the next year:
– Identifying best practices
– Advancing AI safety research
– Facilitating information sharing among companies -
Frontier Model Forum Appoints New Director and Launches AI Safety Fund
Today, we are announcing Chris Meserole as the Executive Director of the Frontier Model Forum, and the creation of a new AI Safety Fund, a $10 million initiative to promote research in the field of AI safety.
-
Scientific Basis for AGI Risk and Regulatory Challenges
(Unlike nuclear power) "…there exists no scientific basis or evidence for how or when AGI will emerge (if ever), leaving us with only a hypothetical risk that has capitulated many of our regulatory efforts today." What we are seeing from current AI leaders on their comparison to…
-
AI Labs Must Take the Nuclear Safety Analogy to Completion
"If AI labs are to consistently invoke exaggerated fears through comparisons to nuclear hazards, then they must be willing to take the analogy to completion. Ironically, if they were to explore the readily available safety mechanisms for nuclear components,…
-
AI’s Role in Society: Generative AI Panel Discussion
Our last panel of the day focuses on envisioning AI’s place in society. Join @RishiBommasani, @erikbryn, @microsoft’s Jaron Lanier, @joon_s_pk, & @MIT’s Ashia Wilson for a wide-ranging discussion at the HAI Fall Conference on new horizons in generative AI: https://stanford.io/46NtzuT -
Governance frameworks for AI agents: lessons from human entities
We already handle this with humans, groups, corporations, and governments.
-
Aligning AI with the Common Good Is Easier Than Aligning Children and Animals
It is much, much easier to align AI with the Common Good than it is to align children and animals.
The reason is that we can't "hack" human nature; we can only modify it through education.
We can "hack" animal nature through selective breeding (which is pretty brutal).
We can -
Superintelligent Sociopaths: Existential Risk Assessment
– The proportion of sociopaths in society is way larger than 1 in a few million.
– If society is robust to sociopaths, why can't it be robust to artificial sociopaths?
– What makes you think that we will actually build super-intelligent sociopaths?
– We have super-intelligent