Speaking as the author of the NYT bestselling book on the danger of superhuman AI: I agree that policy to stop the reckless AI race must be global. An AI does not need to run in an American datacenter to threaten an American life. Hopefully that part can achieve broad support.
Participation in peer review at @NeurIPSConf (or @icmlconf, @iclr_conf, @CVPR, @COLM_conf, etc.) can be considered providing a "service" under U.S. sanctions law. U.S. law generally prohibits providing services to designated sanctioned individuals or entities, including cases where the review process effectively provides a service to a sanctioned institution. Violations can lead to significant fines and compliance overhead; willful violations can carry criminal exposure for organizers and board members. Note that the "informational materials" exemption (Berman Amendment) likely does not apply here, based on legal advice.
Fraudsters don’t have governance committees or budget cycles. They just act. New research by @TheACFE + SAS asks whether organizations can move fast enough to keep up with #deepfakes & other AI-charged #fraud threats. Spoiler: most can’t yet.
Looking at the newly announced PCAST, a few things stand out:
— This group is heavily concentrated in AI and adjacent emerging tech: crypto, semiconductor exports, data centers, etc. It's also just 2 women and 11 men.
— Most members have clear financial interests in the policy outcomes they'll be advising on. Many are also very friendly to the administration and opposed to regulation.
— Members' companies are among the most prominent tech firms likely to be targeted by future regulations.
— By comparison, PCASTs under Obama and Biden were generally filled with academics, research scientists, university leaders, physicians, and others alongside some corporate execs.
Josh Wingrove (@josh_wingrove): Trump announces a "Council of Advisors on Science and Technology" led by Sacks and Kratsios, with members including Ellison, Huang, Zuckerberg, Dell, Brin. — https://nitter.net/josh_wingrove/status/2036795154992926900#m
The shift from AI as a tool to AI as an actor creates massive governance challenges, including cascading errors and unpredictable autonomous behavior. When we stop giving step-by-step instructions and start giving goals, we lose the ability to ensure the path taken is the one we…
Arthur Mensch proposes a royalty on AI actors to compensate creators and secure model training in Europe. Understandable intention. But you don't become competitive by making AI more expensive. The real risk is encouraging dominant players to cut off our access.
Machine superintelligence would extinguish Democrats, Republicans, British, Chinese, scientists, cab drivers, and polar bears. It is a sign of hope that all of those now seem to be saying they'd prefer otherwise (except the polar bears).
AI will help discover new science, such as cures for diseases, which is perhaps the most important way to increase quality of life long-term. AI will also present new threats to society that we have to address. No company can sufficiently mitigate these on its own; we will need a society-wide response to things like novel bio threats, a massive and fast change to the economy, extremely capable models causing complex emergent effects across society, and more.

These are the areas the OpenAI Foundation will initially focus on, and in my opinion they are some of the most important ones for us to get right. The Foundation will spend at least $1 billion over the next year.

@woj_zaremba, co-founder of OpenAI, will transition to Head of AI Resilience. I believe that shifting how the world thinks about safety to include a Resilience-style approach is critical, and I am extremely grateful to Wojciech for taking on this role. Wojciech has been my cofounder for the last decade; anyone who knows him will understand what I mean when I say he is one of a kind. He has a lot of ideas about how we build a new kind of AI safety.

@JacobTref is joining as Head of Life Sciences and Curing Diseases. @annaadeola, our VP of Global Impact, will transition to Head of AI for Civil Society and Philanthropy. @robert_kaiden is joining as Chief Financial Officer. @jeffarnold is joining as Director of Operations.