i agree we are close to dangerously strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk. and i think we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously too.
REGULATION
-
AI and Robotics Transform Workplace: What Policies Are Needed?
What kind of policies are needed in a world where artificial intelligence has transformed the workplace? Good points here on #AI, robotics, and the future of work. / #futureofwork #IoT #podcast #Robotics #MachineLearning #ML @TheEconomist #technology #edtech #HR #SaturdayVibes pic.twitter.com/oe9TdUWkWv
— Sean Gardner (@2morrowknight) December 3, 2022
-
OpenAI’s selective AI release strategy prioritizes safety considerations
there are a lot of things we don't release or wait to release for this reason; just because we decided that ChatGPT was safe to release doesn't mean we'll decide that for everything, or that we will come to the same decision for future systems
-
AI Value Alignment: Users' vs. Creators' Intent Debate
interesting watching people start to debate whether powerful AI systems should behave in the way users want or their creators intend. the question of whose values we align these systems to will be one of the most important debates society ever has.
-
Democratizing AI Access and Enabling User Customization
it will take us some time, but we will continue to push to democratize access to this technology, and also to figure out how to make it so that it behaves in the way that individual users want (within some very broad bounds)
-
Questioning Exclusive AI Knowledge and Decision-Making Authority
i am extremely skeptical of people who think only their in-group should get to know about the current state of the art because of concerns about safety, or that they are the only group capable of making great decisions about such a powerful technology.
-
Iterative Deployment as Safe Path for AI Integration
iterative deployment is, imo, the only safe path and the only way for people, society, and institutions to have time to update and internalize what this all means.
-
Military and surveillance uses permitted in OpenRAIL-M license
Oh, that's true. I see military uses and surveillance are allowed with OpenRAIL-M. Funny, most of the motivation text for that initiative was rooted in those two things; seems they were dropped?
-
RAIL Licenses: Regulatory Misalignment and Enforcement Concerns
To the people who wrote the RAIL licenses: do you realize you're not on the same page as regulators? What thought went into the licenses in anticipation of them opposing what Europe already has consensus on? What chance is there to enforce non-military uses in court? @Carlos_MFerr
-
European AI Act excludes military and government surveillance uses
Two big motivations for Responsible AI Licenses (RAIL) are 1. military uses of advanced weaponry and 2. government spying on people. Guess which two things are — coincidentally — explicitly excluded from the European AI Act? (I'm open to your interpretations why this is!)