tl;dr: don't believe the mainstream media when they tell you that their AI writers are being checked by humans before going straight to publication https://futurism.com/msn-is-publishing-more-fake-news
REGULATION
-
Mainstream Media AI Writers Lack Adequate Human Fact-Checking
By
–
-
Automation over Augmentation: Risks of Prioritizing HLAI
By
–
5/ While both types of AI can be beneficial, there are currently excess incentives for automation rather than augmentation among technologists, business executives, and policymakers. We must be mindful of the implications of our focus on HLAI.
-
The Future of AI and Its Ethical Implications
By
–
The Future of AI and Its Ethical Implications. #BigData #Analytics #DataScience #AI #MachineLearning #IoT #IIoT #Python #RStats #TensorFlow #JavaScript #ReactJS #CloudComputing #Serverless #DataScientist #Linux #Programming #Coding #100DaysofCode https://geni.us/AI-Ethical
-
AGI Risk and Cybersecurity Threats Within Next Decade
By
–
i agree on being close to dangerously strong AI in the sense of an AI that poses e.g. a huge cybersecurity risk. and i think we could get to real AGI in the next decade, so we have to take the risk of that extremely seriously too.
-
AI and Robotics Transform Workplace: What Policies Are Needed?
By
–
What kind of policies are needed in a world where artificial intelligence has transformed the workplace? Good points here on #AI, robotics, and the future of work. / #futureofwork #IoT #podcast #Robotics #MachineLearning #ML @TheEconomist #technology #edtech #HR #SaturdayVibes pic.twitter.com/oe9TdUWkWv
— Sean Gardner (@2morrowknight) December 3, 2022
-
OpenAI’s Selective AI Release Strategy Prioritizes Safety Considerations
By
–
there are a lot of things we don't release or wait to release for this reason; just because we decided that ChatGPT was safe to release doesn't mean we decide that for everything, or that we will come to the same decision for future systems
-
AI Value Alignment: Users' vs. Creators' Intent Debate
By
–
interesting watching people start to debate whether powerful AI systems should behave in the way users want or their creators intend. the question of whose values we align these systems to will be one of the most important debates society ever has.
-
Democratizing AI Access and Enabling User Customization
By
–
it will take us some time, but we will continue to push to democratize access to this technology, and also to figure out how to make it so that it behaves in the way that individual users want (within some very broad bounds)
-
Questioning Exclusive AI Knowledge and Decision-Making Authority
By
–
i am extremely skeptical of people who think only their in-group should get to know about the current state of the art because of concerns about safety, or that they are the only group capable of making great decisions about such a powerful technology.
-
Iterative Deployment as Safe Path for AI Integration
By
–
iterative deployment is, imo, the only safe path and the only way for people, society, and institutions to have time to update and internalize what this all means.