10 reasons to worry about generative #AI https://bit.ly/3lC3L2j
REGULATION
-
10 Reasons to Worry About Generative AI
By
–
-
International AI Governance Framework: IAEA Model for Advanced AI
By
–
Something like an IAEA for advanced AI is worth considering, and the shape of the tech may make it feasible: https://openai.com/blog/governance-of-superintelligence (and to make this harder to willfully misinterpret: it's important that any such regulation not constrain AI below a high capability threshold) -
International Oversight Organization for Superintelligence Governance
By
–
Initial ideas for governance of superintelligence, including forming an international oversight organization for future AI systems much more capable than any today:
-
OpenAI World Tour: Building and Policy Engagement Across Continents
By
–
Had a great first week of the OpenAI world tour in Toronto, DC, Rio, Lagos, and Lisbon. Fun to see what people are building and get (lots of) feature requests, and even fun to talk to policymakers! Madrid, Warsaw, Paris, London, and Munich this week.
-
Bureaucracies Cannot Distinguish Good Alignment Research from Bad
By
–
I don't think that does it, because the bureaucracies doing the funding wouldn't know how to distinguish good alignment papers from bad alignment papers. It's possible that some progress could be made on AI interpretability this way; but I don't think that's enough.
-
Harrison Ford Defends the Importance of Science for Humanity's Future
By
–
Extraordinary interview of Harrison Ford by @LaurentDelahous on France 2.
— Rafik Smati (@RafikSmati) May 21, 2023
"Our future depends on science."
"Science is not an opinion."
"Denigrating science does not serve our survival."
One word: BRAVO pic.twitter.com/uqDT34s8Z1
-
Big Tech Cuts AI Ethics Staff, Threatening System Safety
By
–
AI ethics is under threat as big tech companies cut staff and teams working on this crucial issue. How will this affect the safety and trustworthiness of AI systems? Read this article by @FT to find out: https://ft.com/content/26372287-6fb3-457b-9e9c-f722027f36b3 #AI #Ethics #Safety #Trust #aiethics -
ChatGPT-4 Five Times Smarter Than ChatGPT-3, Tech Leaders Call for Pause
By
–
ChatGPT-4 Is 5X Smarter Than ChatGPT-3: Tech Icons Launch Petition to Pause
#AI #AIio #BigData #ML #NLU #Futureofwork @gp_pulipaka @stratorob @PetiotEric @EvanKirstel @Fgraillot @HaroldSinnott @HeinzVHoenen @helene_wpli http://ow.ly/bwQt30svr0p -
Regulations Should Mandate Openness in AI Infrastructure
By
–
If anything, the regulations should mandate openness and transparency in this underlying infrastructure. This VentureBeat article covers the discussion: https://venturebeat-com.cdn.ampproject.org/c/s/venturebeat.com/ai/meta-and-google-news-adds-fuel-to-the-open-source-ai-fire/amp/ -
AI Regulation Risks: Future Control by Few Amid Lack of Understanding
By
–
Maybe there is the right intent, but the risk of the future being closed and controlled by a few, aided by regulations drafted by folks (who don't understand AI anyway), is very evident!