a transition like this is mostly good, and can happen somewhat fast—the transition from the pre-smartphone world to post-smartphone world is a recent example. but it’ll be tempting to go super quickly, which is frightening—society needs time to adapt to something so big.
REGULATION
-
Early AI Tools Release: Balancing Empowerment and Serious Challenges
we think showing these tools to the world early, while still somewhat broken, is critical if we are going to have sufficient input and repeated efforts to get it right. the level of individual empowerment coming is wonderful, but not without serious challenges.
-
Generational accountability: understanding future AI ethics impacts
i wish that all generations would treat previous generations with indulgence. humanity is deeply imperfect. our grandparents did horrible things; our grandchildren will understand that we did horrible things we don’t yet understand.
-
Assembling Expert Panel on Generative AI in Healthcare for SXSW
I'm putting together a stellar panel discussion on #generativeAI in healthcare during @sxsw to explore the risks and opportunities, in front of an industry + fed + gov + tech audience. Who should I invite on the panel? #sxsw2023 #ChatGPT #ai
-
Bank of France obtains GEEIS-IA Inclusive label
The @banquedefrance obtains the GEEIS-IA Inclusive label https://actuia.com/actualite/la-banque-de-france-obtient-le-label-geeis-ia-inclusive/
… #AI #artificialintelligence
-
Balancing AI Development Through Iteration and Societal Input
this is going to take continual iteration–and lots and lots of societal input–to get right. to find the right balance, we will likely overcorrect several times, and find new edges in the technology. we appreciate the patience and good faith as we get to a better place!
-
AI System Behavior: Reducing Bias, Customization, and Public Input
our current thoughts on hard questions about how AI systems should behave: 1) less biased defaults, 2) lots of user customization within very broad bounds, 3) public input on bounds and defaults
-
RLHF Training Reduces but Doesn’t Eliminate Racial Discrimination in Admissions
Finally, we develop a benchmark testing for racial discrimination in LM decision-making in student course admissions. In our control condition (blue), we find more RLHF training produces model outputs that approach demographic parity but still discriminate against Black students.
-
Understanding AI Models Before Critical Applications
Wow, this is just remarkable. We need to understand these models a lot better before we give them control of anything mission-critical.
-
Karnataka Government Partners on AI for Agricultural Welfare
We are proud to have partnered with the @Govt_Karnataka Department of #Agriculture towards realising our common objective of improving the welfare of farmers across the state. We will explore opportunities for the responsible use of AI to boost agriculture systems in the state.