REGULATION
-
HAI researcher discusses AI model evaluation with US policymakers
Last week, HAI Faculty Affiliate @sanmikoyejo headed to DC to meet with @corybooker, @RepDonBeyer, @RepYvetteClarke, @JayObernolte and several other policymakers to discuss his research on the mirage of emergent abilities and model evaluation. (1/3)
-
Funders’ Guide to Preparing Society for Generative AI
Philanthropy & generative AI: @SSIReview offers 10 things funders can do to help the existing field of tech-related nonprofits—and society at large—better prepare. https://ssir.org/articles/entry/10_ways_funders_can_address_generative_ai_now (1/2)
-
IRS Audit Bias Against Black Taxpayers Study Findings
Read about the research: https://hai.stanford.edu/news/irs-disproportionately-audits-black-taxpayers
-
AI Research Reveals IRS Racial Audit Disparities
From research to real-world impact: HAI Faculty Dan Ho’s collaborative study revealing racial disparities in IRS audits led the agency to overhaul how it scrutinizes low-income Americans.
-
EU AI Act: Supporting Grassroots Open Model Innovation
AI has deep roots in Europe, and we are committed to supporting developer and researcher communities across Europe. As the EU enters the home stretch of #AIAct deliberations, we urge the EU to protect and promote grassroots innovation in open models.
-
Stanford researchers advocate coordinated AI safety in biomedicine
How do we ensure AI in biomedicine remains safe and equitable? Stanford researchers say top-down coordination is necessary between regulatory agencies and institutions to develop a comprehensive nationwide plan.
-
FTC jurisdiction over deceptive AI advertising and marketing practices
Deceptive advertising and marketing is within the jurisdiction of the FTC, isn't it? And they recently wrote a blog post about it in relation to "AI".
-
ChatGPT Browsing Updates: robots.txt Support and User Agent Identification
Since the original launch of browsing in May, we have received useful feedback. Updates include following robots.txt and identifying user agents so sites can control how ChatGPT interacts with them.
-
AI companies spreading misinformation while avoiding accountability
Are you telling me that these people can go around saying this dangerous shit, deceiving people and leading them to use these products in harmful ways, while yelling about "existential risk please regulate us but not like that" and face ZERO consequences?
-
Government, AI Scientists Collaborate for Safe, Trustworthy AI
These people are members of governments working on AI and digital directives, and AI scientists and practitioners. Can you tell the difference? There is none. At the @GPAI_PMIA innovation workshop at @CEIMIA_mtl, we work TOGETHER to make AI a force for good: safe and trustworthy.