A #NIS2 Special! Within the ever-changing #Compliance landscape, understanding the implications for businesses and verticals is critical! Original article: http://bit.ly/DeepDiveNIS2Compliance
As exemplified by the NIS2 Directive – a key piece of European Union #legislation
REGULATION
-
NIS2 Directive Compliance Implications for Businesses and Sectors
-
Shreya Leads AI Ethics Policy Design for Underserved Communities
In 2023, Shreya is part of a multidisciplinary cohort of women making important contributions to this critical space. Shreya leads key conversations around policy design on AI ethics with our stakeholders and partners to develop AI products for underserved communities.
-
Ethical Review Process for AI in Medicine Emphasized
Scholars including HAI faculty affiliates @MichelleM_Mello and @drnigam underscore the importance of an ethical review process of AI in medicine in a commentary on @JAMA_current
. @StanfordHP @StanfordMed -
Tech Companies’ AI Ethics Gap: Policy vs Practice Analysis
New policy brief: Tech companies often “talk the talk” of AI ethics without fully “walking the walk.” Our empirical investigation into AI ethics on the ground highlights the stark gap between company policy and practice in this field. @sannasideup https://hai.stanford.edu/policy-brief-walking-walk-ai-ethics-technology-companies
-
AI Executive Order: Workforce Gaps Challenge Implementation Progress
Workforce gaps remain one of the biggest challenges in implementing the @POTUS AI executive order. Among the requirements, workforce policy generated the most to-do items, according to a new tracker by @stanfordhai, @StanfordCRFM, and RegLab researchers.
-
Standards and Benchmarks for AI Safety and Innovation
It connects the Safety & Responsibility crowd with the Model/Data Innovation crowd, and tries to establish standards and benchmarks that both groups can agree are good.
Think of it as establishing a standard that you can subscribe to if it benefits your cause.
-
Commercializing Superior LLM Models: Safety Certification Strategy
You've created a superior Llama/Mistral-derivative model (like @teknium often does).
How can you convince the world to use it (and pay you)? Step 1: You need a third party to attest that this model is safe and responsible.
The Purple Llama project starts to bridge this gap!
-
No Single Company Should Control AI Development
No single company should own and control our AI.
-
ANITI and Ekitia publish results of their citizen survey on AI
[#Article]
@ANITI_Toulouse and @Ekitia_ publish the results of their citizen survey on #AI https://actuia.com/actualite/aniti-et-ekitia-publient-les-resultats-de-leur-enquete-citoyenne-sur-lia/ #artificialintelligence
-
Sakana AI joins Alliance for open AI development approach
Sakana AI is proud to be a founding member of the AI Alliance, alongside the University of Tokyo, Sony, and others in Japan. As we’ve seen, the real ‘danger’ of AI is for it to be developed and owned by a single company. An open approach makes much more sense. https://thealliance.ai