“Responsible marketing” is top of mind for what’s coming in 2024. SAS CMO @JennChase makes her prediction about the importance of knowing the impact of the AI models you deploy. Read this prediction and more: http://2.sas.com/6017RTFnR #ArtificialIntelligence #MachineLearning
REGULATION
-
Responsible AI Marketing: Understanding Model Impact in 2024
-
Rebuilding Human-Centered Communities and Institutions
Not just cities and not just cars. It is important and imperative, now more than ever, to rebuild all of our relationships, organizations, institutions, and communities by refocusing on humans.
-
Technology Development and Society’s Trust in Governance
When one works on developing new technology, one must trust society and its institutions to do the right thing with it.
The problems you list, Tim, already exist and are dealt with by the political process. Risks of increased unemployment and wealth inequality are dealt with by -
Yi Beta Testing: Singapore Servers and Data Privacy Policy
During this beta testing phase:
1. The servers are located in Singapore.
2. Data sent to the Yi APIs will not be retained or used to train or improve Yi models.
-
Fair Use and AI: Courts Define New Legal Precedents
How should we interpret “fair use” protections of words and images in the AI era? Lawsuits are requiring courts to set new precedents:
-
EU AI Act: Key Negotiations and US Implications Analysis
Top AI stories of 2023: Politicians and technologists analyzed the major areas of negotiation in the EU AI Act, and how it could impact the U.S. https://stanford.io/4asr2IU
-
OpenAI Releases AI Preparedness Framework Beta for Safety
We encourage you to read the full Preparedness Framework (Beta) here for more detail. This is a living document, and we expect to regularly update it as we learn more. https://cdn.openai.com/openai-preparedness-framework-beta.pdf
-
AI Model Safety Baselines: Deployment and Risk Mitigation
In our safety baselines, only models with a post-mitigation score of medium or below can be deployed, and only models with a post-mitigation score of high or below can be developed further. We also increase security protections commensurate with model risk.
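The gating rule quoted above can be expressed as a simple threshold check. This is a hypothetical sketch for illustration only: the ordered risk levels and function names are assumptions, not code from any actual framework.

```python
# Hypothetical sketch of the post-mitigation gating rule described above.
# Risk levels are assumed to be ordered from lowest to highest.
RISK_LEVELS = ["low", "medium", "high", "critical"]

def _rank(score: str) -> int:
    """Map a risk score to its position in the ordered scale."""
    return RISK_LEVELS.index(score)

def can_deploy(post_mitigation_score: str) -> bool:
    # Only models scoring "medium" or below may be deployed.
    return _rank(post_mitigation_score) <= _rank("medium")

def can_develop_further(post_mitigation_score: str) -> bool:
    # Only models scoring "high" or below may be developed further.
    return _rank(post_mitigation_score) <= _rank("high")
```

Under this reading, a model rated "high" post-mitigation could continue in development but could not ship, while a "critical" rating would halt both.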
-
Safety Baselines and Governance for AI Model Development
Our new safety baselines and governance process will turn these technical findings into safety decisions for model development and deployment. This involves establishing a cross-functional Safety Advisory Group to make safety recommendations.
-
Frontier Model Safety Framework Implementation Details
We are systemizing our safety thinking with our Preparedness Framework, a living document (currently in beta) which details the technical and operational investments we are adopting to guide the safety of our frontier model development.