NEW PODCAST! Want to understand AI with deeper context? We've launched a podcast series on how AI is trained to interpret the world. Each episode has a research theme, covering the social, legal, cultural, environmental, and political impacts of generative AI. https://knowingmachines.org/podcast
REGULATION
-
New podcast series exploring AI training and its societal impacts
-
Data Collection and Ad Targeting: Privacy Protection Explained
Whatever information is collected is jealously guarded and not sold to anyone. It's used internally to target ads, but advertisers don't receive any user information (unless the user chooses to share it).
You really have no idea what you are talking about.
-
Stanford analyzes copyright risks in foundation models
Stanford scholars analyzed U.S. case law to identify the potential risks of foundation models developed using copyrighted content, and discussed legal and technical strategies to help reduce the risk of copyright violations. Read the policy brief: https://stanford.io/3szn2oP
-
Balancing Fair AI Regulation Against Extremist Viewpoints
I agree. Some regulation is fair and necessary. It’s just the extremist view I worry about.
-
Biden and Xi to Pledge Landmark Agreement Banning AI in Autonomous Weapons
A very promising first step toward regulating AI in autonomous weapons. This would be a big deal if it materializes this week… “In a landmark agreement set to be announced at Wednesday’s much-awaited bilateral meeting, Presidents Joe Biden and Xi Jinping are poised to pledge a ban
-
West Divided: Europe’s Regulation-First vs North America’s Deploy-First AI Strategy
Discussions in the West contrast two philosophies: regulation before innovation, preferred in Europe, versus a deploy-first-then-adjust approach favored in North America.
-
Auxane proposes balanced approach combining pre-regulation frameworks and standards
Auxane suggests a blend of both: starting with pre-regulation frameworks, then adapting with certifications and standards as required.
-
Responsible AI and AI Governance: Definitions and Framework
Auxane defines 'Responsible AI' as AI that conforms to cultural, regulatory, social, and environmental norms, while 'AI Governance' is the actionable guide for achieving it.
-
AI Accountability: Responsibility Through Practice, Not Legal Liability
Because AI is neither an individual nor a legal entity, it cannot be held liable. The focus should instead be on adopting responsible AI practices and decisions, involving an analysis of rights to understand Responsible AI, which is often culture-dependent.
-
Privacy Data Usage Allegations in AI Technology Context
Your uninformed prejudice won't either.
Tell us exactly what "privacy" you think was "stolen" and "sold".