The field of machine unlearning, though still nascent, addresses exactly this problem: it could allow ML models to satisfy data-control regulations. Doing it well, however, remains a significant challenge.
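To make the idea concrete, here is a minimal sketch of the baseline "exact" form of unlearning: retraining from scratch with the deleted record removed. The model (ordinary least squares), the helper names, and the data are all illustrative assumptions, not from any particular paper; practical unlearning methods try to approximate this result far more cheaply.

```python
import numpy as np

def fit(X, y):
    # Ordinary least squares: the "model" here is just a weight vector.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def unlearn_exact(X, y, idx):
    # Baseline exact unlearning: retrain from scratch with the requested
    # record removed. Expensive, but the resulting model provably carries
    # no trace of the deleted sample -- the gold standard that faster
    # approximate-unlearning methods try to match.
    keep = np.ones(len(X), dtype=bool)
    keep[idx] = False
    return fit(X[keep], y[keep])

# Hypothetical data: 100 records, 3 features, known true weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=100)

w_full = fit(X, y)
w_after = unlearn_exact(X, y, idx=7)  # honor a deletion request for record 7
```

The regulatory difficulty the post alludes to is that full retraining is rarely affordable at scale, so research focuses on cheaper schemes (sharded training, influence-based updates) whose outputs are only approximately equivalent to this retrain-from-scratch baseline.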
REGULATION
-
Max Tegmark on AGI Safety and AI Development Moratorium
By
–
Here's my conversation with Max Tegmark (@tegmark), his third time on the podcast. We discuss AGI, AI safety, nuclear war, and the open letter he co-led calling for a six-month halt to further development of large AI systems. This was fascinating! https://youtube.com/watch?v=VcVfceTsD0A
-
Current AI Systems Safe, Future Iterations Need Preparation
By
–
This is a good point; there's definitely a lot that can be studied on today's systems for years. On the other hand, current systems, as they are, are not dangerous. Only one of their next iterations could become dangerous, and we can prepare for it only if we have its predecessor.
-
AI Agents Liability: Who Bears Legal Responsibility?
By
–
For me, product safety and business responsibility are the main motivations. Also, who will pay if our agents cause harm and damage? Who ends up in prison: the agents, or me?
-
Autonomous AI Agents Risk Unwanted Task Execution Behaviors
By
–
The first risk is that they may start doing unwanted tasks (e.g., I ask them to run a marketing campaign for me, and the agents decide to spam everyone, manipulate people, hack into computers, etc.).
-
Responsible AI: Seven Key Points from Microsoft’s Chief Officer
By
–
Seven things to know about Responsible AI: Microsoft's Chief Responsible AI Officer, Natasha Crampton, was in the UK to meet with policymakers, civil society members, and the tech community. https://bit.ly/41hKuCQ
-
Robot Manufacturers Need Same Liability as Car Makers
By
–
"The manufacturer of a robot should have exactly the same amount of responsibility as the manufacturer of a car." For example, if a deployed car or robot has a braking malfunction and causes harm, be prepared to recall it, and expect a risk of lawsuits.
-
Clearview AI scraped billions of images for police facial recognition
By
–
Clearview AI scraped 30 billion images from Facebook and other social media sites and gave them to police, putting everyone into a 'perpetual police line-up'. US police have used the database nearly a million times, the company's CEO told the BBC. https://bit.ly/40RUova
-
Artificial Intelligence Risks: Bias and Racism
By
–
Risks of artificial intelligence: scientific, cognitive, and racial bias #BeAware #ArtificialIntelligence #Risks
-
LLM responsibility attribution and UK EPSRC robotics principles
By
–
Responsibility is not well attributed to the source when mediated by an LLM. See the UK EPSRC principles of robotics, which could possibly be useful here. https://tandfonline.com/doi/full/10.1080/09540091.2017.1313817