I saw it happening from well before GPT-1, which is why I tried to warn the public for years. The only one-on-one meeting I ever had with Obama as President, I used not to promote Tesla or SpaceX, but to encourage AI regulation.
REGULATION
-
Direct Experience and Independent Research Find Deeper Truth
By
–
Your direct experience, people you talk to in the subject area & independent research will get you much closer to the truth
-
Who Owns Songs Created by Artificial Intelligence?
By
–
Who Owns a Song Created by #AI? https://nyti.ms/3MFlXn4
-
French Pension Reform Debate Work Hours Policy
By
–
Well, he could also have said that we should go back to 42-hour weeks 🙂 #Retraites
-
Responsible Technology Use Requires Guardrails and Precedent
By
–
Most folks are mischaracterizing the argument as being about #AGI when it's about the responsible use of technology (like any other). More guardrails are needed now than just leaving it to the goodwill of humanity, and there is precedent for that. We need to move fast!
-
Responsible Technology Use Requires Guardrails Beyond Good Intentions
By
–
Agree that part was absurd! I think most folks are mischaracterizing the argument as being about #AGI when it's about the responsible use of technology (like any other). More guardrails are needed now than just leaving it to the goodwill of humanity, and there is precedent for that.
-
Machine Unlearning: Enabling ML Models to Comply with Data Regulations
By
–
The field of machine unlearning, though still nascent, addresses exactly this problem. It could be useful in allowing ML models to satisfy data-control regulations. Doing it well is still a significant challenge, though.
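One family of exact-unlearning approaches (in the spirit of sharded training, e.g. the SISA line of work) can be sketched roughly as follows. This is a toy illustration with a hypothetical stand-in "model" (a shard's model is just the mean of its points); the class and method names are illustrative, not from any real library:

```python
# Toy sketch of shard-based exact unlearning (hypothetical names).
# Training data is split into disjoint shards, each with its own model;
# deleting a point only requires retraining the shard that contained it.

class ShardedModel:
    def __init__(self, data, n_shards=2):
        # Partition the training data into disjoint shards by index.
        self.shards = [data[i::n_shards] for i in range(n_shards)]
        self.models = [self._train(s) for s in self.shards]

    def _train(self, shard):
        # Stand-in for real training: the shard "model" is its mean.
        return sum(shard) / len(shard) if shard else 0.0

    def unlearn(self, point):
        # Remove the point and retrain only the affected shard;
        # the other shards and their models are untouched.
        for i, shard in enumerate(self.shards):
            if point in shard:
                shard.remove(point)
                self.models[i] = self._train(shard)
                return True
        return False  # point was not in the training data

    def predict(self):
        # Ensemble prediction: average the shard models.
        return sum(self.models) / len(self.models)

m = ShardedModel([1.0, 2.0, 3.0, 4.0], n_shards=2)
m.unlearn(3.0)  # retrains one shard; 3.0 no longer influences the model
```

The point of the sharding is that deletion cost scales with one shard, not the whole training set, while still giving an exact guarantee that the deleted point has no influence on the retrained component.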
-
European Legal Compliance for Information Sources and AI
By
–
Are books, encyclopedias, language, or talking to a human who gives us false information, by extension, contrary to European law?
-
Max Tegmark on AGI Safety and AI Development Moratorium
By
–
Here's my conversation with Max Tegmark (@tegmark), his 3rd time on the podcast. We discuss AGI, AI safety, nuclear war & the open letter (he co-led) calling for the halting of further development of large AI systems for 6 months. This was fascinating! https://youtube.com/watch?v=VcVfceTsD0A …
-
Current AI Systems Safe, Future Iterations Need Preparation
By
–
This is a good point; there's definitely a lot that can be studied on today's systems for years. On the other hand, current systems, as they are, are not dangerous. Only one of their next iterations could be dangerous, and we can prepare for it only if we have its predecessor.