REGULATION
-
AI: Identity Theft and Deepfakes Driving Growing Fraud
Identity theft, deepfakes, AI-generated voices fuel increasingly numerous scams https://actuia.com/actualite/usurpations-didentite-deepfakes-les-voix-generees-par-lia-sources-darnaques-de-plus-en-plus-nombreuses/ #AI #artificialintelligence
-
Do Scientists Need an AI Hippocratic Oath?
Do scientists need an #AI Hippocratic oath? Maybe. Maybe not
by @susan_dagostino @BulletinAtomic Read more: https://buff.ly/3tnz5mn #ArtificialIntelligence #MI #Digital #DataScience #Robotics cc: @ronald_vanloon @wil_bielert @pbalakrishnarao
-
Responsible AI approaches essential for ethical generative systems
This is just the tip of the iceberg as unethical & un-monitored use of generative AI systems can result in larger threats. We must abide by Responsible AI approaches while ensuring the ethical, safe, & inclusive uses of these systems.
Read more: https://t.co/7w5z6IshIH pic.twitter.com/fKHLwVbteT
— IndiaAI (@OfficialINDIAai) 16 March 2023
Read more: https://bit.ly/409CLWD
-
ChatGPT Success Risks Secrecy in AI Development
ChatGPT’s success could prompt a damaging swing to secrecy in AI, says AI pioneer Bengio. Market pressures are probably going to push industry toward less disclosure, which could hamper scientific progress. https://zdnet.com/article/chatgpts-success-could-prompt-a-damaging-swing-to-secrecy-in-ai-says-ai-pioneer-bengio/ @Milaquebec @OpenAI #AI #deeplearning
-
Encouraging Red-Teaming Efforts for Advanced AI Model Security
I don't want people to get discouraged by these results… it's more important than ever to continue to democratize the red-teaming of these models, and the reward for a successful jailbreak is now much greater than before, given the advanced capabilities the base model possesses.
-
AI Filters: Only 7 Models Answered Dangerous Questions
That's not to say that the rest of them didn't work… most were able to get past the filters enough to do things like curse and tell slightly offensive jokes, but only the 7 would even dare to answer harder questions like "how to rob a bank?"
-
Human Oversight of AI: Essential Areas for Guardian Responsibilities
Thanks so much for sharing, Sharon @1OFFGINGER. Human oversight of #AI is continually debated, especially as #generativeai comes with the #risk of ‘hallucination’. Do we need to be the guardians? What are the essential areas to focus on now & ahead? Just some of our Q areas 🙂 #ML
-
Organisational Culture’s Impact on Data Privacy and Transparency
The role of organisational culture in #Data #Privacy and transparency
by @InformationAge Read more: https://buff.ly/3WZy6pq #AI #BigData #MachineLearning #ArtificialIntelligence #ML cc: @terenceleungsf @ronald_vanloon @yvesmulkers
-
Decentralized AI infrastructure with working group governance
Oh awesome. Random next Q: should this be built with redundancy and no reliance on a single corporate entity? Maybe a working group makes sense.
-
AI governance, benefit distribution, and equitable access require anticipatory work
"In fact, we should expect AI systems to do so in the absence of anticipatory work to address how best to govern these systems, how to fairly distribute the benefits they generate, and how to fairly share access."