Regulation essential to curb AI for surveillance, disinformation: rights experts
http://ow.ly/OW4c30sw8oF
REGULATION
-
Regulation Essential to Curb AI Surveillance and Disinformation
-
OpenAI targets superintelligent AI before 2030 amid climate crisis
The two major (and little-reported) pieces of news this week: 1) OpenAI expects superintelligent AI to arrive before 2030. 2) Earth has broken its all-time temperature record since measurements began. The future is catching up with us.
-
Designing AI Oversight Systems for Public Interest
The critical thing now is to design a sensible system, agree on the benchmarks that will actually offer real oversight, and ensure that oversight is tied to delivering AI that works in the interests of everyone. Let's get started right away.
-
Government Agency for AI Model Audit and Oversight
It would almost immediately be accused of capture, and might be tempted to softball the audit process. More robust would be a new government agency of some kind, with a clear mandate to audit every model above certain scale and capability thresholds. 3./
-
AI Industry Must Embrace Third-Party Audits Culture Shift
This would be a big step change, fundamentally at odds with the old-school culture of the tech industry. But it's the right thing to do, and it's time for a culture shift. We in AI should welcome third-party audits. 4./
-
AI Training Audits: Scrutiny for Scale and Capabilities
It's time for meaningful outside scrutiny of the largest AI training runs. The obvious place to start is "Scale & Capabilities Audits" 1./
-
Industry-Funded AI Consortium: Voluntary Standards Approach
There are two ways I see this working. First, an industry-funded consortium that everyone voluntarily signs up to. In some ways this might be the quicker and easier route, but the flaws are also obvious. 2./
-
AI Media Surveillance Tool Detects Healthcare Events Outbreaks
@mukulksachdeva, our Associate ML Scientist, opines on how AI can improve pandemic management: "The AI-powered media surveillance tool scans nationwide newspapers in 11 Indian languages to detect adverse healthcare events and potential outbreak events."
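As a rough illustration of the kind of media scan described above (not the actual tool, whose implementation is not disclosed here), a minimal keyword-based filter over news articles might look like the Python sketch below. The article records and the keyword set are invented for the example:

```python
# Minimal sketch of a keyword-based media-surveillance scan.
# The keyword list and articles are illustrative, not from the real tool.
OUTBREAK_KEYWORDS = {"outbreak", "fever", "hospitalised", "epidemic", "cluster"}

def flag_articles(articles):
    """Return titles of articles whose text mentions any outbreak-related keyword."""
    flagged = []
    for article in articles:
        words = set(article["text"].lower().split())
        if words & OUTBREAK_KEYWORDS:  # any keyword present?
            flagged.append(article["title"])
    return flagged

articles = [
    {"title": "District reports fever cluster",
     "text": "An outbreak of dengue fever was reported in three villages"},
    {"title": "Local sports roundup",
     "text": "The cricket season opened with a close match"},
]
print(flag_articles(articles))  # -> ['District reports fever cluster']
```

A production system scanning 11 languages would of course need per-language tokenisation and much richer event models; this only shows the basic scan-and-flag shape.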
-
AI Impact on Our Lives: Next 3-5 Years Forecast
How will AI change our lives in the next 3-5 years? Via @StatistaCharts v/ @JimHarris #ArtificialIntelligence #MachineLearning #ML #ChatGPT #chatbots #chatgpt4 #chatgpt3 #GenerativeAI #innovation #tech #DataProtection #DataPrivacy #GDPR @jblefevre60 @FrRonconi @CurieuxExplorer
-
Model Safety Evaluation and Jailbreak Robustness Standards
(2/3) If you are interested in … – Defining evaluations for checking whether a model is safe enough to deploy – Detecting and stopping harmful use cases – Training models to say no to harmful requests and to be robust to jailbreak-style vulnerabilities.