AI frightens people. Some see it as a threat to jobs; others see it as a danger to humanity. For my part, I see it above all as a lever for progress and prosperity. That is the point of the op-ed I am publishing today in Les Échos: «IA : n'ayez pas peur !» ("AI: don't be afraid!")
REGULATION
-
ChatGPT Privacy Breach Exposes Sensitive User Data to Other Users
By
–
This is a significant privacy breach. People are encouraged to use ChatGPT as a personal tool, which means everything from sensitive work tasks to health questions has been exposed to other users.
-
Poorly Tested LLM Releases Cause Real-World Consequences
By
–
This is what happens when you release a poorly tested #LLM in the hands of real people who now suffer the consequences @OfficeforAI @SciTechgovuk @theCAIDP @GPAI_PMIA
-
Pausing AI Development: Practical Implementation Challenges Explained
By
–
My favourite part of the public discourse about #AI is the idea that we should "pause" it. Like, okay, party's over, everybody go home now! I've never seen anyone explain how that would work in practice.
-
Important Preprint on LLMs and Data Rivers
By
–
Important preprint from Sylvie Delacroix on LLMs and Data Rivers. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4388928
-
Manufacturing Network Security Barriers: Awareness and Complexity
By
–
A6. The biggest barriers to network security are the lack of awareness and expertise and the increasing complexity of manufacturing networks, driven by the adoption of more digital technologies and the interconnectivity of various devices and systems. Manufacturers may prioritize meeting …
-
Bard’s Training Data Transparency Issues and Speculation
By
–
Bard still seems confused about whether or not it was trained using private data from Gmail (Google says it wasn't). In reality it probably has no idea what it was trained on, since that information was not in the training data, so it's just making guesses.
-
Black Box AI Systems: The Reproducibility and Transparency Crisis
By
–
Without knowing how these systems are built, there is no reproducibility. You can't test or develop mitigations, predict harms, or understand when and where they should not be deployed or trusted. The tools are black-boxed.
-
Model Safety: Mitigation Without Full Release, Transparency Needed
By
–
There are a lot of ways to mitigate harms without having to publicly release the entire model. There are many papers on auditing, datasheets, transparency, etc. With GPT-3 we knew the training data. With GPT-4 we don't. Without that, we're all looking at shadows in Plato's cave.
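For concreteness, here is a minimal sketch of the kind of machine-readable datasheet the post alludes to, loosely in the spirit of "Datasheets for Datasets" (Gebru et al., 2021). The `TrainingDataSheet` class and its fields are illustrative assumptions, not any vendor's actual disclosure schema, and the example values are placeholders.

```python
# Minimal, hypothetical sketch of a machine-readable training-data datasheet,
# loosely inspired by "Datasheets for Datasets" (Gebru et al., 2021).
# The class and field names are illustrative assumptions, not any
# vendor's actual disclosure format.
from dataclasses import dataclass, field


@dataclass
class TrainingDataSheet:
    model_name: str
    data_sources: list[str]       # e.g. "Common Crawl snapshot 2023-06"
    collection_period: str        # date range over which data was gathered
    filtering_steps: list[str]    # deduplication, quality filters, etc.
    known_gaps: list[str] = field(default_factory=list)  # documented coverage gaps

    def summary(self) -> str:
        """One-line disclosure suitable for an audit report."""
        return (f"{self.model_name}: {len(self.data_sources)} source(s), "
                f"collected {self.collection_period}, "
                f"{len(self.filtering_steps)} documented filtering step(s)")


# Placeholder example of the kind of disclosure the post argues existed
# for GPT-3 (via its paper) but not for GPT-4.
sheet = TrainingDataSheet(
    model_name="example-llm",
    data_sources=["Common Crawl (filtered)", "WebText2", "Books", "Wikipedia"],
    collection_period="2016-2019",
    filtering_steps=["fuzzy deduplication", "quality classifier"],
)
print(sheet.summary())
```

The point is only that provenance can be disclosed in a structured, auditable form without shipping weights or raw data.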
-
Lack of Transparency in AI Model Training Data
By
–
There is a real problem here. Scientists and researchers like me have no way to know what Bard, GPT-4, or Sydney are trained on. Companies refuse to say. This matters because training data is part of the core foundation on which models are built. Science relies on transparency.