Bard still seems confused about whether it was or wasn't trained using private data from Gmail (Google says it wasn't). In reality it probably has no idea what it was trained on, since that information was not in the training data, so it's just making guesses.
REGULATION
-
Black Box AI Systems: The Reproducibility and Transparency Crisis
By
–
Without knowing how these systems are built, there is no reproducibility. You can't test or develop mitigations, predict harms, or understand when and where they should not be deployed or trusted. The tools are black boxes.
-
Model Safety: Mitigation Without Full Release, Transparency Needed
By
–
There are many ways to mitigate harms without having to publicly release the entire model. There are many papers on auditing, datasheets, transparency, etc. With GPT-3 we knew the training data. With GPT-4 we don't. Without that, we're all looking at shadows in Plato's cave.
-
Lack of Transparency in AI Model Training Data
By
–
There is a real problem here. Scientists and researchers like me have no way to know what Bard, GPT-4, or Sydney are trained on. Companies refuse to say. This matters, because training data is part of the core foundation on which models are built. Science relies on transparency.
-
Google’s Bard Error Claims Face Credibility Questions
By
–
To be clear, I'm taking their claim at face value for now. But I find it funny that the only way to emphasize that it isn't trained on Gmail data is to state that it can make mistakes. A lose-lose situation, especially given the history: https://theverge.com/2023/2/8/23590864/google-ai-chatbot-bard-mistake-error-exoplanet-demo
-
Google Confirms No Private Data Used in Training
By
–
So Google has responded to say that no private data was used in training. Good to have this confirmed on the record. (Maybe it’s Barb to its friends) https://x.com/GoogleWorkspace/status/1638312370534600705
-
Concerns about Bard’s training data including Gmail
By
–
Umm, anyone a little concerned that Bard is saying its training dataset includes… Gmail? I'm assuming that's flat out wrong, otherwise Google is crossing some serious legal boundaries.
-
GPT-4 Errors in Medical Notes: Nuance’s Safety Concerns
By
–
“[GPT-4] makes mistakes and it hallucinates and it omits things,” says Microsoft-owned @NuanceInc which summarises medical notes. “[So] we’re picking applications…where the technology comes to bear in a way that..if it makes a mistake or two, not going to hurt anybody.” ???!!!
-
Real-time drone surveillance system can track violence in public
By
–
Real-time #drone surveillance system can track violence in public. Would it make us safer? #AI #technology #EthicalAI #techforgood #MachineLearning #WomenInSTEM
— Helen Yu (@YuHelenYu) 21 March 2023
-
Generative AI in Healthcare Requires Thoughtful Governance Approach
By
–
#generativeai is transformative, but a thoughtful and deliberate approach is needed to drive impact in healthcare. #healthcare #ai #patientexperience Agree with Grace’s POV: technology is progressing fast, but that is no excuse for skipping the governance… https://lnkd.in/gPbu8qFD