AI Dynamics

Global AI News Aggregator

Model Safety: Mitigation Without Full Release, Transparency Needed

There are many ways to mitigate harms without publicly releasing the entire model; there are many papers on auditing, datasheets, transparency, etc. With GPT-3 we knew the training data. With GPT-4 we don't. Without that, we're all looking at shadows in Plato's cave.

→ View original post on X — @katecrawford
