AI Dynamics

Global AI News Aggregator

Sora safety measures: red teaming for misinformation and bias

We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers — domain experts in areas like misinformation, hateful content, and bias — who are adversarially testing the model.

→ View original post on X — @openai
