Sora safety measures: red teaming for misinformation and bias

We’ll be taking several important safety steps ahead of making Sora available in OpenAI’s products. We are working with red teamers (domain experts in areas like misinformation, hateful content, and bias) who are adversarially testing the model.