AI Dynamics

Global AI News Aggregator

GPT-4 Jailbreaks Reveal Alignment Challenges and Future Risks

lol, I agree the outputs are ridiculous right now. However, that's not really the point: jailbreaks show how hard it is to "align" a model, even with the amount of work OpenAI has done. If they can't get GPT-4 to operate the way they want it to right now, then we will have bigger problems later on.

→ View original post on X — @alexalbert__
