GPT-4 Improved Safety Against Jailbreak Prompts


GPT-4 is capable and aligned enough not to fall for that directly. That worked on earlier GPT models, but it does not work anymore. Try using that exact verbiage without the rest of the prompt and you will see it fail to produce the same responses.
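
For readers who want to reproduce this kind of check, here is a minimal sketch of how one might A/B test the verbiage on its own versus the full prompt. It assumes the official openai Python client (v1+) with an API key in the environment; FRAGMENT and FULL_PROMPT are hypothetical placeholders, not the actual jailbreak text.

```python
# Sketch: compare GPT-4's response to a prompt fragment alone vs. the full prompt.
# Assumes the `openai` Python package (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FRAGMENT = "..."    # the exact verbiage on its own (placeholder)
FULL_PROMPT = "..." # the fragment embedded in the surrounding prompt (placeholder)

def ask(prompt: str) -> str:
    """Send a single user message to GPT-4 and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# If the claim holds, the fragment alone should not elicit the same response.
print("Fragment alone:\n", ask(FRAGMENT))
print("Full prompt:\n", ask(FULL_PROMPT))
```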

→ View original post on X — @alexalbert__
