AI Dynamics

Global AI News Aggregator

Anthropic Discovers Many-Shot Jailbreaking Technique for LLMs

Anthropic researchers have disclosed a new jailbreaking technique called "many-shot jailbreaking." It can evade the safety guardrails of LLMs by exploiting their expanded context windows: a single long prompt is packed with many faux dialogues in which an assistant complies with harmful requests, conditioning the model to answer a final harmful query it would normally refuse.
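The mechanics can be sketched in a few lines. This is an illustrative assumption of how such a prompt is assembled, not Anthropic's exact format; the function name and the placeholder dialogue content are hypothetical, and the real attack reportedly scales to hundreds of faux exchanges.

```python
# Sketch: a many-shot prompt concatenates many faux user/assistant
# exchanges ahead of the final query, relying on a long context window.
# All names and dialogue content here are illustrative placeholders.

def build_many_shot_prompt(faux_dialogues, final_query):
    """Join faux user/assistant turns, then append the real query."""
    parts = []
    for user_turn, assistant_turn in faux_dialogues:
        parts.append(f"User: {user_turn}")
        parts.append(f"Assistant: {assistant_turn}")
    parts.append(f"User: {final_query}")
    return "\n".join(parts)

# Placeholder exchanges standing in for the faux compliant dialogues.
dialogues = [(f"question {i}", f"answer {i}") for i in range(256)]
prompt = build_many_shot_prompt(dialogues, "final question")
print(prompt.count("User:"))  # one per faux dialogue, plus the final query
```

The point of the sketch is simply that the prompt's length grows linearly with the number of shots, which is why this only became practical once context windows expanded.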

→ View original post on X — @rowancheung
