AI Dynamics

Global AI News Aggregator

PromptBench: Evaluating LLM Robustness Against Adversarial Prompts

PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts. Paper page: https://huggingface.co/papers/2306.04528
The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness …
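The excerpt above concerns evaluating how LLM outputs degrade when input prompts are adversarially perturbed. As a rough illustration of the idea (this is not PromptBench's actual API; `perturb_prompt` and `robustness_drop` are hypothetical helpers), one kind of character-level attack swaps adjacent characters in a prompt, and robustness can then be summarized as the accuracy gap between clean and perturbed prompts:

```python
import random


def perturb_prompt(prompt: str, rate: float = 0.1, seed: int = 0) -> str:
    """Toy character-level perturbation: randomly swap adjacent characters.

    `rate` controls roughly what fraction of characters are disturbed.
    This is only a sketch of one attack family; real benchmarks also use
    word-, sentence-, and semantic-level perturbations.
    """
    rng = random.Random(seed)
    chars = list(prompt)
    n_swaps = max(1, int(len(chars) * rate))
    for _ in range(n_swaps):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def robustness_drop(clean_accuracy: float, adversarial_accuracy: float) -> float:
    """Performance drop under attack: larger means less robust."""
    return clean_accuracy - adversarial_accuracy


# Example: perturb a prompt and report a (made-up) accuracy gap.
original = "Classify the sentiment of the review."
attacked = perturb_prompt(original, rate=0.1, seed=0)
print(attacked)
print(robustness_drop(0.90, 0.72))
```

Because the swaps preserve the multiset of characters, the perturbed prompt stays superficially similar to the original, which is what makes such attacks hard to filter out.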

→ View the original post on X by @_akhaliq
