PromptBench: Towards Evaluating the Robustness of Large Language Models on Adversarial Prompts
Paper page: https://huggingface.co/papers/2306.04528
… The increasing reliance on Large Language Models (LLMs) across academia and industry necessitates a comprehensive understanding of their robustness …