Am I missing something, or are all these attempts at recognising LLM outputs obviously destined to fail? It's dramatically easier to train an LLM for rewording a text than creating the text in the first place; then add a loss func that incorporates detection avoidance. https://t.co/TO0zIGHoAe
— Jeremy Howard (@jeremyphoward) January 5, 2023
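The objective Howard sketches can be made concrete: train a rewording model on a loss that combines paraphrase quality with a penalty for being flagged by a detector. The sketch below is a toy illustration of that combined loss, not a real implementation; `combined_loss`, its arguments, and the weighting `lam` are all hypothetical names chosen for clarity.

```python
# Toy sketch of the combined objective: paraphrase quality plus a
# detection-avoidance term. All names are illustrative assumptions.

def combined_loss(paraphrase_loss, detector_score, lam=1.0):
    """Total loss = paraphrase quality term + weighted penalty for
    appearing machine-generated. detector_score in [0, 1] is the
    detector's probability that the text is LLM-written."""
    return paraphrase_loss + lam * detector_score

# Two rewordings of equal paraphrase quality: the one the detector
# flags less confidently receives the lower total loss, so training
# is pushed toward detector-evading outputs.
detectable = combined_loss(paraphrase_loss=0.5, detector_score=0.9)
evasive = combined_loss(paraphrase_loss=0.5, detector_score=0.1)
assert evasive < detectable
```

In practice the detector term would come from a frozen copy of the detector being attacked, which is exactly why detection schemes invite this kind of adversarial optimisation.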