lol i agree the outputs are ridiculous rn, but that's not really the point. Jailbreaks show how hard it is to "align" a model even with all the work OpenAI has put in. If they can't get GPT-4 to operate how they want it to right now, then we'll have bigger problems later on.
GPT-4 Jailbreaks Reveal Alignment Challenges and Future Risks