Ouch. But to be fair, this could be similar to DeepSeek mistakenly identifying itself as a ChatGPT model — likely a result of "poisoned" training data scraped from the internet. In other words, it may simply come down to insufficient training-data filtering and system prompt design.
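The data-filtering point can be sketched concretely. Below is a minimal, hypothetical contamination filter: it scans corpus documents for phrases that would teach a model another model's identity and drops them before training. The pattern list and function names are illustrative assumptions, not any lab's actual pipeline.

```python
import re

# Illustrative patterns only: phrases suggesting another model's
# identity has leaked into the training corpus.
CONTAMINATION_PATTERNS = [
    re.compile(r"\bI am ChatGPT\b", re.IGNORECASE),
    re.compile(r"\bas an AI (?:language )?model developed by OpenAI\b", re.IGNORECASE),
    re.compile(r"\btrained by OpenAI\b", re.IGNORECASE),
]

def is_contaminated(doc: str) -> bool:
    """Return True if the document contains a model-identity phrase."""
    return any(p.search(doc) for p in CONTAMINATION_PATTERNS)

def filter_corpus(docs):
    """Drop documents that could teach the model a wrong self-identity."""
    return [d for d in docs if not is_contaminated(d)]
```

A real pipeline would need far broader pattern coverage (and likely classifier-based filtering), plus a system prompt that states the model's actual identity; this sketch only shows why naive web scrapes propagate the misidentification.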
Training Data Contamination: Why AI Models Misidentify Themselves