"Can I trust LLM AI to tell me the truth?" is such an interesting question. Short answer: no, but it varies depending on the context. Expecting it to tell the truth based on its weird opaque blob of matrices derived from its original training data is very risky indeed.