Not really. LLMs screw up on some questions like no human ever would, e.g.: https://economist.com/by-invitation/2022/06/09/artificial-neural-networks-today-are-not-conscious-according-to-douglas-hofstadter
…
LLMs Make Unique Mistakes Unlike Human Reasoning