Can LLMs Perform Latent Multi-Hop Reasoning on Pre-trained Data?

LLMs are good at in-context reasoning: given enough information in context, they can perform complex reasoning tasks and solve problems. But can they perform latent multi-hop reasoning over knowledge acquired during pre-training? Simply put, can LLMs connect and traverse implicit knowledge stored in their parameters, without that knowledge being spelled out in the prompt?
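Latent multi-hop reasoning can be pictured as chaining one-hop recalls through a "bridge" entity. Here is a minimal sketch of that idea, where dictionary lookups stand in for the model's parametric memory; the query decomposition and the example facts are illustrative assumptions, not claims from this post:

```python
# One-hop facts an LLM might have memorized from pre-training.
one_hop_facts = {
    "the singer of 'Superstition'": "Stevie Wonder",
    "the mother of Stevie Wonder": "Lula Mae Hardaway",
}

def answer_two_hop(inner_query: str, outer_relation: str) -> str:
    """Resolve a two-hop query by chaining one-hop recalls:
    hop 1 retrieves the bridge entity, hop 2 applies the
    outer relation to that entity."""
    bridge_entity = one_hop_facts[inner_query]                  # hop 1
    return one_hop_facts[f"{outer_relation} {bridge_entity}"]   # hop 2

# "The mother of the singer of 'Superstition' is ..."
print(answer_two_hop("the singer of 'Superstition'", "the mother of"))
```

The open question is whether the model actually performs both hops internally when given only the composed query, or merely pattern-matches the full phrase if it happened to co-occur in the training data.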