LaPha: AI Agents Think Exponentially Better in Poincaré Space

How can we give AI agents exponentially more room to think and solve complex problems? Researchers from the Shanghai Academy of AI for Science, CMU, and other institutions unveil LaPha, a new method that trains AlphaZero-like LLM agents in a "Poincaré latent space." It leverages negative