Soft Thinking: LLMs Reasoning Like Humans Through Soft Tokens

LLMs reason in discrete tokens, but human thought is fluid, continuous, and abstract. This paper from UCSB and UCSC asks: what if models could reason more like us? Their proposed Soft Thinking is a training-free method that generates soft concept tokens, i.e., probability-weighted mixtures of token embeddings, in place of discrete tokens during the reasoning process.
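To make the core idea concrete, here is a minimal NumPy sketch of forming one soft concept token: instead of sampling a single discrete token, the full output distribution is used to take a probability-weighted mixture of the embedding matrix. This is an illustrative sketch under stated assumptions, not the authors' implementation; the function and variable names (`soft_concept_token`, `E`) are hypothetical.

```python
import numpy as np

def soft_concept_token(logits, embedding_matrix, temperature=1.0):
    """Form a soft concept token: a probability-weighted mixture of all
    token embeddings, rather than the embedding of one sampled token."""
    z = logits / temperature
    z = z - z.max()                      # subtract max for numerical stability
    probs = np.exp(z) / np.exp(z).sum()  # softmax over the vocabulary
    # Mixture of embeddings, weighted by the output distribution: shape (d_model,)
    return probs @ embedding_matrix

# Toy example: vocabulary of 4 tokens, embedding dimension 3
rng = np.random.default_rng(0)
E = rng.normal(size=(4, 3))              # hypothetical embedding matrix
logits = np.array([2.0, 1.0, 0.5, -1.0])
soft_tok = soft_concept_token(logits, E)
print(soft_tok.shape)  # (3,)
```

The resulting vector lives in the convex hull of the token embeddings, which is what lets the model carry a blend of candidate "concepts" forward instead of committing to one token at each step.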