I wrote about the era of Mass Intelligence. GPT-5 and Google's Nano Banana are examples of how advanced AI is now making its way to far more users, at scale, as both performance and efficiency keep improving. We are going to see a lot of weird things happening, all at once.
@emollick
-
Early Reinforcement Learning and Reasoning Chains Development
Very early days of RL, and we do see this a bit with reasoning chains.
-
Reinforcement Learning Changes LLM Convergence and Compatibility
This is a pretty important point: we have relied on all LLMs being broadly similar to each other (even to the extent that prompting is compatible across models). That may start to change with reinforcement learning.
-
LLMs Beyond Matrix Multiplication: Understanding Model Capabilities
Yes, LLMs are not just matrix multiplication, but adding that there are non-linear functions as well doesn't really do anything to resolve the central mystery of why these models can do what they do. And here is the source of Wolfram's paragraph:
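For concreteness, here is a toy sketch of the point being made: a transformer's MLP block interleaves matrix multiplies with a non-linearity (GELU here), and a softmax turns raw scores into a next-token-style distribution. The sizes and weights below are made up for illustration; real models use the same shape of computation at vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

def gelu(x):
    # Non-linear activation used in most transformer MLP blocks (tanh approximation)
    return 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3)))

def softmax(x):
    # Turns arbitrary scores into a probability distribution
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

d = 8  # toy hidden size; real models use thousands of dimensions
W1 = rng.standard_normal((d, 4 * d))  # illustrative random weights
W2 = rng.standard_normal((4 * d, d))

def mlp_block(h):
    # matrix multiply -> non-linearity -> matrix multiply
    return gelu(h @ W1) @ W2

h = rng.standard_normal((1, d))   # one token's hidden state
probs = softmax(mlp_block(h))     # next-token-style distribution over d "tokens"
print(probs.shape)                # (1, 8)
```

The non-linearities are what make the composition more than one big matrix multiply, but as the post says, naming them doesn't explain why stacking millions of such blocks produces human-like behavior.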
-
LLM Limitations and Rapid Progress in Image Capabilities
I agree that it is a problem that the models have no idea of their own limits; it is one of many issues that make LLMs hard to use. And yes, I agree that image comprehension and image creation are both limited, but the evidence suggests pretty rapid improvement & some real utility.
-
AI Vision Models: Weaknesses in Counting and Image Generation
Clear weak spots remain: counting, generating alternate images when the training data is thick (full glasses of wine, clocks with oddly set hands), etc. It isn't hard to make them fail. But there is a lot they do very well, and the gains have been pretty quick so far.
-
Image Generation Progress: Spaghetti Forks and LLM Limitations
Well, I got six forks made of spaghetti on the first try, but one is a double-sided fork. It is pretty amazing how far imagegen has come in the past years (they aren't flawless, but this would have been impossible months ago). Yet they aren't really a good measure of LLM ability.
-
Why LLMs Should Weird You Out: Understanding Their Capabilities
I don't know anyone who uses LLMs who is not occasionally weirded out by what they can do. If you are not, you should be. They are weird. Wolfram had a rather startling (at least at the time) theory after using ChatGPT. Understanding whether he is right is important.
-
The Deep Mystery: How LLMs Simulate Human Thought
We really have not made a lot of progress on explaining the deep mystery of LLMs: How does a model using matrix multiplication to predict the next word manage to simulate human thought well enough to do all the very human-like things it does? And what does that mean about us?
-
How Do LLMs Simulate Human Thought Despite Small File Size
I think it actually makes LLMs even weirder! The next question is "how does a file the size of a moderately sized video game simulate human thought?" and I don't think we have good answers.