The role of memorization and knowledge is to cache & reuse past cognitive work. It should be leveraged as a way to speed up cognition, not as a *replacement* for cognition.
@fchollet
-
Memorizing Reasoning Traces Cannot Replace Creative Innovation
By
–
Simply retrieving a reasoning trace looks a lot like human reasoning, until it's time to navigate uncharted territory. If you memorized all reasoning traces of humans from 10,000 BC, you could automate their lives but you could not invent modern civilization.
-
JAX Solver for Gyrokinetics Achieves 10x Speedup with CUDA
By
–
The power of JAX https://t.co/qtC3JcVik9
— François Chollet (@fchollet) April 10, 2026

Quoting Eric Volkmann (@e_volkmann): Introducing gyaradax 🐉: A JAX solver for local flux-tube gyrokinetics with custom CUDA kernels for acceleration. This entire code was vibecoded by @ggalletti_ and me in a month. Validated against GKW (CPU-only Fortran code) with 10x speedups. Details and code in the replies. — https://nitter.net/e_volkmann/status/2041853935430881771#m
-
Science needs models balancing predictive power with simplicity
By
–
Science needs a way to process models that are only "mostly correct" in terms of their predictions but are very compressive (a high ratio of predictive power to model complexity). Such models are likely onto something.
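To make "compressive" concrete, one common proxy for that ratio is a two-part description-length score: bits to encode the model plus bits to encode its prediction errors, so a slightly-wrong-but-tiny model can beat an exactly-right-but-huge one. A minimal sketch, assuming made-up data and a BIC-style complexity penalty (neither comes from the original post):

```python
import numpy as np

def mdl_score(residuals, n_params, n_points):
    """Two-part code length (in bits): cost of encoding the prediction
    errors plus a BIC-style cost of encoding the model's parameters."""
    rss = float(np.sum(residuals ** 2))
    fit_bits = 0.5 * n_points * np.log2(rss / n_points + 1e-12)
    model_bits = 0.5 * n_params * np.log2(n_points)
    return fit_bits + model_bits

rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)
y = 2.0 * x + rng.normal(0.0, 0.5, x.size)  # ground truth: a 2-parameter line

# A "mostly correct" line vs. a 13-parameter polynomial that tracks the
# training data more tightly: the simpler model wins on total code length.
for degree in (1, 12):
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    print(degree, round(mdl_score(residuals, degree + 1, x.size), 1))
```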
-
Physics History as Program Synthesis: Kepler and Newton’s Model Search
By
–
We should view the history of physics as a long-running program synthesis task. Kepler and Newton were searching the space of possible symbolic models to find the simplest one that would best satisfy available observations.
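A toy version of that search, against standard orbital data: enumerate small integer exponents (p, q) for candidate laws of the form T^q ∝ a^p and keep the best-fitting, simplest program. The exponent grammar and scoring below are illustrative assumptions, not a reconstruction of what Kepler actually did:

```python
import itertools
import math

# Semi-major axis (AU) and orbital period (years) for a few planets.
observations = [
    ("Mercury", 0.387, 0.241),
    ("Venus",   0.723, 0.615),
    ("Earth",   1.000, 1.000),
    ("Mars",    1.524, 1.881),
    ("Jupiter", 5.203, 11.862),
]

def fit_error(p, q):
    """Mean squared log-residual of the candidate law T**q = a**p."""
    return sum((q * math.log(T) - p * math.log(a)) ** 2
               for _, a, T in observations) / len(observations)

# Crude program synthesis: rank every small-exponent program by fit,
# breaking ties in favor of the simpler (lower total exponent) model.
best_p, best_q = min(
    itertools.product(range(1, 6), repeat=2),
    key=lambda pq: (fit_error(*pq), pq[0] + pq[1]),
)
print(f"T^{best_q} = a^{best_p}")  # recovers Kepler's third law: T^2 = a^3
```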
-
Meta’s New Model Criticized for Benchmark Optimization Over Real Utility
By
–
The new model from Meta is already looking like a disappointment: overoptimized for public benchmark numbers to the detriment of everything else. Knowing how to evaluate models in a way that correlates with actual usefulness is a core competency for AI labs, and any new lab is unlikely to succeed without first figuring that out.
-

ARC Prize 2026 Launches with $2M in Prizes and L4 Compute
By
–

ARC Prize 2026: ARC-AGI-2 has been upgraded to L4x4s. Thank you @kaggle for upgrading compute for all participants.
— ARC Prize (@arcprize)

Also live today: ARC Prize 2026 – 3 tracks, $2,000,000 in prizes available! Get involved:
• Play a Game: arcprize.org/tasks/ls20
• Build Agents: docs.arcprize.org
• Win Prizes: arcprize.org/competitions/20…
— https://nitter.net/arcprize/status/2036860092046598213#m
-
Kevin Ellis on DreamCoder: Neurosymbolic AI and Program Synthesis
By
–
On the pod: our most-requested guest! @ellisk_kellis from @Cornell shares the origins of his influential neurosymbolic paper "DreamCoder". Plus: program synthesis, wake-sleep library learning, world models, running an AI research lab, and more.
— Ndea (@ndea) April 7, 2026
-
Deep Learning Researchers’ Limited Exposure to Alternative Learning Methods
By
–
One thing about DL researchers that has always surprised me is that many of them have never been exposed to forms of learning other than fitting the parameters of a curve via gradient descent, and are even unable to conceive that other options might exist.
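One concrete counterexample is any learner built on discrete search. The sketch below, using a made-up toy dataset, learns a decision stump by exhaustively trying every candidate split threshold: it learns from data, yet no parameter is ever touched by a gradient.

```python
# Learning as discrete search rather than gradient descent: choose the
# threshold that minimizes classification error by trying all of them.
data = [(1.0, 0), (2.0, 0), (3.0, 0), (4.5, 1), (5.0, 1), (6.5, 1)]

def stump_error(threshold):
    """Number of points misclassified by the rule: predict 1 iff x > threshold."""
    return sum(int(x > threshold) != y for x, y in data)

# Candidate thresholds: midpoints between consecutive sorted inputs.
xs = sorted(x for x, _ in data)
candidates = [(a + b) / 2 for a, b in zip(xs, xs[1:])]
best = min(candidates, key=stump_error)
print(best, stump_error(best))  # 3.75, 0 misclassifications
```

Decision-tree induction, k-nearest neighbors, and program enumeration all learn this way; none of them fit a curve by gradient descent.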
-
Symbolic Learning vs Curve-Fitting: Reverse-Engineering Generative Programs
By
–
With curve-fitting, you are recording a lossy approximation of the output of some generative program. With symbolic learning, you are losslessly reverse-engineering the source code of the generative program. Symbolic learning won't be the best fit for all problems, but for the ones where the latent program is reasonably simple, it will outperform by many orders of magnitude.
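A toy illustration of the contrast, under an assumed expression grammar (not the author's): data produced by a simple latent program can be recovered exactly by enumerating candidate programs, while a straight-line fit to the same data remains a lossy approximation.

```python
import itertools

# Observations emitted by a hidden generative program.
xs = list(range(1, 8))
ys = [x * x + 1 for x in xs]  # latent program: x**2 + 1

# Symbolic learning: enumerate tiny programs (a, b, c), read as a*x**2 + b*x + c,
# and keep one that reproduces every observation exactly (lossless recovery).
for a, b, c in itertools.product(range(-3, 4), repeat=3):
    if all(a * x * x + b * x + c == y for x, y in zip(xs, ys)):
        print(f"recovered program: {a}*x**2 + {b}*x + {c}")
        break

# Curve-fitting with a straight line: a lossy record of the program's output.
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx
print(f"line fit: y ≈ {slope:.1f}*x {intercept:+.1f}")  # never exact
```

The enumerated program also generalizes exactly outside the observed range, while the fitted line drifts further off; that gap is the point of the post.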