One funny thing about the recent rise of LRMs is that the people who were adamant that base LLMs from 2023-2024 could already reason completely missed it, as they didn't know what to look for. You can't notice something you don't expect.
@fchollet
-
Base LLMs Fail at Math, LRMs Make Progress
The paper below tested a variety of base LLMs (no TTA) on generalization-focused math problems and found that they can't reason and can't do math. All true… but the fact that base LLMs have zero fluid intelligence, while extremely controversial back in 2024, is now well established. An interesting follow-up experiment would have been to try current LRMs on the same problems and measure the delta. I bet the latest LRMs can solve most of these problems. arxiv.org/abs/2604.01988
-

Autocorrelated Time Series Can Appear Structured Despite Being Random
OK, this thread has apparently been a magnet for hordes of drooling morons who not only don't get stats, but can't even read. If you're a normally intelligent reader of this tweet, here's an extra example of my point: take two RANDOM, INDEPENDENT time series (i.e. knowing one gives you NO information about the other) that are each highly temporally autocorrelated (e.g. two random walks), and plot one against the other as a scatter plot. What you get is a single X/Y trajectory that will ALWAYS look very structured. Yet it is random. Like the figure below. Code to reproduce the figure and play around with this idea: colab.research.google.com/dr… Of course, if the two series happen to be correlated, you will ALSO see something very structured. It's just that this type of visualization is a completely misleading way to look at such data. If you think this is deep, you are innumerate.
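The two-random-walks point is easy to check numerically. Here is a minimal NumPy sketch of the same idea (written independently of the linked Colab, which is not reproduced here):

```python
import numpy as np

# Two INDEPENDENT random walks: each step is fresh noise, and neither
# series carries any information about the other.
rng = np.random.default_rng(42)
n = 2000
x = np.cumsum(rng.standard_normal(n))
y = np.cumsum(rng.standard_normal(n))

# Correlation of the LEVELS is often large in magnitude purely because
# both series drift -- the classic spurious-correlation trap.
r_levels = np.corrcoef(x, y)[0, 1]

# Correlation of the INCREMENTS (the actual independent noise) is what
# honestly reflects the relationship, and it hovers near zero.
r_steps = np.corrcoef(np.diff(x), np.diff(y))[0, 1]

print(f"levels r = {r_levels:+.2f}, increments r = {r_steps:+.2f}")
```

Scatter-plotting `x` against `y` traces a single structured-looking trajectory even though the series are independent; differencing both series first is what reveals the absence of any relationship.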
-
Fine-tuning Gemma on TPU v5 with Kinetic, Keras and JAX
Tutorial on fine-tuning Gemma on TPU v5 using Kinetic + Keras + JAX, the easiest stack to fully leverage TPUs at scale. Jigyasa Grover ✨ (@jigyasa_grover) Here is a quick-start script including the setup, technical details, and a candid look at where Kinetic excels versus its current limitations 🪡 github.com/jigyasa-grover/ki… — https://nitter.net/jigyasa_grover/status/2038707745520812099#m
-
ARC-AGI-3 Games Challenge: Humans vs Advanced AI Models
Just spent 10 minutes playing the ARC-AGI-3 games and I genuinely cannot get over it. You figure out the rules yourself in like 2-3 minutes. No instructions, just vibes. GPT-5, Gemini 3, and Claude score below 1% on these. Try it yourself: arcprize.org/arc-agi/3
-

Keras Kinetic Fine-Tuning Tutorial for LLMs on JAX TPU Stack
Good tutorial on using Keras Kinetic to fine-tune LLMs on the Keras + JAX + TPU stack! Kuan Hoong (@kuanhoong) Fine-Tuning Gemma 2B on PubMedQA: Building a Medical Q&A Assistant with LoRA, Keras Kinetic, and Cloud TPU kuanhoong.medium.com/fine-tu… #TPUSprint — https://nitter.net/kuanhoong/status/2039827630661517753#m
-

Kinetic Beta Release – Test and Share Your Feedback
Try it out and send your feedback; it's in beta for now: github.com/keras-team/kineti…
-

Keras Kinetic: Run Jobs on TPU/GPU in Cloud
Perhaps the craziest thing introduced on the Keras community call today: Keras Kinetic, a new library that lets you run jobs on cloud TPU/GPU via a simple decorator, like Modal but with TPU support. When you call a decorated function, Kinetic handles the entire remote execution pipeline:
– Packages your function, local code, and data dependencies
– Builds a container with your dependencies via Cloud Build (cached after the first build)
– Runs the job on a GKE cluster with the requested accelerator (TPU or GPU)
– Returns the result to your local machine (logs are streamed in real time, and the function's return value is delivered back as if it ran locally)
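For intuition, here is a toy, purely local sketch of the decorate → package → run → return-result shape of that pattern. The decorator name, its `accelerator` argument, and the example function are illustrative assumptions, not Kinetic's actual API; the real library ships the payload to a GKE cluster rather than round-tripping it through pickle in-process:

```python
import functools
import pickle

def remote(accelerator="tpu-v5e"):
    """Toy local stand-in for a Kinetic-style remote-execution decorator.
    NOTE: the name and signature here are assumptions for illustration."""
    def wrap(fn):
        @functools.wraps(fn)
        def call(*args, **kwargs):
            # 1. "Package" the inputs, as a remote runner would before
            #    shipping them (real systems also package the code and
            #    its dependencies into a container).
            packed_args, packed_kwargs = pickle.loads(
                pickle.dumps((args, kwargs))
            )
            # 2. "Run the job" -- locally in this toy; on the requested
            #    TPU/GPU accelerator in the real pipeline.
            result = fn(*packed_args, **packed_kwargs)
            # 3. Serialize the result back to the caller, as if it were
            #    returning from a remote machine.
            return pickle.loads(pickle.dumps(result))
        return call
    return wrap

@remote(accelerator="tpu-v5e")
def scale(xs, k=2):
    # Stand-in workload; calling it looks like a plain local call.
    return [x * k for x in xs]
```

Calling `scale([1, 2, 3])` returns `[2, 4, 6]` exactly as a local call would; the decorator hides the whole package/run/return round trip, which is the point of the design.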
-
New FunctionGemma Guide Released on Keras Hub
New guide on FunctionGemma: keras.io/keras_hub/guides/fu…
-

Keras Community Contribution: New CLAHE Image Preprocessing Layer
Keras community member Alan is now presenting the new CLAHE image preprocessing layer — thanks for the contribution!
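For readers unfamiliar with the technique, here is a simplified, pure-NumPy sketch of the clip-limited histogram equalization idea behind CLAHE. Real CLAHE also tiles the image and interpolates between per-tile mappings, and the helper name below is hypothetical; this does not show the new Keras layer's API:

```python
import numpy as np

def clip_limited_equalize(img, clip_limit=0.02, nbins=256):
    """Global, simplified version of CLAHE's core step (hypothetical
    helper for illustration): equalize the histogram, but clip each
    bin's mass and redistribute the excess so no single intensity
    range gains unbounded contrast."""
    hist, _ = np.histogram(img, bins=nbins, range=(0, 255))
    hist = hist.astype(np.float64) / hist.sum()
    # Clip the normalized histogram and spread the clipped-off mass
    # uniformly across all bins (the "contrast limiting" in CLAHE).
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess / nbins
    # Equalize: map each pixel through the normalized cumulative
    # distribution of the clipped histogram.
    cdf = np.cumsum(hist)
    cdf = (cdf - cdf[0]) / (cdf[-1] - cdf[0])
    idx = np.clip((img.astype(np.int64) * nbins) // 256, 0, nbins - 1)
    return (cdf[idx] * 255).astype(np.uint8)
```

On a low-contrast input (e.g. pixel values squeezed into 100–140) this stretches the usable range while the clip limit keeps any one intensity band from being over-amplified; the full algorithm applies the same mapping per tile with bilinear blending.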