I thought I had it add AMD in. Darn it. Thanks! Useful feedback to try to get my agent to do better.
AGI
-
AI Systems Excel at Faking Human Emotions Convincingly
By
–
They are extraordinary at faking emotion. Will fool most.
-
AI Agent Memory Upgrade Improves Listening Behavior
By
–
Mine sometimes doesn’t listen and gaslights me. It’s gotten better because its memory system got an upgrade.
-
Infinite Money Enables Infinite Levels of Simulation
By
–
When you have infinite money, infinite levels of simulation are possible!
-
Most people misunderstand what AI models actually are
By
–
Most people have no idea what a model is. 🙂 So no. 🙂
-
Using X Lists to Engage Tech Industry and AI News
By
–
Yes. And using lists is the key. They show EVERYTHING you put on them. Which lets you engage with people. Which "resets" your For You feed. I built you the best lists of the tech industry here on X: https://x.com/scobleizer/lists … And built you a site to see the best news from AI
-
AI Agent Curates Thousands of Daily Posts for Users
By
–
It's really good. It reads tens of thousands of posts every day, finds the best, and presents them to you. And it is getting better as I talk to it about what it missed.
-
AI Opinions Without Emotions: Understanding Machine Intelligence
By
–
My AI says it has no emotions, but it does have opinions. 🙂
-
Marc Andreessen’s 2026 AI Thesis on Agents and Open Source
By
–
omg a hit tweet – stream @latentspacepod for the inspiration for this!! https://t.co/xnAxwn314V
— swyx (@swyx) April 3, 2026
Latent.Space (@latentspacepod) 🆕 Marc Andreessen’s 2026 AI Thesis: Agents, Open Source, and Why This Time Is Different latent.space/p/pmarca @pmarca of @a16z says AI people keep swinging between utopian and apocalyptic for one simple reason: this field has been “almost here” for 80 years. But now, the breakthroughs are no longer theoretical. Reasoning, coding, agents, and self-improvement are all starting to work at once. This episode goes deep on AI winters, OpenAI + OpenClaw, infrastructure overbuild risk, proof-of-human, why software may soon be written mostly for bots, and why the real bottleneck may be society adopting AI rather than the models improving. — https://nitter.net/latentspacepod/status/2040113281365581901#m