@BobbyGRG It's reproduced a planning method from 1968, from Nils Nilsson's book "Problem-Solving Methods in Artificial Intelligence", on the 2nd most trivial possible planning problem. All thought it would scale; it didn't, for deep reasons. LLMs simulating 50+ year old failed planning methods is not planning.
@rodneyabrooks
-
LLMs Reproducing Failed 1968 Planning Methods, Not True Planning
-
LLMs Lack Emergent Reasoning, Success Is Lucky
Many people are trying to make excuses for why this trivial problem trips up LLMs. Occam's razor should remind you that there is no emergent reasoning in LLMs. Instead they get lucky (and it is astonishing that this works so well; that says something deep about our language).
-
Good Engineering Neglected: Safety Compromised in AI Development
A lot (most) of my Medium feed is chaff, but this one gets at why good engineering, from airplanes to AI apps, is not respected, and instead we get lower safety and [my words] even more load on the end user to accommodate AAS (artificial asinine intelligence) thrust upon us.
-
Programming Robots: Challenges and API Design Pitfalls
@IEEESpectrum has just published @RobustAI's Director of Robotics Benjie Holson's (@robobenjie) piece on why programming robots is hard and the pitfalls of building APIs to make it possible for "non-roboticists" to do it. -
Google fixes AI Overview issues reflecting long tail problems
@willknight has a thoughtful new piece on how Google is trying to fix its AI Overview. This is an instance of my third law of AI: "Without carefully boxing in how an AI system is deployed there is always a long tail of special cases that take decades to discover and fix." -
LLM Hallucinations: Real Risk from Executive Decisions
The talk about hallucinations in LLMs has gotten it all wrong. The true hallucinations are by company execs who think it is OK to release to general users products that are based on LLMs that confabulate wildly, as all LLMs do. Time will show a high price paid by society.
-
Waymo’s Perception Limitation: Trailer With Tree Confusion
A @waymo confused by a trailer with a tree in it (video). Simple image labeling is not perception. You need motion tracking and more complex general inference. My 3rd law of AI: "Without carefully boxing in how an AI system is deployed there is always a long tail of special cases that take decades to discover and fix."
-
US Robotaxi Companies Under NHTSA Investigation Despite Limited Mileage Data
All 3 remaining standing, or limping, US robotaxi companies are under investigation by the National Highway Traffic Safety Administration (NHTSA). Together they have driven 70 million miles, less than 1% of the 9 billion miles driven daily in the US, so it is hard to measure their safety performance.
-
Robots Need Onboard Perception Despite Cloud Information Sharing
Nope! I'm saying that robots benefit from being able to get information from other places (think congestion info, which current auto nav systems get from the cloud) or to contribute to the whole fleet when an individual robot observes something. They still need onboard perception, control, and tasks.
-
Rodney Brooks Opens Strong Korea Forum on Robotics and 5G
I'll be giving the opening keynote at the Strong Korea Forum in Seoul on May 29th. The theme of the meeting is "Refreshing Era: Robotics with Next Generation Telecommunication". I am sure that for the next 10 years better connections between robots and the cloud will have far