Just now, MiniMax responded to the controversy about the license change on the M2.7 model.
@jiqizhixin
-
LLMs Fairness and Consistency in AI Model Evaluation
Are LLMs truly fair and consistent when judging other AI models? A collaborative team from Peking University, NUS, Institute of Science Tokyo, Nanjing University, Carnegie Mellon, Westlake, and Southeast University has the answer! They introduce TrustJudge, a probabilistic …
-
Point-VLA: Robots Understanding Real-World Scenes Accurately
How can we make robots understand exactly what we mean, even in the most cluttered real-world scenes? Researchers from Tongji University, Shanghai Jiao Tong University, Spirit AI, and Tsinghua University introduce Point-VLA. This innovative system supercharges robot …
-
InfoTok: Efficient Long Video Processing Using Information Theory
How do we process long videos efficiently without losing crucial information? NVIDIA, Stanford University, and National University of Singapore have an answer! They introduce InfoTok, a breakthrough method inspired by Shannon's information theory. It intelligently allocates …
-
Goal-VLA: Zero-Shot Robot Manipulation from Images and Instructions
What if robots could perform complex manipulation tasks with zero prior examples, just from an image and an instruction? Researchers from National University of Singapore, The University of Hong Kong, Peking University, and Tsinghua University present Goal-VLA, which uses …
-
ADM-v2: AI Model for Reliable Future Prediction in Offline Learning
Can your AI reliably predict the future for robust offline learning? Researchers from Nanjing University and the University of Montréal introduce ADM-v2. Unlike prior models that accumulate errors over long sequences, ADM-v2 directly forecasts full episodes, offering …
-
OpenResearcher: Open Pipeline for AI-Powered Deep Web Research
How do we train AI agents to perform complex, multi-step research efficiently and reproducibly? Researchers from Texas A&M, University of Waterloo, UC San Diego, Verdent AI, NetMind AI, and Lambda introduce OpenResearcher, a fully open pipeline that simulates deep web research …
-

ByteDance Seed Achieves Zero-Shot Sim-to-Real Dexterous Hand Manipulation
What if robotic hands could learn complex maneuvers purely in simulation and work perfectly in reality? ByteDance Seed just unveiled a breakthrough. Their new RL framework bridges the sim-to-real gap for dexterous hands with a fast virtual tactile simulation, current-to-torque calibration (no extra sensors!), and actuator dynamics modeling with randomization to cover real-world quirks. Result? Policies trained 100% in simulation directly enable a five-finger hand to perform precise, controllable grasp force tracking and object reorientation – a first for zero-shot sim-to-real multi-finger manipulation! Closing the Reality Gap: Zero-Shot Sim-to-Real Deployment for Dexterous Force-Based Grasping and Manipulation Paper: arxiv.org/abs/2601.02778 Project: dexmanip-seed.github.io/dexm… Our report: mp.weixin.qq.com/s/7TQC0sJfw… 📬 #PapersAccepted by Jiqizhixin
→ View original post on X — @jiqizhixin, 2026-04-07 01:35 UTC
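The "actuator dynamics modeling with randomization" idea mentioned above can be sketched in a few lines: each training episode draws simulator parameters from a band around their nominal values, so the policy cannot overfit to one configuration. All parameter names and ranges below are illustrative assumptions, not values from the ByteDance Seed paper.

```python
import random

def randomized_actuator_params(base_stiffness=30.0, base_damping=1.5,
                               base_friction=0.05, spread=0.2):
    """Sample actuator parameters uniformly within +/-spread of nominal values.

    Hypothetical sketch of domain randomization; the real system also models
    current-to-torque calibration and tactile simulation.
    """
    jitter = lambda v: v * random.uniform(1.0 - spread, 1.0 + spread)
    return {
        "stiffness": jitter(base_stiffness),
        "damping": jitter(base_damping),
        "friction": jitter(base_friction),
    }

# Fresh dynamics per episode widen the distribution the policy must cover,
# which is what lets a purely simulated policy survive real-world quirks.
params = randomized_actuator_params()
```

A policy trained against many such draws is, in effect, trained against a family of simulators rather than one.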
-

Comprehensive Survey on World Models in Artificial Intelligence
Nice survey! It tackles the fragmented field of World Models, systems that help AI predict how environments change, and unifies them into four key paradigms, ranging from learning directly from observations to understanding objects and actions. The result is a single map for understanding, comparing, and advancing World Models: it clarifies their performance across robotics, autonomous driving, and game simulation, and identifies critical challenges such as long-term consistency, charting the course for future AI breakthroughs. Learning to Model the World: A Survey of World Models in Artificial Intelligence Project: github.com/JiahuaDong/Awesom… Paper: techrxiv.org/doi/full/10.362… Our report: mp.weixin.qq.com/s/RYATYwUDg… 📬 #PapersAccepted by Jiqizhixin
→ View original post on X — @jiqizhixin, 2026-04-06 18:27 UTC
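As a toy illustration of the core idea the survey organizes (a model that predicts how its environment changes, so an agent can "imagine" trajectories without touching the real environment), here is a minimal linear latent world model. The class and its dynamics are purely hypothetical; real World Models use learned encoders and recurrent or transformer cores.

```python
import numpy as np

rng = np.random.default_rng(0)

class LinearWorldModel:
    """Toy latent world model: z_next = A @ z + B @ a."""

    def __init__(self, z_dim=4, a_dim=2):
        # Random fixed dynamics stand in for learned parameters.
        self.A = rng.normal(scale=0.1, size=(z_dim, z_dim))
        self.B = rng.normal(scale=0.1, size=(z_dim, a_dim))

    def step(self, z, a):
        return self.A @ z + self.B @ a

    def rollout(self, z0, actions):
        # Imagine a trajectory entirely inside the model: no environment calls.
        traj = [z0]
        for a in actions:
            traj.append(self.step(traj[-1], a))
        return np.stack(traj)

wm = LinearWorldModel()
traj = wm.rollout(np.zeros(4), [np.ones(2)] * 5)  # 5 imagined steps
```

The "long-term consistency" challenge the survey highlights shows up even here: small per-step prediction errors compound over the length of the rollout.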
-

One-Step AI Image Generation Framework Achieves State-of-the-Art Results
What if you could generate stunning AI images in a single step, without compromising quality? Researchers from Westlake University, Chinese Academy of Sciences, and DP Technology present a breakthrough. They've introduced a new framework that simplifies the design of 'shortcut' diffusion models. This framework clarifies how to build more efficient one-step image generators by disentangling their core components. Their model achieves a new state-of-the-art FID50k of 2.85 on ImageNet-256×256 with one-step generation, and 2.53 with two steps. Remarkably, it requires NO pre-training, distillation, or curriculum learning! On the Design of One-step Diffusion via Shortcutting Flow Paths Paper: openreview.net/forum?id=k6q8… Code: github.com/EDAPINENUT/Explic… Project: edapinenut.github.io/explici… Our report: mp.weixin.qq.com/s/BptmtBa_O… 📬 #PapersAccepted by Jiqizhixin
→ View original post on X — @jiqizhixin, 2026-04-06 14:23 UTC
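For readers unfamiliar with the FID50k numbers quoted above: FID (Fréchet Inception Distance, lower is better) is the Fréchet distance between Gaussians fitted to Inception features of real and generated images. Below is a simplified sketch under the assumption of diagonal covariances; the full metric requires a matrix square root of the covariance product, and this helper is illustrative rather than the evaluation code used in the paper.

```python
import numpy as np

def fid_diagonal(mu1, var1, mu2, var2):
    """FID between two Gaussians with diagonal covariances.

    FID = ||mu1 - mu2||^2 + sum(var1 + var2 - 2*sqrt(var1*var2))
    """
    mu1, var1, mu2, var2 = map(np.asarray, (mu1, var1, mu2, var2))
    mean_term = np.sum((mu1 - mu2) ** 2)
    cov_term = np.sum(var1 + var2 - 2.0 * np.sqrt(var1 * var2))
    return float(mean_term + cov_term)

# Identical distributions score 0; shifting the mean raises the distance.
assert fid_diagonal([0, 0], [1, 1], [0, 0], [1, 1]) == 0.0
```

A score of 2.85 on ImageNet-256×256 means the generated-image feature distribution sits very close to the real one, which is why matching it in a single sampling step is notable.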