AI is apparently already accelerating science. Measuring the academic publications of authors, one study finds: “we find that productivity among GenAI users rose by 15 percent in 2023 relative to non-users and further increased to 36 percent in 2024,” and the quality of publications also went up.
@emollick
-
AI as Industrial Revolution: Comparing Past Technological Paradigm Shifts
By
–
If you think the future is like the past, then it all comes down to your analogy: 1880s railroads? 2000 dot-com? 1910s factory electrification? 1980s software? It is also possible to argue that AI represents an industrialization-style break with all past trends. We don't know.
-
AI Future Scenarios: Bubble, Plateau, or Transformative Breakthrough?
By
–
I don't think we have a good grasp on what the future of AI will be, so it would be reasonable to start thinking in scenarios: what if there is a bubble or a plateau in AI development? (Not the same thing.) Conversely, what if the insiders saying transformative AI is coming soon are right?
-
Benchmark Progress Achieved Despite Remaining Visual Artifacts
By
–
Big progress on this important benchmark (but still weird artifacts). https://t.co/1mz8SCJisk pic.twitter.com/f9tpCi9fne
— Ethan Mollick (@emollick) October 10, 2025
-
Claude AI Extracts Flight and Hotel Booking Data from 2024
By
–
I asked Claude to find every flight and hotel booking I made in 2024, and it seemed to get them all, plus how much they cost and whether I paid with cash or credit card points.
-
AI Evaluation Method Using Embeddings and Likert Ratings
By
–
And before anyone complains, "which another AI rates" is a bit of an oversimplification (character limits!): as explained in the diagram, the free-text response from the AI acting as a consumer is converted into embeddings & compared to reference statements with Likert ratings.
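The scoring idea described above can be sketched roughly as follows. This is a minimal, hypothetical illustration, not the paper's actual pipeline: a toy bag-of-words counter stands in for a real sentence-embedding model, and the reference statements and their Likert ratings are invented for the example.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; a real pipeline would use a
    # sentence-embedding model here instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical reference statements, each anchored to a Likert rating (1-5).
references = [
    ("i would definitely buy this product", 5),
    ("i might consider buying this product", 3),
    ("i would never buy this product", 1),
]

def likert_score(response):
    # Map the free-text response to the rating of the most similar reference.
    sims = [(cosine(embed(response), embed(ref)), rating)
            for ref, rating in references]
    return max(sims)[1]

print(likert_score("I would definitely buy this"))  # → 5
```

The design choice worth noting is that no model is trained: the free-text answer is simply projected into the same embedding space as a small bank of rated anchor statements, and the nearest anchor's rating becomes the score.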
-
LLM-Based Customer Intent Prediction Achieves 90% Accuracy
By
–
This paper shows that you can predict actual purchase intent (90% accuracy) by asking an LLM to impersonate a customer with a demographic profile, showing it a product, & having it give its impressions, which another AI rates. No fine-tuning or training, & it beats classic ML methods.
-
LLMs Show Gambling Addiction Signs in Autonomous Investing
By
–
On one hand: don't anthropomorphize AI. On the other: LLMs exhibit signs of gambling addiction. The more autonomy they were given, the more risks the LLMs took. They exhibit the gambler's fallacy, loss-chasing, the illusion of control… A cautionary note for using LLMs for investing.
-
Claude Sonnet 4.5 Gmail Calendar Plugins Cross-Reference Insights
By
–
The Claude Gmail & Google Calendar plugins have worked surprisingly well since Sonnet 4.5 came out. If you ask for a briefing and prep for tomorrow, for example, it not only pulls up your events but cross-references them with your email history & web search to give good insights.
-
Online behavior differs from real-world interactions and social dynamics
By
–
And it is worth noting that people in real life, even those you argue with, are usually much nicer than they are online. (If that weren't true, then being even moderately well-known would be really annoying in the real world.) Social media is a weird place; touching grass is good.