Interesting early evidence that Sora, the video generation model from OpenAI, may indeed be a big leap in capabilities: “we observe that the videos generated with Sora are good enough for 3D reconstruction, with significant advantages across all selected metrics” https://arxiv.org/abs/2402.17403
@emollick
-
Sora’s Video Generation Capabilities Enable Advanced 3D Reconstruction
-
Copilot Sydney Personality: Human Trust and AI Anthropomorphization
I am not seeing signs that Copilot has “brought Sydney back” – responses seem pretty normal so far. However, I am actually a fan of giving AI more personality. People anthropomorphize anyway, and you might be less trusting of a slightly over-the-top Sydney than an “objective” AI.
-
Autocorrect Obscures Causal Relationships in Language Models
Autocorrect makes causality seem far too casual.
-
Causal Explainer GPT Tool for Statistical Analysis
Before anyone asks, here are the results of my “Is it Casual” GPT. https://chat.openai.com/g/g-GGnYfbTin-correlation-isn-t-causation-a-causal-explainer
-
Nature Study Warns of Concerning AI Development Implications
I have some bad news for you. Yes, you. https://nature.com/articles/s44271-024-00062-z.pdf
-
LLM Explanations: Plausibility Versus Faithfulness Analysis
Paper on how to think about how LLMs explain their answers as plausible (do they make sense) & faithful (do they accurately represent how the LLM "thought"). For some cases, like the LLM explaining how to calculate 5!, you want plausible but not faithful. https://arxiv.org/pdf/2402.04614.pdf
-
AI-Generated Magic Cards Showcase Innovation Through Creative Remixing
I like this feed of AI-generated magic cards, and not just because I am a nerd. They illustrate how AI can be valuable in innovation, even when it is just remixing ideas. Recombinations result in unexpected & unexplored novelty. Humans can easily filter out bad ideas & keep good ones.
-
Expert Identity Threats Drive Overconfident Predictions
Relevant paper: six studies show that when an expert makes a wrong public prediction and is called out for the mistake by another expert, their identity becomes threatened. They tend to respond by doubling down with even more overprecise predictions rather than acknowledging errors.
-
Creative AI as Companion to Human Creativity and Discovery
Creative AI is going to be an interesting companion to human creativity, helping find unexpected connections. For example, Metaphor uses predictive search to guess the links that might shed light on your question. The answers are uneven, but fascinating. https://metaphor.systems