To a degree that may surprise some people, I agree with much of this* from @deanwball, and would only add that you don’t have to believe that AGI is remotely close to want to find, ASAP, a regulatory regime that fosters innovation but also protects effectively against downside risks like massive cybercrime, GenAI-influenced delusions, mass disinformation from foreign actors, nonconsensual deepfake porn, etc. We should not dismiss AI; we should not let it run entirely free. We need some middle ground.

*I think that current frontier models are highly capable in some ways but not others, and that in some important ways the capability-growth bulls have been importantly wrong (e.g., about how readily hallucinations could be remedied), but I don’t think that changes the need to act now.

Dean W. Ball (@deanwball):

“Describing highly capable frontier AI models as highly capable” is not “fear-mongering.” “Taking AI seriously” is not “fear-mongering.” “Acknowledging obvious, realized or soon-to-be-realized risks” is not “fear-mongering.”

The stark reality is that those who have taken AI capabilities growth seriously have been basically right about most important things in the last three years; those who haven’t have been consistently confused and, what’s worse, frustrated at the world about their own confusion.

You don’t have to be a mega-pessimist or a “doomer” to take AI seriously. You don’t have to advocate for stark top-down controls over AI. You don’t have to support regulatory capture. It is possible to take AI seriously and advocate for a governmental response that is both effective *and* measured.

To the young researchers out there, still trying to make their intellectual fortunes: do not let anyone tell you otherwise. Do not let anyone bully you into believing otherwise. Think for yourself.

— https://nitter.net/deanwball/status/2042685538415841742#m
— @garymarcus, 2026-04-10 21:46 UTC