AI Dynamics

Global AI News Aggregator

REGULATION

  • Stanford HAI discusses global AI landscape at FII Miami event

    Great to join @drfeifei at @FIIKSA Miami today for a wide-ranging discussion on the global AI landscape. Great conversation on the American AI Export Program, world models, humanoid robots, and what it’ll take to get the new @WhiteHouse National AI Policy Framework legislated.

    → View original post on X — @stanfordhai, 2026-03-28 00:19 UTC

  • NeurIPS 2026 Clarifies Sanctions Policy After Community Concern

    We want to speak directly to the concern many of you have expressed, and we owe you a clear explanation of what happened, why it happened, and where we stand now. We understand this situation caused genuine alarm and we take that seriously.

    In preparing the NeurIPS 2026 handbook, we included a link to a US government sanctions tool that covers a significantly broader set of restrictions than those NeurIPS is actually required to follow. This error was due to miscommunication between the NeurIPS Foundation and our legal team; there was never an intention to restrict participation beyond our mandatory compliance obligations. The responsibility for that error is ours as an organization, and we deeply apologize for the alarm and impact this miscommunication had on our community.

    We have updated the link and clarified the text of our policy, which is consistent with that of ACM and IEEE, as well as other international conferences and NeurIPS in the past. As in previous years, NeurIPS welcomes submissions from all compliant institutions and individuals.

    We want to reiterate that NeurIPS is a community-driven event, created by and for the community, and strives to be inclusive. The NeurIPS 2026 organizing committee was particularly saddened to learn of this institutional miscommunication. The organizing committee has taken on the responsibility of running the conference this year with the goal of fostering open communication, knowledge sharing, and global scientific discourse. We thank the community for bringing this issue to our attention and working with us through this situation.

    → View original post on X — @hugo_larochelle, 2026-03-27 07:57 UTC

  • Governing the Agentic AI Ecosystem: Future of Autonomous Intelligence

    The Era Of The 'Agentic' Ecosystem: How To Govern A World Run By #AI
    by @gregoriopatino @Forbes Learn more: https://bit.ly/4d4k68G #GenAI #ArtificialIntelligence #MachineLearning #ML

    → View original post on X — @ronald_vanloon

  • Anthropic Wins Preliminary Injunction Against Pentagon Designation

    This is a devastating ruling for the government, finding Anthropic likely to prevail on essentially all of its theories for why the government’s actions were unlawful and unconstitutional. One of the things she mentions is the huge range of amici briefs supporting Anthropic (by the way, zero supported the USG); so thanks to everyone here who signed on to FAI’s brief, or to one of the many, many others. These things do matter. More importantly, you were on the right side of history.

    On a personal note: some friends and allies of mine on the right have been angry at me for my own words and actions in all this. Anyone who thinks I spoke out for personal gain or trivial reasons against an administration I served in is crazy. It was a hugely costly decision for me. But Judge Lin’s ruling shows why I did it: this is a staggeringly illegal act by the government. That is why I am particularly honored to have been (implicitly) quoted in the ruling for calling this what it was when Secretary Hegseth initially made his announcement: an attempted act of corporate murder.

    The case continues, but Anthropic has scored a very large win here. The real victors, however, are all red-blooded Americans who are, as the founders would have said, “jealous of their liberties.”

    Hadas Gold (@Hadas_Gold): BREAKING: Anthropic has been GRANTED a preliminary injunction re: Pentagon 'supply chain risk' designation by Judge Rita Lin in California, but she is allowing a one-week stay. storage.courtlistener.com/re… — https://nitter.net/Hadas_Gold/status/2037306177319587934#m

    → View original post on X — @jeffdean, 2026-03-27 00:18 UTC

  • MIRI Proposes US-China AI Superintelligence Development Halt Agreement

    Speaking on behalf of MIRI TGT (not necessarily MIRI overall): We share many of the same concerns, which is why we structured our model agreement (below) the way we did. It invites broad participation, but also features mechanisms to address states which insist on operating outside of the agreement, while prioritizing the national security requirements of the US and China.

    So to address question 5 upfront, “should [this] be a global agreement?”: Yes! We think the US and China would be a sufficient seed to get broad participation via their network of allies, superpower status, and AI dominance.

    Now going point by point:

    “1. Assuming we achieve the desired policy goal through a bilateral US/China agreement, what would be the specific metric or objective we would say needs to be satisfied in advance? Who decides whether we have satisfied them? What if one party believes we have satisfied them but the other does not?”

    There are two interpretations of this question. Interpretation (1): what metric is used to determine whether the desired policy goal is being achieved? Interpretation (2): what metric is used to determine when a halt is to terminate? I’ve tried to address both below.

    The policy goal is to forestall the development of superintelligence long enough for other, better solutions to be realized. It is hard to say what these solutions will be in advance, as humanity is nowhere near being able to align a superintelligence. The field doesn’t have a clear path to solving that technical problem. Furthermore, solving alignment isn’t sufficient on its own, and the other thorny problems (such as concentration of power) require similar focused effort which we aren’t seeing on current timelines.

    The key metric we use to know if that goal is accomplished is the confidence within the leadership of the US and China that no one is advancing the frontier of AI general intelligence capabilities anywhere. This confidence is reflected by the continued willingness of these actors to participate in the agreement, and springs from a combination of restrictions/controls, transparency, verification, and intelligence gathering. It would be great if we can attain this confidence without much constraint on the beneficial uses of AI we already see today, and our agreement aims to preserve these! The agreement is not accomplishing its aims if only one of these key parties has such confidence. We have tried to accommodate the requirements we think the USG and CCP would have, but also expect that many details would need to be ironed out through an actual negotiation and implementation effort.

    “2. If the goal is achieved through a bilateral US/China agreement, would we need capital controls to ensure that U.S. investors cannot fund semiconductor fabs, data centers, or AI research labs in countries other than the U.S. and China?”

    Yes, just like how the U.S. makes it hard for you to fund terrorists or give money to the North Korean military.

    “3. Would we need to revoke the passports of U.S.-based AI researchers and semiconductor engineers to prevent them leaving America to join AI-related ventures elsewhere? How else would the U.S. and China keep researchers within their borders?”

    There will be no shortage of technical work for talented researchers under our proposed agreement, and the best approach is for states to modify their incentives (i.e. pay them well) to act in our collective interest, in the style of efforts like the International Science and Technology Center. In 1994, the ISTC kept former Soviet nuclear researchers employed in peaceful work so that they wouldn’t sell their expertise to proliferators.

    We anticipate that some researchers will emigrate to non-signatories and pursue covert work, in spite of any efforts. The agreement aims to provide the US and China with sufficient confidence that these efforts will fail through a combination of compute denial, detection, and enforcement. The framing of this question seems to imply that some agreements may only aim to address AI development within the US and China, and that such development must not leave those jurisdictions. We agree that is not viable. We cover this in Article XII.

    “4. How should we grapple with the fact that (2) and (3) are common features of autocratic regimes?”

    It doesn’t look like it takes a qualitatively different level of “autocracy” than was required to prevent the proliferation of nuclear weapons. Limiting the development and deployment of extraordinarily dangerous technology is a feature of our American system of government, which prioritizes the defense of individual life, freedom, and property. Preventing you from refining uranium in your basement and assembling a nuke in your garage is an impingement upon your freedom, but that doesn’t mean society should let you do it, and it doesn’t mean the government needs to become an autocracy to prevent it. So too with superintelligence. We charge our military and Intelligence Community with ensuring the safety and freedom of Americans against all threats. Through careful institutional design and adherence to our constitution we can avoid abuse of the power granted by our agreement.

    As an aside, we believe that the potential for abuse of our agreement is less than the potential for abuse of AI systems developed and employed by the government without constraint, or the potential for abuse in arrangements where the government is allowed to gatekeep access to powerful AI.

    Read More: An International Agreement to Prevent the Premature Creation of Artificial Superintelligence techgov.intelligence.org/res…

    → View original post on X — @esyudkowsky, 2026-03-26 23:19 UTC

  • How Artificial Intelligence Could Transform Scotland’s Public Services

    How could #AI change Scotland's public services?
    by @BBCPhilipSim @bbcnews Learn more: https://bit.ly/4sKkka1 #ArtificialIntelligence #MachineLearning #ML

    → View original post on X — @ronald_vanloon

  • Export Controls Included in New Measures

    Andrew Curran (@AndrewCurran_): It does include export controls. Specifically: — https://nitter.net/AndrewCurran_/status/2037241871936958557#m

    → View original post on X — @esyudkowsky, 2026-03-26 19:45 UTC

  • Chip Export Controls: Positive Stance Shift

    Update: I'm told that this bill contains export controls on chips, to prevent them from going to any other country that fails to sign onto safety standards. That's huge, and I shift my stance from neutral to positive on the effects of the bill if passed.

    Ori Nagel (@ONagel33303): "In a sane world, what happens is the leadership of the United States sits down with the leadership in China and leadership around the world to work together so that we don't go over the edge and create a technology which could perhaps destroy humanity." — Bernie Sanders
    https://nitter.net/ONagel33303/status/2036975303126347960#m

    → View original post on X — @esyudkowsky, 2026-03-26 19:44 UTC

  • AI Regulation vs Market Competition: Existential Risk Concerns

    That's like saying 'strong gov't controls over nuclear weapons should concern us more than market competition between nuclear weapons producers'. All AI CEOs agree that stronger AI is likely to be an extinction risk — and most are begging for gov't regulation to reduce that risk. And you want to just let them develop unregulated tech that endangers us all? Why exactly do you think that would be prudent?

    → View original post on X — @esyudkowsky, 2026-03-26 19:24 UTC