Locked Out: How a Federal Moratorium on AI Regulation Could Reshape Healthcare Oversight

On May 22, 2025, the U.S. House of Representatives passed the “One Big Beautiful Bill Act” (H.R. 1) by a vote of 215-214. The bill includes a significant provision imposing a 10-year moratorium on state and local regulation of artificial intelligence (“AI”) systems. The provision aims to centralize AI oversight at the federal level, preempting a patchwork of state laws and regulations concerning AI. While proponents argue this will foster innovation and prevent a fragmented regulatory landscape, critics raise concerns about its potential impact on healthcare, state sovereignty, and the legislative process itself.

The Proposal

The moratorium appears in Section 43201 of H.R. 1, titled “Artificial Intelligence and Information Technology Modernization Initiative,” and provides as follows:

“. . . no State or political subdivision thereof may enforce any law or regulation regulating artificial intelligence models, artificial intelligence systems, or automated decision systems during the 10-year period beginning on the date of the enactment of this Act.”

The following definitions, found in subpart (d) of Section 43201, apply to the proposed moratorium:

  • Artificial Intelligence: The term “artificial intelligence” has the meaning given in Section 5002 of the National Artificial Intelligence Initiative Act of 2020 (15 U.S.C. § 9401), which defines it broadly as a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.
  • Artificial Intelligence Model: A software component of an information system that implements artificial intelligence technology and uses computational, statistical, or machine-learning techniques to produce outputs from a defined set of inputs.
  • Artificial Intelligence System: Any data system, software, hardware, application, tool, or utility that operates, in whole or in part, using artificial intelligence.
  • Automated Decision System: Any computational process derived from machine learning, statistical modeling, data analytics, or artificial intelligence that issues a simplified output — such as a score, classification, or recommendation — to materially influence or replace human decision-making.

On its face, the moratorium’s broad phrasing would apply not only to state governments but also to local municipalities, counties, and agencies, effectively preempting all forms of subnational regulation of AI.

Healthcare Implications

State-Level Protections at Risk

Several states have recently enacted or are considering legislation to regulate the use of AI in healthcare, particularly concerning insurance claim processes:

  • California: Enacted the “Physicians Make Decisions Act” (SB 1120), prohibiting health insurers from using AI to deny coverage without human oversight.
  • Connecticut: Proposed SB 817/HB 5590 to prevent insurers from automatically downcoding or denying claims using AI without peer review.
  • Florida: SB 794, introduced in February 2025, would have mandated human review of insurance claim denials and limited the sole use of AI in such decisions; it was indefinitely postponed and withdrawn from consideration on May 5, 2025.
  • Maryland: SB0987, as proposed, would require the Maryland Health Care Commission to establish a registry for AI health software and prohibit the operation of unregistered AI health tools within the state. It would also restrict health insurance carriers from using AI to make or directly influence healthcare decisions.
  • Massachusetts: Proposed Bill S.46 to regulate the use of AI in healthcare settings, specifically requiring healthcare providers to disclose the use of AI tools in patient care decisions.

These state-level initiatives aim to ensure that AI does not compromise patient care by making autonomous decisions without adequate human oversight. The federal moratorium in H.R. 1, however, could nullify these protections, potentially increasing reliance on AI in critical healthcare decisions without sufficient checks and balances.

Potential Risks

Critics argue that removing state oversight could lead to:

  • Increased Denials: AI systems might deny claims more frequently or inappropriately without human judgment.
  • Lack of Transparency: Patients may not be informed when AI is used in decision-making processes affecting their care.
  • Reduced Accountability: Without state regulations, there may be fewer avenues for patients to appeal or challenge AI-driven decisions.

Federal Preemption and State Opposition

The proposed moratorium has sparked significant opposition from state officials and lawmakers:

  • A bipartisan group of 35 California lawmakers, joined by Governor Gavin Newsom, urged Congress to reject the provision, citing concerns over public safety and state sovereignty.
  • The National Conference of State Legislatures also recently criticized the moratorium proposal, stating it undermines states’ ability to protect their residents.

Procedural Challenges: The Byrd Rule

Apart from the healthcare policy debate, the moratorium’s inclusion in a budget reconciliation bill raises procedural concerns under the Senate’s Byrd Rule, which prohibits provisions that are extraneous to budgetary matters, including those whose budgetary effects are merely incidental to their policy aims. Because the moratorium does not directly affect federal spending or revenues, it may be deemed ineligible for inclusion in a reconciliation bill.

Conclusion

The 10-year federal moratorium on state AI regulations presents a complex intersection of technological advancement, healthcare policy, and federalism. While aiming to create a unified national framework for AI oversight, it risks overriding state protections designed to safeguard patient care and autonomy. As the bill moves to the Senate, its fate may hinge on procedural rules and the broader debate over the appropriate balance between innovation and regulation.