The Patchwork Myth
State policies point to an emerging pro-human consensus on AI.
The issue of federalism has once again emerged in public debate as the Trump administration and industry allies push for national regulation of artificial intelligence platforms that would preempt state laws. While there is certainly a precedent for federal preemption and a national standard may sound promising when it comes to emerging technology, allowing states to operate as laboratories of democracy may still help us arrive at better outcomes, especially when they are working from a shared set of principles.
Two attempts were made at federal preemption last year, the first failing in a 99-1 Senate vote last summer and the other being withdrawn from the National Defense Authorization Act after strong opposition by the public. President Trump then released an executive order, “Ensuring a National Policy Framework for Artificial Intelligence,” and the White House issued a federal AI policy framework last month outlining the administration’s legislative priorities.
The new document, intended to provide scaffolding for congressional legislation, delivers on the administration’s promise for a “minimally burdensome” national standard. It outlines specific yet minimal protections for children, communities, and creators while also prohibiting censorship, prioritizing innovation through things like regulatory sandboxes, and promoting workforce readiness by incorporating AI into education.
Controversially, the order concludes with a renewed call for Congress to “preempt state AI laws” in order to ensure a “national standard consistent with these recommendations, not fifty discordant ones.” The reasoning here is familiar to those who have followed the debate. White House AI Czar David Sacks has repeatedly argued for federal preemption by citing a “patchwork” of state AI policies that threatens technological innovation and would put us behind adversaries like China. According to Sacks, the introduction of “1,200” AI-related state bills this year is an indication of “50 states going in 50 different directions.”
If that were the case, federal preemption would certainly be useful (provided, of course, that preemption set a floor and not a ceiling for state-level children’s safety and consumer protections). However, while the U.S. should certainly promote technological innovation and national security, Sacks’s characterization of the state AI policy landscape is flawed, and the urgency of federal preemption is overstated.
According to an Institute for Family Studies survey, a consensus among states appears to be emerging about the kinds of issues and concerns Americans want addressed when it comes to AI. As indicated by the significant overlap across the states, Americans want lawmakers to prioritize AI policies that ensure inquiry, humanity, transparency, safety and security, and accountability.
To start, the number of bills introduced is not indicative of the state of AI regulation. Last year, for example, only 136 of the proposed 1,136 bills became state law. And only 26 of those regulated private AI use or development.
To gain more clarity on the state-level AI policy landscape, and thus on the need for the federal action the Trump administration has called for, the IFS report surveyed and categorized AI-related laws enacted between 2023 and 2025. According to the report, only 276 AI-related laws were enacted over those three years. This number may seem large, but the overwhelming majority of these laws only address AI in general ways, such as appropriating funds for AI-related research, creating task forces to establish policies around AI use by government employees, or clarifying that existing child pornography laws also cover AI-generated content.
Only 33 of the 276 regulate the development or use of AI tools by private businesses, the issue that most concerns the White House. This is a far cry from the thousands of bills cited by Sacks or a “burdensome” patchwork that threatens to undermine America’s AI leadership. Here’s how the states are following the principles mentioned above.
Inquiry
Between 2023 and 2025, 39 states enacted laws that invested in AI-related research. Such laws include million-dollar appropriations to state flagship schools such as the University of Wyoming, the establishment of research centers like the Rhode Island Life Science Hub and the Sunshine Genetics Consortium, the creation of committees like Tennessee’s AI Advisory Council and West Virginia’s Task Force on Artificial Intelligence, and the authorization of state agencies to create policies for AI use.
These policies indicate a shared appetite for an increased understanding of AI, which is especially important as public trust remains one of the greatest hurdles to the technology’s acceptance. (According to an NBC News survey, Americans hold a lower opinion of AI than of Donald Trump, Kamala Harris, or Immigration and Customs Enforcement.) Inquiry is also important because AI technologies remain opaque, even to their chief developers and engineers, some of whom have described AI as “alien biology.”
While the Trump administration’s AI framework would not necessarily prevent states from pursuing AI research, it’s important that states retain as much freedom as possible so as to build safer tools and greater public trust. Such freedom allows for a variety of research approaches and projects, including the kinds that could help protect consumers and children—which voters desperately want. Additionally, investments in universities and research centers are likely to counterbalance the rapid pace and insularity of frontier AI safety labs, with longer-term academic research yielding deeper understanding and trust.
Humanity
A majority of states have also enacted legislation aimed at protecting the dignity and humanity of Americans from AI-related harms. Such laws address everything from companion chatbots and deepfakes to personal rights and automated decision-making tools used in health care or employment. Whether it’s the Texas Responsible Artificial Intelligence Governance Act (TRAIGA) that prohibits AI products from encouraging harm, self-harm, and other illegal activity, Tennessee’s ELVIS Act that includes one’s voice as a protected personal right, or the 23 states that have updated child pornography laws to include AI-generated material, states repeatedly demonstrate a will to protect Americans from abuses of AI that threaten their dignity and humanity.
To be sure, some of these laws have drawn criticism from the Trump administration and its pro-accelerationist allies. For example, Colorado’s Artificial Intelligence Act (CAIA) will create, among other things, extensive reporting requirements for an AI product’s disparate impact and discrimination risks when used in high-risk contexts such as hiring. According to AI policy expert Dean Ball, the requirements of this particular law are overly broad and vague and would be incredibly burdensome for AI developers. Moreover, there is concern that such laws, if enacted in states with large populations like California or New York, would have an outsized impact on the market, establishing a de facto national standard.
Indeed, CAIA and other “woke” blue state laws are among the chief reasons the Trump administration wants a federal standard. However, no other enacted state law approaches AI regulation with a breadth that comes close to CAIA’s, which itself has yet to go into effect and is in the process of being revised. Instead, most state-level automated decision regulation is fairly narrow.
For example, a Maryland law enacted in 2025 prohibits AI-driven recommendations from being the sole basis for denying, delaying, or modifying health care services. At least six states have enacted similar laws. Moreover, these kinds of laws are not limited to blue states. Red states such as Texas, Utah, Nebraska, Arkansas, and Florida have passed laws regulating the use and development of commercial AI to ensure that these tools do not replace human judgment.
At their best, these laws reinforce Americans’ desire for tech design that ensures people are treated with dignity. While the White House is right to be worried about overly broad regulations such as CAIA, the administration’s push for a “minimally burdensome” framework and opposition to red state laws such as Utah’s HB 286 and Florida’s AI Bill of Rights indicate a laissez-faire approach that is out of step with Americans’ desire to protect their humanity.
Transparency
A number of states have also enacted what are commonly referred to as “transparency” laws. From 2023 to 2025, at least ten states enacted 19 such laws requiring risk assessments, safety protocols, and disclosures for AI developers and deployers. For example, laws such as California’s AI Transparency Act and New York’s RAISE Act require AI developers to publish safety protocols and mitigation strategies for serious harms and catastrophic risks (defined as either 100 serious injuries or $1 billion worth of damages).
In Utah, a new law requires businesses using consumer-facing AI to disclose that the tool is not human. Other states have enacted transparency requirements for the use of AI by government (e.g. TX HB 149) or health care professionals (e.g. IL HB 1806).
Again, some of these laws have faced criticism for creating burdensome reporting requirements or insufficient protections. Nevertheless, they convey that American citizens want to understand the risks of these products and desire to protect users, which ultimately can bolster public trust in AI tools.
Safety, Security & Accountability
Americans want AI to be safe and secure. Between 2023 and 2025, states enacted a host of laws addressing safety and security concerns. For example, two states—Kansas and Oregon—enacted laws prohibiting the use of AI products owned or developed by a foreign corporate entity or country of concern. At least five states have expanded data privacy laws with respect to AI.
On the safety front, 38 states now regulate the creation of AI-generated deepfakes, including deepfake political ads, intimate images, and child sexual abuse material (CSAM). These laws create civil and criminal penalties for bad actors who misuse generative AI, establishing a critical mechanism to address AI-related dangers, especially for child victims.
Eight states have enacted AI chatbot safety laws. Some of these laws simply require the disclosure of a chatbot’s non-human status. But others, like those in California, New York, and New Hampshire, require that companion chatbot developers establish protocols preventing the bot from encouraging suicide and self-harm. New Hampshire’s and California’s laws also prohibit chatbots from engaging in sexually explicit conduct with minors. And Texas’s TRAIGA prohibits AI chatbots from engaging in or encouraging self-harm, harm to others, or any otherwise illegal behavior.
As with other areas of online safety, liability is critical for AI—not just for individual bad actors who dispense deepfake sexual imagery without consent, but for the companies whose products facilitate harm. A number of the laws create liability for AI product developers, especially when it comes to chatbots harming minors.
A number of the state-level concerns are reflected in the Trump AI policy approach. Indeed, the administration’s framework is at its strongest when prioritizing children’s safety regulation, especially its recommendation for age assurance requirements, along with its commitment to national security. However, these priorities must be realized through legislative language that creates real accountability for tech companies and adequate protections for users, absent any loopholes or carveouts for industry.
The emerging AI policy consensus from the states offers a critical foundation for the AI regulation Congress and the Trump administration should consider at the federal level. Such consensus is not a repudiation of the need for any federal preemption. While many states are addressing similar concerns in similar ways, the exact language and requirements of these laws do in fact vary. A federal standard will almost certainly become necessary for the sake of clarity and consistency, and such preemption would be well within Congress’s ordinary course of action.
With that said, the specifics of federal preemption matter greatly. When it comes to emerging technologies like AI, an overly broad preemption could be counterproductive, stifling states’ ability to respond to unforeseen challenges and harms. As laboratories of democracy, the states play a critical policy role, providing test cases for different approaches and thus allowing federal lawmakers to determine which are most effective and where there is the greatest consensus.
At its best, a federal standard would address not only the shared concerns of Americans as reflected in state-level policies; it would also establish narrow preemptions that create a floor, not a ceiling, for regulation, allowing states to adapt to new challenges. As we venture further into the artificial intelligence age, the laboratories of technology will continue to need all 50 laboratories of democracy if AI is to be governed by the people and for the people.

Federalism works best on policies that are local by nature. For example, a national building code makes no sense. 50 separate policies on immigration make no sense either.
Regulation of AI, social media, and the Internet in general is perhaps the most non-localist issue possible. It’s located in an imaginary place called cyberspace, and anything that happens there affects everyone on the entire globe. This is especially true for AI.
I can't imagine a more clear-cut case for federal preemption. This article talks a lot about freedom, but the whole point of Commonplace is not maximal liberty but the pursuit of the common good. Defining that requires a broad national standard; 50 separate "common goods" don't work.
What I would like to see is regulation regarding resource consumption, especially water. These data centers consume massive quantities of power and water, leaving the communities they are built in starving for those same resources.