Imagine this: You’re waiting your turn at the DMV only to discover that it’s under new management. The federal government announced a ban on state driving laws just this morning. No longer will states be allowed to set their own public guardrails like speed limits. Instead, Washington will write the rules—maybe.
Because while calling for one set of national rules, Washington officials aren’t convinced driving rules are necessary for safe roads. They argue that generally applicable, “technology-neutral” laws are enough: states can manage reckless driving through broad prohibitions on violence and negligence. Why penalize the automobile industry when we already have laws that penalize antisocial behavior? By that logic, we do not need seatbelt or traffic-light laws at all.
Doesn’t sound so great, does it? In practice, this approach provides no clear, enforceable safety standards before harm occurs, and it strips away the ones that already exist. That, in essence, is what blanket preemption of state artificial intelligence laws would do.
Both Congress and the Trump administration are considering actions that would freeze state AI laws before any federal standards are written or adopted.
Over the summer, the Senate rejected this approach in a 99-1 vote, stripping out an amendment that would have shut down state AI protections without replacing them with consumer safeguards.
Last week, the same idea failed to garner enough support for inclusion in the National Defense Authorization Act. On Thursday, the president signed an executive order intended to selectively restrain state AI laws until Congress passes a legislative package.
The effort continues, dressed in the language of national competitiveness.
Preemption advocates argue that a patchwork of state laws will slow innovation, raise compliance costs, and weaken America’s ability to compete with China. There are legitimate concerns here.
They also point to “tech neutral” laws as a sufficient way to protect child safety, creator rights, and worker protections.
That argument is deeply flawed—historically, legally, and morally.
When laws cannot name a technology and address its specific risks, governance fails. General conduct laws and the tort system play an important role after harm occurs. But relying on lawsuits alone abandons government’s foundational ex ante responsibility: to prevent proven, foreseeable harm before it devastates families. Washington now stands on the edge of making this fundamental mistake with artificial intelligence.
This week’s “60 Minutes” episode exposed such dangers. Unregulated AI chatbots isolate children from their parents, steer them into sexually explicit conversations, and encourage self-harm, even suicide.
Voluntary company guardrails proved useless, the product of an environment in which platforms can deploy history’s most powerful technologies without clear, enforceable child-safety rules tailored to AI’s capabilities.
Despite these failures, preemption advocates insist tech neutrality alone is sufficient. But the opposite is true, for three important reasons.
First, blanket tech neutrality invites evasion and regulatory collapse.
When legislators are forbidden from naming technology, they write overly abstract policy that companies can easily exploit.
If America had frozen its Civil War-era legal framework during the electrification of the nation, we would not have built utility commissions, electrical safety codes, or the infrastructure that enabled a century of innovation. Mandating permanent tech neutrality for AI would force regulators and courts into a losing contest: either stretch old legal categories beyond their intended scope or surrender enforcement altogether.
Second, tech neutrality strips law of its moral leadership.
Law does more than punish; it teaches. Citizens look to their representatives to articulate a vision for how new technologies should serve human dignity, family life and community wellbeing. If lawmakers are forbidden to name AI and specify its risks, limits, and proper uses, the law cannot teach. A government that cannot speak clearly about the moral stakes of powerful technology cannot govern it.
Third, blanket preemption is democratically unresponsive.
Our system of self-governance assumes legislation is a continuous, iterative process. States serve as first responders to new risks and as laboratories for practical solutions long before Congress can act.
Preemption short-circuits this feedback loop. The idea that Washington can “wind up” a tech-neutral regime, remove the key for a decade, and expect society to flourish is naïve. The consent of the governed is not a one-time event; it is an ongoing relationship.
None of these reasons are arguments against national AI standards. The country needs them, and the president has called for them. But real national standards are not a gag order on the states paired with promises of future federal action.
If you want seatbelts, you must require seatbelts. No amount of generally applicable law will deliver the clarity, accountability and public trust required for healthy technological deployment. We do not need to wait for more children to be groomed or exploited by AI systems before acting. We also shouldn’t accept the false promise that tech neutrality alone can substitute for real, democratically accountable governance.