Freedom First: The Constitutional Case Against AI Overregulation

Before we build the regulatory architecture for artificial intelligence, we had better ask a question every first-year law student learns to ask: who does this framework actually serve?

Marc Friedman
PUBLISHED IN Regulation - 7 MINS - Apr 13, 2026

Artificial intelligence is not a coming disruption. It is a present-tense legal and civic reality, already embedded in medical diagnostics, financial underwriting, criminal risk assessment, and content moderation systems that determine which voices get amplified and which get suppressed.

Courts are beginning to grapple with it. Legislatures are scrambling. And in the scramble, the instinct is always the same: regulate first, understand later.

As someone who has spent nearly five decades watching government agencies expand their authority well beyond their statutory mandates, I want to offer a different counsel: slow down, ask who benefits, and follow the power.

In my experience, the call for urgent regulatory action almost always benefits the entity large enough to absorb compliance costs and sophisticated enough to help write the rules.

The concerns driving the push for AI regulation are not fabricated. Systems do produce biased outcomes. They can enable mass surveillance. Data can be weaponized for fraud and manipulation at scale.

These are legally cognizable harms, and I take them seriously. But the existence of a harm does not automatically justify any particular remedy, especially one that concentrates authority in government agencies or establishes barriers to entry that entrench the very corporations we should be scrutinizing.

Regulatory Capture Is Not a Theory. It's a Pattern

Any attorney who has spent time in administrative law knows how this works. A new technology emerges. Public concern grows. Congress or a regulatory agency moves to establish oversight. Industry lobbyists flood the comment period.

The final rules reflect, in no small part, the preferences of the largest players in the market: the ones with the legal teams, government affairs offices, and prior relationships with regulatory agency staff. The rules become a moat. Small competitors cannot clear them. New entrants are priced out before they begin.

A liberty-grounded approach to AI governance starts not with the question of what regulations to write, but with what the law already provides. Fraud statutes apply to AI-generated deception. Consumer protection law reaches manipulative algorithmic systems. Civil rights law prohibits discriminatory outcomes regardless of whether a human or a machine produced them.

The legal toolkits exist. The question becomes whether we have the will to use them precisely, rather than reaching for blunt new authority as a substitute for rigorous enforcement of what we already have.

The First Amendment Problem Nobody Is Talking About

There is a dimension of AI regulation that receives far too little attention in policy circles, and that is the First Amendment. AI systems generate speech. They curate conversations. They can make editorial decisions at a scale no human editor ever could.

Regulatory frameworks that require government pre-approval of AI outputs, or that mandate specific content standards enforced by federal agencies, raise serious constitutional questions that proponents of aggressive oversight have been remarkably reluctant to engage.

I am not arguing that the First Amendment forecloses all AI regulation. It does not. But any honest legal analysis has to account for the fact that speech-related restrictions face heightened scrutiny, and that the government's track record of defining "harmful content" in ways that serve official interests rather than public ones is not reassuring.

The constitutional framework was designed precisely for moments like this: when the temptation to trade liberty for the promise of safety is at its most persuasive.

What Targeted Accountability Actually Looks Like

The legally sound approach to AI harm is the same approach that has always worked best in a system grounded in individual rights: identify the specific harm and the responsible party, apply the applicable law, and impose proportionate consequences.

If an AI hiring tool produces discriminatory outcomes, existing employment discrimination law reaches that. If an AI system is used to defraud consumers, fraud law applies. If a deepfake is used to defame someone, defamation law provides a cause of action.

What the law does not require, and what self-governance principles affirmatively resist, is a prior restraint model in which development itself must be licensed, approved, or pre-certified by a government body before it can reach the public. That approach substitutes bureaucratic judgment for market accountability and treats innovation as inherently suspect until proven otherwise.

Decentralization as a Legal and Civic Principle

Lawyers who care about liberty have always understood that concentrated power is itself a form of legal risk. The constitutional structure of the United States was built on that premise: distribute authority, create competing interests, prevent any single actor from achieving dominance over the mechanisms of civic life.

An AI landscape dominated by two or three vertically integrated platforms, operating under a regulatory framework they helped draft, is not a safe AI landscape. It is a captured one. A diverse field of developers, open foundational tools, and the ability of independent researchers to audit and challenge dominant systems are not merely nice features of a competitive market. They are structural safeguards against the kind of concentrated power that historically precedes serious abuse.

Education and the Informed Citizen Standard

There is a legal concept worth borrowing for this conversation: the reasonably informed person standard. Courts use variations of it across contract law, tort law, and constitutional doctrine because the law has always understood that the capacity to exercise rights meaningfully depends on the capacity to understand one's situation clearly.

Applied to AI, this means that digital literacy is not a soft priority. It is a prerequisite for meaningful self-governance in a world where algorithmic systems increasingly mediate economic opportunity, civic participation, and access to information.

People who understand how these systems work are harder to manipulate, harder to exploit, and better equipped to hold developers and deployers legally and politically accountable. No regulatory framework, however comprehensive, produces that outcome. Education does.

An informed citizenry is not a substitute for good law. But it is the precondition for good law actually working as intended.

The Case for Principled Restraint

I want to be precise about what I am and am not arguing. I am not arguing for a lawless AI ecosystem. The law exists for good reasons, and its application to new technologies is both appropriate and necessary. What I am arguing against is the reflexive expansion of regulatory authority in the absence of clear evidence that such authority will be used narrowly, accountably, and in ways that genuinely protect individual rights rather than institutional interests.

The burden of proof in this analysis runs in a specific direction. Those who seek to restrict freedom, even in the name of safety, bear the burden of demonstrating that the restriction is necessary, narrowly tailored, and unlikely to produce the concentrations of power it purports to prevent. That is a high bar. It should be a high bar. The stakes are high enough to require it.

Artificial intelligence will shape the conditions of human freedom for generations. The legal and policy choices made in the next several years will establish precedents that are genuinely difficult to reverse.

This is not an argument for paralysis. It is an argument for precision, for humility about the limits of centralized knowledge, and for a principled commitment to the proposition that liberty is not the obstacle to getting AI right.

This is the only reliable foundation for getting it right at all.