Why AI Regulation Must Focus on Outcomes, Not Capabilities

Regulating AI like nuclear weapons misunderstands both the technology and the risk. We need a smarter approach.

The AI regulation debate has settled into a predictable pattern. One camp warns of existential risk and calls for strict capability limits. The other dismisses concerns and demands unfettered development. Both sides are wrong.

The productive middle ground isn’t compromise—it’s a fundamentally different approach. We need to regulate AI outcomes, not capabilities.

The Capability Trap

Current regulatory proposals focus heavily on capability thresholds. The EU’s AI Act attaches its heaviest obligations for general-purpose models to those trained above a compute threshold. Proposed US frameworks would require safety evaluations for models above certain size cutoffs, whether measured in parameters or training compute. Even voluntary commitments from AI companies emphasize testing model capabilities before deployment.

This seems intuitive. Powerful capabilities create risks, so regulate powerful systems. But this logic has three fatal flaws.

First, capability is a terrible proxy for harm. A highly capable AI system used to accelerate drug discovery creates enormous positive value. The same system, pointed at different objectives and data, could be misused, for instance to search for toxic compounds rather than therapeutic ones. The capability itself isn’t the problem; the application is.

Second, capability thresholds are arbitrary and rapidly obsolete. What parameter count indicates “high risk”? Proposals have floated figures like 10 billion or 100 billion parameters, and newer rules draw the line at training-compute budgets such as 10²⁵ FLOPs, but none of these numbers has a principled scientific basis. GPT-2 was considered too dangerous to release in full in 2019. By 2023, it was running on smartphones. The capability frontier moves faster than regulatory processes.

Third, focusing on capabilities creates perverse incentives. If large models face onerous requirements, developers will optimize for staying just below thresholds—creating systems that are less capable but no safer. Or they’ll use techniques like model ensembling and specialized architectures to achieve similar capabilities through different means. The regulation drives technological decisions without improving safety.
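
To make the threshold-gaming problem concrete, here is a minimal Python sketch. The 10-billion-parameter cutoff, the SubThresholdModel class, and the routing logic are all hypothetical, invented for illustration; the point is only that a per-model size check says nothing about what a system built from several compliant models can do together.

```python
# Illustrative only: a per-model parameter threshold is easy to satisfy
# while still shipping a more capable combined system.

PARAM_THRESHOLD = 10_000_000_000  # hypothetical "high risk" cutoff


class SubThresholdModel:
    """Stand-in for any model deliberately sized just under the cutoff."""

    def __init__(self, name: str, num_parameters: int):
        self.name = name
        self.num_parameters = num_parameters

    def predict(self, prompt: str) -> str:
        # Placeholder for a real model call.
        return f"[{self.name} output for: {prompt}]"


def requires_safety_evaluation(model: SubThresholdModel) -> bool:
    """Capability-style rule: flag any single model at or above the threshold."""
    return model.num_parameters >= PARAM_THRESHOLD


# Two specialists, each comfortably below the line...
specialists = [
    SubThresholdModel("reasoning-9B", 9_000_000_000),
    SubThresholdModel("retrieval-8B", 8_000_000_000),
]

# ...so the capability-based rule never triggers...
assert not any(requires_safety_evaluation(m) for m in specialists)


# ...even though the deployed product routes across both models and behaves,
# for users, like a single larger system.
def ensemble_predict(prompt: str) -> list[str]:
    return [m.predict(prompt) for m in specialists]


print(ensemble_predict("example prompt"))
```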

What Outcome-Based Regulation Looks Like

An outcome-focused framework asks different questions. Not “How capable is this system?” but “What harm could this deployment cause, and who bears the cost?”

This shifts attention to where it matters: the actual use cases and their societal impacts.

Consider facial recognition technology. The capability itself—matching faces to databases—is neutral. But the outcomes depend entirely on context:

  • Airport security facial recognition that misidentifies travelers causes inconvenience and delays.
  • Law enforcement facial recognition that misidentifies suspects can destroy lives through wrongful arrest and prosecution.
  • Surveillance facial recognition deployed by authoritarian regimes enables political repression.

Capability-based regulation would treat all facial recognition similarly. Outcome-based regulation would apply vastly different requirements based on deployment context and potential harms.
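
As a rough illustration of how requirements might scale with deployment context, here is a hedged Python sketch. The contexts, risk tiers, and obligations below are invented for the example and are not drawn from any existing statute.

```python
# Hypothetical mapping from deployment context to regulatory obligations.
# Tiers and duties are illustrative, not taken from any real law.

DEPLOYMENT_REQUIREMENTS = {
    "airport_security": {
        "risk_tier": "moderate",
        "obligations": ["accuracy audit", "human review of mismatches"],
    },
    "law_enforcement_identification": {
        "risk_tier": "high",
        "obligations": [
            "independent accuracy audit",
            "human confirmation before any arrest",
            "public impact assessment",
            "redress process for misidentified persons",
        ],
    },
    "mass_surveillance": {
        "risk_tier": "prohibited",
        "obligations": ["deployment banned"],
    },
}


def obligations_for(context: str) -> dict:
    """Look up obligations for a deployment context, defaulting to review."""
    return DEPLOYMENT_REQUIREMENTS.get(
        context,
        {"risk_tier": "unclassified", "obligations": ["case-by-case review"]},
    )


print(obligations_for("law_enforcement_identification")["obligations"])
```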

Concrete examples of outcome-focused regulation:

Liability frameworks. Make deployers legally liable for harms caused by AI systems they operate. This creates market incentives for safe deployment without prescribing technical approaches. A company using AI for loan decisions would face liability for discriminatory outcomes, incentivizing them to rigorously test and monitor for bias.
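
To give a flavor of what “rigorously test and monitor for bias” might involve, here is a minimal Python sketch of a disparate-impact check based on the four-fifths rule; the toy decision log, group labels, and 0.8 cutoff are assumptions for illustration, and a real fairness audit would use a much broader battery of metrics.

```python
from collections import defaultdict

# Toy loan-decision log of (applicant_group, approved) pairs. Illustrative data only.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]


def approval_rates(records):
    """Approval rate per group from (group, approved) records."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += ok
    return {g: approved[g] / totals[g] for g in totals}


def disparate_impact_flags(rates, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times the
    highest group's rate (the 'four-fifths rule' from US employment law)."""
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}


rates = approval_rates(decisions)
print(rates)                          # {'group_a': 0.75, 'group_b': 0.25}
print(disparate_impact_flags(rates))  # group_b flagged for review
```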

Mandatory impact assessments. Require organizations deploying high-stakes AI systems to publicly document potential harms, affected populations, and mitigation strategies. This doesn’t restrict capability development—it ensures deployment decisions are transparent and considered.
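
One lightweight way to picture such an assessment is as a structured, publishable record. The fields in this Python sketch are assumptions about what a reasonable template might contain; they are not drawn from any existing regulatory form.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ImpactAssessment:
    """Hypothetical structure for a public AI deployment impact assessment."""
    system_name: str
    deployer: str
    intended_use: str
    affected_populations: list[str]
    potential_harms: list[str]
    mitigations: list[str]
    redress_contact: str
    review_cycle_months: int = 12


assessment = ImpactAssessment(
    system_name="credit-screening-v2",
    deployer="Example Bank",
    intended_use="Pre-screening consumer loan applications",
    affected_populations=["loan applicants", "co-signers"],
    potential_harms=["discriminatory denial rates", "opaque rejections"],
    mitigations=["quarterly bias audits", "human review of all denials"],
    redress_contact="ai-appeals@example.com",
)

# Publish as machine-readable JSON alongside a plain-language summary.
print(json.dumps(asdict(assessment), indent=2))
```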

Outcome-specific bans. Prohibit specific harmful applications, not general-purpose capabilities. Ban AI-generated deepfakes used for fraud or harassment, not the generative models themselves. Ban autonomous weapons systems that make kill decisions without human oversight, not the computer vision and reinforcement learning technologies they might use.

Redress mechanisms. Establish clear pathways for people harmed by AI systems to seek compensation and correction. If an AI system denies your loan application, you should have the right to understand why and challenge errors. If it misidentifies you in a criminal investigation, you need effective recourse.
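
Here is a small sketch of what “the right to understand why” could look like in code: turning the largest negative contributions of a simple scoring model into plain-language reason codes. The feature names, weights, and wording are invented for illustration, and real adverse-action notices carry legal requirements well beyond this.

```python
# Hypothetical linear credit-scoring model; weights and reason text are invented.
WEIGHTS = {
    "credit_utilization": -2.0,   # higher utilization lowers the score
    "missed_payments": -3.5,
    "income_to_debt": 1.5,
    "account_age_years": 0.8,
}

REASON_TEXT = {
    "credit_utilization": "High credit utilization relative to limits",
    "missed_payments": "Recent missed or late payments",
    "income_to_debt": "Low income relative to existing debt",
    "account_age_years": "Short credit history",
}


def top_denial_reasons(applicant: dict, n: int = 2) -> list[str]:
    """Return plain-language reasons for the features that hurt the score most."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    worst = sorted(contributions, key=contributions.get)[:n]
    return [REASON_TEXT[f] for f in worst]


applicant = {
    "credit_utilization": 0.9,
    "missed_payments": 2,
    "income_to_debt": 0.4,
    "account_age_years": 1.0,
}
print(top_denial_reasons(applicant))
# ['Recent missed or late payments', 'High credit utilization relative to limits']
```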

Why This Approach Works Better

Outcome regulation has several advantages over capability regulation:

It’s technology-agnostic. The rules don’t need updating every time a new model architecture emerges. They apply equally to current systems and future innovations we can’t yet imagine.

It aligns with how we regulate other technologies. We don’t regulate cars by horsepower limits. We regulate outcomes: speed limits, safety testing requirements, liability for crashes. The same principle should apply to AI.

It preserves innovation in beneficial applications. Researchers developing AI for climate modeling, drug discovery, or education aren’t burdened by requirements designed for high-risk deployments. The regulatory overhead scales with actual risk.

It creates better incentives. Companies have market incentives to develop more capable systems. Outcome regulation adds incentives to develop safer systems and deploy them responsibly. Capability regulation just penalizes capability.

The Hard Questions

Outcome-based regulation isn’t simple. It requires answering difficult questions:

Who defines harmful outcomes? There’s broad consensus that discriminatory loan decisions are harmful, but what about AI systems that optimize engagement on social platforms? Some see this as valuable personalization. Others see it as addictive manipulation. These debates are necessary, but they’re social and political choices, not technical ones.

How do we handle emergent harms? Some AI risks only become apparent at scale. Early testing might not reveal problems that emerge when millions of users interact with a system. Outcome regulation needs mechanisms for rapid response when unexpected harms surface.

What about national security concerns? Some AI capabilities—particularly in the autonomous weapons domain—raise special concerns about international competition and security. These genuinely might require capability-level restrictions, but they’re the exception, not the model for general AI regulation.

These questions don’t have simple answers. But they’re the right questions to be debating.

The Path Forward

We’re at a crucial juncture. The regulatory frameworks established in the next few years will shape AI development for decades.

Getting this wrong has high costs. Overly restrictive capability-based regulation could push innovation offshore, concentrate power in the hands of incumbents large enough to navigate complex requirements, and limit beneficial applications while doing little to prevent harms.

Getting it right requires moving beyond the current debate’s false dichotomy. We don’t have to choose between unfettered development and technology-stifling restrictions.

Outcome-based regulation offers a better path: maximize beneficial innovation while creating accountability for harms. Focus on what AI systems do to people and society, not on how many parameters they have.

This isn’t easier than capability regulation. It requires ongoing vigilance, clear liability frameworks, and willingness to update rules as we learn. But it’s regulation grounded in reality rather than arbitrary technical thresholds.

The AI systems emerging in 2026 won’t be the last transformative ones we see. The regulatory approach we establish now needs to work not just for today’s models, but for innovations we can’t yet imagine.

Outcome-focused regulation is the only framework flexible and robust enough to meet that challenge.