The widespread push for AI “guardrails” fundamentally misunderstands how to regulate artificial intelligence safely and effectively, according to new research from scholars at the University of Pennsylvania and the University of Notre Dame.
Instead of fixed barriers that constrain innovation, policymakers should impose flexible “leashes” that allow AI to explore new domains while maintaining constant human oversight, the researchers argue in a paper published in Risk Analysis.
The metaphor matters more than it might seem. While guardrails suggest rigid, immovable barriers that keep vehicles on predetermined paths, leashes provide flexibility with accountability – much like walking a dog through a neighborhood where exploration is encouraged but a firm human grip on the handle ensures control when needed.
The Guardrail Problem
Cary Coglianese, director of the Penn Program on Regulation, and Colton R. Crum, a computer science doctoral candidate at the University of Notre Dame, explain that “management-based regulation (a flexible ‘leash’ strategy) will work better than a prescriptive guardrail approach, as AI is too heterogeneous and dynamic to operate within fixed lanes.”
The authors note that leashes “are flexible and adaptable – just as physical leashes used when walking a dog through a neighborhood allow for a range of movement and exploration,” and that they “permit AI tools to explore new domains without regulatory barriers getting in the way.”
Traditional guardrail approaches fail because AI technology defies the regulatory assumptions that work for other industries. Unlike chemical manufacturing or nuclear power plants, AI applications span an unprecedented range of uses – from social media algorithms to autonomous vehicles to precision medicine – each with a distinct risk profile that evolves rapidly.
Consider the scope of this challenge. Researchers at MIT have catalogued over 1,000 distinct risks associated with AI applications. The technology encompasses everything from narrow tools designed to detect skin cancer to general-purpose foundation models that can be adapted for countless tasks limited mainly by user imagination.
Management-Based Regulation in Practice
What would leash-based AI regulation actually look like? The researchers advocate for management-based regulation that requires companies to develop internal systems for identifying, monitoring, and responding to AI risks throughout the development lifecycle.
This approach would mandate that AI firms create comprehensive risk management plans following a “plan-do-check-act” cycle. Companies would need to document their AI systems’ purposes, identify potential failures and harms, implement protective procedures, and continuously update their approaches as new risks emerge.
Unlike static guardrails, these requirements would adapt to different AI applications and evolve with the technology. A company developing chatbots would face different management requirements than one creating autonomous vehicle systems, but both would need robust internal oversight processes.
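To make the plan-do-check-act idea concrete, here is a minimal sketch of how a firm might represent one turn of such a cycle in code. The class names, fields, and stubbed methods are purely illustrative assumptions – nothing here is prescribed by the paper or by any regulator, and a real risk-management system would be far richer.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    """One documented risk, e.g. 'chatbot gives harmful medical advice'."""
    description: str
    likelihood: str = "unknown"      # qualitative rating: low / medium / high
    severity: str = "unknown"
    mitigations: list[str] = field(default_factory=list)

@dataclass
class RiskManagementPlan:
    """Hypothetical record a firm might keep for one AI system."""
    system_purpose: str
    risks: list[Risk] = field(default_factory=list)
    last_reviewed: date = field(default_factory=date.today)

    def plan(self, new_risks: list[Risk]) -> None:
        """Plan: document newly identified risks and intended mitigations."""
        self.risks.extend(new_risks)

    def do(self) -> None:
        """Do: put documented mitigations into operation (stubbed as printing)."""
        for risk in self.risks:
            for step in risk.mitigations:
                print(f"Implementing: {step}")

    def check(self, incident_log: list[str]) -> list[str]:
        """Check: return observed incidents not covered by any documented risk."""
        known = {r.description for r in self.risks}
        return [incident for incident in incident_log if incident not in known]

    def act(self, uncovered: list[str]) -> None:
        """Act: fold uncovered incidents back into the plan and re-date the review."""
        self.plan([Risk(description=incident) for incident in uncovered])
        self.last_reviewed = date.today()

# One turn of the cycle for a hypothetical chatbot product.
plan = RiskManagementPlan(system_purpose="customer-support chatbot")
plan.plan([Risk("chatbot gives harmful medical advice",
                likelihood="medium", severity="high",
                mitigations=["filter medical queries", "human escalation path"])])
plan.do()
gaps = plan.check(incident_log=["chatbot leaks customer account data"])
plan.act(gaps)   # the newly observed incident becomes a documented risk to mitigate
```

The point of the sketch is the loop, not the particular fields: whatever internal system a firm builds, it is the “check” and “act” steps that keep the leash taut as new risks surface.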
Real-World Applications
The researchers examine three categories of AI risks to illustrate their approach:
- Autonomous vehicle collisions: Current accidents highlight the need for continuous human oversight rather than assuming AI systems can operate safely without intervention
- Social media and suicide risks: Algorithm-driven content recommendations require ongoing monitoring and user feedback systems to identify harmful patterns
- Bias and discrimination: AI classification systems need continuous testing and adjustment rather than one-time fixes
For autonomous vehicles, management-based regulation might require companies to establish rigorous testing protocols, maintain detailed incident reporting systems, and ensure human operators receive proper training. The 2018 Arizona pedestrian death involving an Uber self-driving car, in which both the AI and the human safety monitor failed, exemplifies why continuous oversight matters more than preset rules.
Policy Precedents Emerging
Despite widespread calls for “guardrails,” early regulatory efforts already embrace management-based approaches. The European Union’s AI Act requires comprehensive risk management systems for high-risk AI applications, emphasizing a “continuous iterative process” throughout an AI tool’s lifecycle rather than fixed compliance standards.
Similarly, the Biden administration’s Executive Order 14,110 (though now rescinded) called for management standards including ongoing monitoring and periodic human review. These precedents suggest policymakers intuitively understand that AI requires adaptive oversight, even when they use guardrail language.
The management-based model has proven effective in other contexts. States that adopted pollution prevention planning laws saw reduced toxic air emissions, while food safety planning requirements decreased foodborne illnesses. These successes demonstrate that requiring companies to develop and implement their own risk management systems can achieve better outcomes than prescriptive rules.
Technical Complexity Demands Flexibility
The research also highlights how AI’s technical architecture makes traditional regulation particularly problematic. Neural networks consist of millions or even billions of interconnected artificial neurons that process information through complex, often opaque pathways.
Small changes in network configuration, training data, or algorithmic parameters can produce dramatically different outcomes. This sensitivity means that prescriptive rules developed for one AI system may prove irrelevant or counterproductive for another, even if both systems serve similar functions.
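A toy experiment can illustrate this sensitivity. The sketch below is an illustrative assumption, not an experiment from the paper: it trains two tiny neural networks that are identical in every respect except their random initial weights, then reports how often they disagree on held-out points. The exact disagreement rate varies from run to run; the point is only that two “identical” systems need not behave identically.

```python
import numpy as np

def make_rings(n: int, rng: np.random.Generator) -> tuple[np.ndarray, np.ndarray]:
    """Two noisy concentric rings: a simple task with a nonlinear boundary."""
    radius = np.where(rng.random(n) < 0.5, 1.0, 2.0)
    angle = rng.random(n) * 2 * np.pi
    X = np.c_[radius * np.cos(angle), radius * np.sin(angle)]
    X += rng.normal(0.0, 0.35, X.shape)
    y = (radius > 1.5).astype(float)
    return X, y

def train_tiny_mlp(X, y, seed: int, hidden: int = 4, steps: int = 300, lr: float = 0.5):
    """Train a one-hidden-layer network; only `seed` differs between runs."""
    rng = np.random.default_rng(seed)
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = np.tanh(X @ W1 + b1)                      # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # predicted probabilities
        g_out = (p - y[:, None]) / len(X)             # cross-entropy gradient at output
        g_h = (g_out @ W2.T) * (1.0 - h ** 2)         # backprop through tanh
        W2 -= lr * h.T @ g_out;  b2 -= lr * g_out.sum(axis=0)
        W1 -= lr * X.T @ g_h;    b1 -= lr * g_h.sum(axis=0)
    def predict(Z):
        return (1.0 / (1.0 + np.exp(-(np.tanh(Z @ W1 + b1) @ W2 + b2))) > 0.5).ravel()
    return predict

data_rng = np.random.default_rng(0)
X_train, y_train = make_rings(400, data_rng)
X_test, _ = make_rings(400, data_rng)

model_a = train_tiny_mlp(X_train, y_train, seed=1)
model_b = train_tiny_mlp(X_train, y_train, seed=2)   # identical except for initial weights
disagree = np.mean(model_a(X_test) != model_b(X_test))
print(f"Test points where the two models disagree: {disagree:.1%}")
```

The same kind of run-to-run sensitivity, at much larger scale, is part of why a prescriptive rule tuned to one trained system may not carry over cleanly to another.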
The researchers argue that this technical reality mirrors AI’s fundamental strength: the ability to discover novel solutions that humans might not anticipate. Guardrails would essentially force AI development back toward rigid, rule-bound programming, undermining the technology’s core advantages.
Human Oversight Challenges
The leash metaphor highlights a critical requirement often overlooked in AI safety discussions: the need for competent humans at the other end. Just as physical leashes only protect others when someone maintains a firm grip, AI management systems require ongoing human vigilance and expertise.
This creates new challenges for regulatory implementation. Companies must not only develop sophisticated risk management systems but also ensure their staff can operate them effectively. The research suggests this might require mandatory training programs similar to those commercial pilots must complete when cockpit technology changes.
How can regulators ensure companies don’t simply go through the motions of compliance? The authors emphasize that management-based regulation requires robust auditing mechanisms, whether by government regulators or qualified third parties, to prevent what they term “pro forma” compliance where companies produce required documentation without meaningful risk reduction.
The stakes of getting AI regulation right continue to escalate as the technology becomes more powerful and pervasive. The researchers conclude that “AI risks can be more appropriately assessed—and societies can better safeguard themselves from these risks—through attentive reliance on regulatory leashes, not by hoping that some kind of rigid guardrails can be established.”