

European Union AI Regulation: A Blueprint and Cautionary Tale for U.S. Lawmakers

Members of the group Initiative Urheberrecht (authors' rights initiative) demonstrate to demand regulation of artificial intelligence on June 16, 2023 in Berlin, Germany. The AI regulation later adopted by the European Union is a model for many U.S. lawmakers interested in consumer protection but a cautionary tale for others who say they're interested in robust innovation, experts say. (Photo by Sean Gallup/Getty Images)

The European Union’s AI Act, adopted last year, serves as a reference point for U.S. lawmakers considering consumer protections in artificial intelligence. While some view the legislation as a model, others argue it exemplifies the pitfalls of excessive regulation, which they say could cripple competitiveness in the digital market.

During a congressional subcommittee hearing on May 21, Sean Heather, senior vice president for international regulatory affairs at the Chamber of Commerce, emphasized the need to avoid the fragmented patchwork of AI laws that currently characterizes U.S. regulation. He noted that the EU’s approach aims to unify standards, in contrast to the diverse regulatory landscape across American states.

Adam Thierer, a senior fellow at the R Street Institute, warned that American innovators could find themselves caught between stringent EU regulations and burdensome local mandates. He invoked the “Brussels Effect,” the tendency of EU rules to become de facto global standards, suggesting that it could erode U.S. leadership in the global AI arena.

The EU’s comprehensive AI Act shifts responsibility to developers, mandating risk mitigation practices, technical documentation, and training summaries for review. Thierer cautioned that adopting similar policies in the U.S. might jeopardize its position as a leader in AI innovation.

Despite the EU’s regulatory efforts, few countries are mirroring the AI Act’s stringent model. While Canada, Brazil, and Peru are working toward similar legislation, nations like the UK, Australia, and Japan prefer a less restrictive regulatory environment.

Jeff Le, a tech policy consultant, pointed out that American lawmakers express a desire to create rules independent of foreign regulations. “American constituents should be governed by American laws,” he said, underscoring the complexities involved in adopting outside standards.

Critics argue that the EU’s AI Act could stifle innovation due to its broadly written language, hampering the swift development of AI systems. Despite France and Germany ranking among the top ten global AI leaders, the U.S. maintains a substantial lead in the number of advanced AI models and research.

Peter Salib, a professor from the University of Houston Law Center, indicated that while the AI Act is a factor, other components of Europe’s regulatory landscape also contribute to its technological challenges. Factors like the General Data Protection Regulation add to the regulatory burden, prioritizing privacy but potentially hindering innovation.

Salib noted a long-standing trend in Europe that emphasizes privacy and transparency. However, he questioned whether this focus might come at the expense of technological advancement.

Stavros Gadinis, a professor at the Berkeley Center for Law and Business, cited underlying issues in Europe’s tech ecosystem, such as a less robust labor market and limited financing options, as more significant barriers to competitiveness than regulation itself.

During the hearing, Rep. Lori Trahan challenged the notion that regulation stifles tech growth, suggesting it presents a “false choice.” She highlighted America’s strengths, including substantial investments in innovation and more founder-friendly policies compared to the EU.

The EU’s law places considerable obligations on AI developers, requiring transparency, reporting, and rigorous testing. U.S. firms, many of which already engage in self-governance, might find standardized regulation beneficial, though aligning their differing safety protocols poses challenges.

Even in the absence of federal standards, some tech leaders, including Sam Altman of OpenAI, have shown a commitment to safety protocols. The independent Safety Committee formed by OpenAI recently released recommendations aimed at enhancing operational transparency and safeguards.

Despite changes in Altman’s stance on federal regulation, the focus remains on harnessing AI for societal benefit. Other leading firms, such as Anthropic and Google, similarly pursue responsible AI development frameworks.

Analysts suggest that while existing U.S. state laws target harmful AI practices effectively, a comprehensive federal law like the EU’s could be overly burdensome. Gadinis and Salib see limited enthusiasm in Congress for a consolidated regulatory framework, citing the state-specific, consumer-oriented nature of current laws.

As the U.S. navigates its regulatory landscape, the influence of the EU’s approach may shape future collaborations between the public and private sectors. The trend may not yield sweeping federal regulations, but increased industry self-regulation could emerge in response to public demand.