The European Union has stepped up to introduce the world's first comprehensive AI regulation. Dubbed the Artificial Intelligence Act, the provisional agreement reached last week will open up AI models (and the companies behind them) to degrees of transparency they have thus far resisted. The new legislation may also serve as a roadmap for other countries like the US, UK, and Japan.
In the AI Act, the EU designates systems that pose "significant potential harm to health, safety, fundamental rights, environment, democracy and the rule of law" as "high risk" and mandates that EU citizens have the right to file complaints about and receive explanations of those systems. How that will work in practice remains unclear. Furthermore, as The Verge notes, "negotiators established obligations for 'high-impact' general-purpose AI (GPAI) systems that meet certain benchmarks, like risk assessments, adversarial testing, incident reports, and more. It also mandates transparency by those systems that include creating technical documents and 'detailed summaries about the content used for training'."
But not everyone is thrilled about reining in AI. French President Emmanuel Macron, head of the EU's second-largest economy, worried that the AI Act would stifle progress. "We can decide to regulate much faster and much stronger than our major competitors. But we will regulate things that we will no longer produce or invent. This is never a good idea," he told a French audience after the agreement was reached, reports the Financial Times. He then touted France's lead within the EU on AI development, but cautioned that competitors like Britain (no longer a part of the EU) "will not have this regulation on foundational models. But above all, we are all very far behind the Chinese and the Americans."
The American Code
While the US hasn't enacted any AI laws, it also hasn't sat idle on the issue. In late October, the Biden Administration announced an executive order on safe, secure, and trustworthy AI, building on its earlier Blueprint for an AI Bill of Rights. In essence, it's the administration's own roadmap for how it wants to see AI develop, with the biggest takeaway being that developers of any new model above a specific power and capability threshold will have to inform the federal government of their plans. "The reaction from the A.I. developers in general was mostly neutral to lightly positive," tech reporter Casey Newton [said on The Ezra Klein Show](https://www.nytimes.com/2023/12/01/opinion/ezra-klein-podcast-casey-newton-kevin-roose.html?). "There was not a lot of blowback. But at the same time, folks in civic society were also excited that the government did have a point of view here and had done its own work."
THE VERDICT:
Macron may be right that regulation could change the competitive dynamics between nations vying for AI dominance. That said, wholly unregulated AI doesn't sound like a much better alternative. Monitoring AI development and establishing guardrails now may be our last chance to do so before the technology surpasses what we can control.