How do you solve a problem like AI?
By taking matters into their own hands, for one. That's what several states, including California and Colorado, are doing as federal regulation of the technology has been slow to nonexistent.
“As California has seen with privacy, the federal government isn’t going to act, so we feel that it is critical that we step up in California and protect our own citizens,” Rebecca Bauer-Kahan, a state assembly member, told the New York Times. California is advancing some 50 AI bills that would curb discrimination by AI, copyright infringement and more.
Meanwhile, in Colorado, a law concerning “Consumer Protections in Interactions with Artificial Intelligence” was adopted last month. A similar law was proposed in Connecticut but ultimately failed. And Utah passed the “Artificial Intelligence Amendments,” which impose “certain disclosure obligations where a person ‘uses, prompts, or otherwise causes generative artificial intelligence to interact with a person,’” a white paper by Perkins Coie notes.
As the New York Times further reports, there are some 400 bills to regulate AI advancing through state legislatures. This flurry of activity suggests that reining in AI is not only a pressing issue but also one that enjoys broad support among Americans. So where has Congress been? Apparently far behind: “It’s very hard to do regulations because A.I. is changing too quickly,” Senate Majority Leader Chuck Schumer said. “We didn’t want to rush this.”
"Overall, AI regulations at the state level remain unsettled, with some states pushing forward with regulatory regimes tailored toward their specific concerns," PerkinsCoie explains. "In this context, Colorado's approach to AI regulation stands out as the most comprehensive and risk-based effort in the United States, resembling the E.U. AI Act."
The EU AI Act
While the United States has failed to reach a unified federal policy on AI, the White House did issue an Executive Order in October 2023 aimed at establishing standards for security and protections for privacy and civil rights.
However, this vacuum in American leadership has led the European Union to step in with the EU AI Act. The act's most influential rules govern how transparent so-called "high-risk" AI systems must be. Furthermore, the law "restricts governments' use of real-time biometric surveillance in public spaces to cases of certain crimes," and bans "the use of artificial intelligence in social scoring, predictive policing and untargeted scraping of facial images from the internet," writes Reuters.
The Verdict
It's clear that failing to regulate AI now, even if that just means basic guardrails, risks leaving action until it is too late. The technology is fast-moving and could easily evolve beyond our ability to contain it. As such, individual states may prove the necessary stopgap (or catalyst) for getting some guidelines in place now to govern this technology's development.