At the Cato Institute, Jennifer Huddleston discusses the future regulation of artificial intelligence and what role differing laws among the states could play in the law’s evolution. She writes:
While Congress debates what, if any, actions are needed around artificial intelligence (AI), many states have passed or considered their own legislation. This did not start in 2024, but it certainly accelerated, with at least 40 states considering AI legislation. Such a trend is not unique to AI, but certain actions at a state level could be particularly disruptive to the development of this technology. In some cases, states could also show the many beneficial applications of the technology, well beyond popular services such as ChatGPT.
An Overview of AI Legislation at a State Level in 2024
As of August 2024, 31 states have passed some form of AI legislation. However, what AI legislation seeks to regulate varies widely among the states. For example, at least 22 have passed laws regulating the use of deepfake images, usually in the scope of sexual or election-related deepfakes, while 11 states have passed laws requiring that corporations disclose the use of AI or collection of data for AI model training in some contexts. States are also exploring how the government can use AI. Concerningly, Colorado has passed a significant regulatory regime for many aspects of AI, while California continues to consider such a regime.
Some states are pursuing a less regulatory approach to AI. For example, 22 states have passed laws creating some form of a task force or advisory council to study how state agencies can use (or regulate) AI. Others are focused on ensuring civil liberties are protected in the AI age, such as the 12 states that have passed laws restricting law enforcement from using facial recognition technology or other AI-assisted algorithms.
Of course, not all state legislation fits these models, and some states have focused on specific aspects, such as personal image more generally (for example, the Ensuring Likeness Voice and Image Security Act in Tennessee) or AI deployment in certain contexts.
State AI Regulations Focused on Election-Related Use
Many lawmakers are concerned about potential misinformation on social media or deepfakes of candidates that they feel will be amplified given the access to AI technology. As a result, some have sought to pass state laws regulating deepfakes depicting political candidates or using AI to generate or spread misinformation related to elections. There is some variance. For example, some states ban such acts altogether, while others only require a disclaimer of AI use. But for the most part, existing state law is fairly consistent about the harms that legislators are trying to address and would likely be sufficient to address harmful use cases.
The list of states that have passed such laws is long and includes both red and blue states. Alabama, Arizona, California, Florida, Idaho, Indiana, Michigan, Minnesota, Mississippi, New Mexico, New York, Oregon, Texas, Utah, Washington, and Wisconsin have all passed election-related AI legislation.
While these laws may be well-intentioned, aiming to ensure the public has reliable election information, they can have serious consequences, particularly for speech. Severely restricting the use of AI, even if only within the election context, risks violating our First Amendment rights. For example, even something that may appear simple—like a ban on the use of AI in election media—can have far-reaching consequences. Does this mean that a candidate can’t use an AI-generated image in the background of an ad? Would it be illegal for a junior staffer to use ChatGPT to help write a press release? Without proper guidance, even the simplest of state laws will be overbroad and impact the legitimate and beneficial use of this technology.
Furthermore, such laws may not even be necessary. There is no denying some of the dangerous things that AI has been used for, including potentially in an election context, but research suggests that the threat from AI may be overblown. AI might be able to generate content faster than a human, but that doesn’t mean it does a better job of creating fakes. Additionally, tech companies are very good at spotting and removing deepfakes.
Read more here.
If you’re willing to fight for Main Street America, click here to sign up for the Richardcyoung.com free weekly email.