
These bills would regulate high-risk artificial intelligence use in Virginia

Virginia lawmakers will weigh legislation to shape policy on AI authentication, developer responsibilities


Emerging technologies, especially artificial intelligence, have generated public safety and privacy concerns that Virginia lawmakers could address within the coming weeks.

Del. Michelle Maldonado, D-Manassas, and Sen. Lashrecse Aird, D-Petersburg, are sponsoring legislation — backed by the Joint Commission on Technology and Science — to regulate the high-risk artificial intelligence systems increasingly used in critical infrastructure and in industries ranging from health care and law enforcement to education and employment.

“All types of entities are using artificial intelligence,” said Aird, adding that the proposed legislation aims to “prevent harm from being done to individuals in the most significant of situations.”

Artificial intelligence has been used to improve security, enhance health care, streamline communications and connect people. After a rare neurological disorder affected her ability to speak, former state senator and U.S. House representative Jennifer Wexton used an AI model of her voice to address her colleagues. It was the first time “a voice cloned by AI was used on the House floor,” NPR reported.

However, the technology has also been misused in politics. AI mimicked President Joe Biden’s voice in phone calls to New Hampshire voters, falsely suggesting that voting in the state’s presidential primary would preclude them from casting ballots in the November general election. Tenant screening algorithms have also kept some applicants from obtaining housing.

“I think it’s really critical for us to create guardrails and frameworks that are flexible and breathable, so that we don’t stifle innovation and creativity,” Maldonado said. “But we also (should) keep in mind what it means when we have this kind of technology that can take so much data and put it into training datasets for large language models and other things without us even knowing. We should understand how our data is being used.” 

Under Maldonado’s House Bill 2121, AI developers would be required to document their technology’s origin and history of development and make that information publicly accessible.

Maldonado said that, compared to last year’s legislation, HB 2121 reflects more input from civil society, including businesses and community organizations, and sets out a clear framework for the attorney general’s compliance and enforcement responsibilities. The legislation would also adopt definitions of AI technology consistent with those used across the country, and it would extend responsibilities to distributors and integrators of AI systems, rather than only developers and deployers.

Maldonado is also carrying HB 2094, which would create requirements for developing, deploying and using high-risk artificial intelligence systems, along with civil penalties for noncompliance. HB 2250 would allow consumers to authorize a company to opt out, on their behalf, of having their personal information used by others.

Aird’s Senate Bill 1214, which has not been filed yet, would create requirements for high-risk artificial intelligence systems used by public bodies such as local governments, colleges and universities, and regional boards. Virginia’s chief information officer would be responsible for creating policies and procedures, consistent with the bill’s requirements, for public bodies that employ high-risk artificial intelligence systems.

The CIO would also be responsible for creating a work group to examine the bill’s impact on local governments and their ability to comply with its requirements by July 1.

Virginia’s progress

Efforts by lawmakers and Gov. Glenn Youngkin’s administration have helped to position the commonwealth to address artificial intelligence.

Last year, lawmakers took up a couple dozen artificial intelligence bills and resolutions, including Aird’s SB 487, which directed state researchers to analyze the use of artificial intelligence by public bodies in the commonwealth, and the creation of a Commission on Artificial Intelligence.

The commission later accepted recommendations for lawmakers to codify the Virginia Information Technology Agency’s AI utilization policy, establish an advisory committee on AI, regulate AI use by private and public entities and strengthen data privacy regulations through an opt-in mechanism.

“We’re just hoping that we can also add Virginia to the list of states that are being responsible and making sure we are responding to this growing technology in a way that is thoughtful, intentional and offers clarity and transparency for how people will be affected,” Aird said.

This session, lawmakers will also take up a bill sponsored by Senate Majority Leader Scott Surovell, D-Fairfax County, that would require political ads to disclose the use of artificial intelligence.

In the House, Maldonado said she has spent the past few years working with her Virginia colleagues and lawmakers from other states, including Colorado and Connecticut, to craft legislation that stemmed from a multi-state policymaker working group aiming to “minimize patchwork legislation around the nation” while the country waits for federal policy action on artificial intelligence.

She said one of Virginia’s advantages is that there is a dedicated group, the Joint Commission on Technology and Science, to study emerging technologies.

Last year, the governor signed an executive order to establish standards and guidelines for the use of artificial intelligence. The directive also created a task force to support policymakers in developing guardrails for the responsible use of the technology. The task force is tasked with offering input to state agencies on using AI to enhance their operations while ensuring that the AI solutions they adopt are cost-effective. 

While the directive was helpful, Maldonado said it’s important for lawmakers to put laws into place.

“I’m looking for solutions that are sustainable and what makes it sustainable is putting it into legislation, which then tracks to regulatory requirements that don’t expire depending on who’s in office.” 

Maldonado said she is hopeful the legislature will pass her bill but acknowledged that “sometimes it takes a couple of times to bring something forward before it will be passed, particularly if people feel uncomfortable with something new and something very complex.” 

For those who may believe AI needs some regulation but that the country should let the private market figure it out, Maldonado warned that the same approach was taken with social media “and that is not the approach we should be taking here.”

Last year, the European Union adopted the EU Artificial Intelligence Act, the first significant regulatory measure aimed at ensuring AI is safe and trustworthy. The law regulates the development and use of AI across the bloc’s 27 member countries.

“We are at a moment again where we can decide whether we take action or we don’t, and my position and the position of many people is that we must — in the interest of our own nation, our own national security and the protection of data of ordinary citizens — have a position on this,” Maldonado said.


This article first appeared on Virginia Mercury and is republished here with permission. Virginia Mercury is part of States Newsroom, a network of news bureaus supported by grants and a coalition of donors as a 501(c)(3) public charity. Virginia Mercury maintains editorial independence.