Connecticut Lawmakers Take on AI Regulation in New Bill 


HARTFORD — Health care, porn and prejudice are key to a bill in the state legislature that aims to regulate how artificial intelligence can be used in Connecticut.

State Sen. James Maroney, D-Milford, the main author of the bill, said earlier this month that while he viewed AI as a “net positive” technology capable of eliminating repetitive tasks, “there’s not zero risk, and that’s something that we’ve seen.”

Several key elements of the legislation, such as criminalizing nonconsensual AI-generated sexual content and “deep fakes,” devising training programs for AI utilization, and mandating corporate measures to prevent discrimination, came from the suggestions of a task force created last year.

Vahid Behzadan, an assistant professor of data science at the University of New Haven and a member of the state’s Artificial Intelligence Task Force, told CT Examiner on Thursday that the term “artificial intelligence” has been around since the 1950s, but new programs like ChatGPT have made the technology widely available to the public for the first time. 

“There are many interesting, many powerful instances of AI models that were only available and understandable and usable by AI engineers and AI researchers for many years,” he said. “What OpenAI did was, they essentially made one of such technologies generally and publicly available, and that allowed the general public to experiment with and experience the power of where we are now in AI.” 

Behzadan said the public interest in AI has caused the field to advance rapidly.

“It’s going to become more and more ubiquitous, and it’s going to be a prevailing technology. It doesn’t seem to be a hype that’s going to die down anytime soon,” he said. “The investments are growing and it’s for a very simple reason — it’s because AI is the problem of solving problems.” 

The bill would also create a Connecticut Citizen’s AI Academy at Charter Oak State College, aimed at providing workforce training in AI applications. Additionally, it mandates the Board of Regents to develop specific certificate programs related to artificial intelligence and calls for the integration of AI into the state’s workforce training initiatives.

Ron Harichandran, dean of the Tagliatela College of Engineering at the University of New Haven, told CT Examiner on Thursday that workers should be trained on how to use AI to improve their work performance, but warned about the security risks that come with the technology as well. 

“In today’s world, it’s a huge disruptor,” he said. “There are tasks that are going to be more and more done with AI help and, therefore, need less and less people to do it.”

Over the next 10 years, Maroney noted, AI is projected to eliminate 85 million jobs and create 97 million jobs. 

“[AI] will open up opportunities for solving problems faster, increasing efficiency, improving the economy, because of the speed of evolution. So those are all good things that are eventually going to come out of AI,” Harichandran said. “But just like any technology, it will also be used for bad things. Warfare is going to have AI as a piece of it, and crime is going to have AI as a piece of it. So all of those aspects that we don’t like are also going to become more efficient and improved.” 

Harichandran said training workers in AI would help promote equity across socioeconomic lines.

“The wealthier segments of the population may be in a better position to quickly adopt these tools and, therefore, get ahead in the workplace, compared to the ones who are not as fortunate,” said Harichandran. 

Maroney also referenced a McKinsey report published in December that found AI had the potential to increase the wealth gap between Black and white households by $43 billion annually.

“An important piece of this bill is to make sure that we’re training everyone and that every citizen in Connecticut has the opportunity to benefit from AI and what it’ll bring,” Maroney said. 

Harichandran noted that AI could promote inequities through algorithms that include implicit biases. 

Last year, the U.S. Commission on Civil Rights released a report outlining examples of how AI could be a source of bias on issues ranging from child welfare placements and housing to criminal court sentencing, prompting discussions in Connecticut about AI-generated or propagated biases. 

In response, Maroney’s bill mandates that companies conduct “impact assessments” for each new iteration of AI, explicitly outlining potential algorithmic discrimination risks. Additionally, organizations using AI for decision-making are required to disclose those details to the affected individuals. The proposal also requires the testing of all “generative AI” — artificial intelligence that produces content — before it is publicly released.

But Harichandran acknowledged that there needed to be a balance in regulations. 

“We don’t want to prevent innovation and the adoption of these tools. At the same time, we want to safeguard citizens and organizations from the judgment effects that may occur as these things are more widely used,” he said. 

Healthcare groups, in particular, had varying opinions on how AI regulations in Connecticut could affect hospitals and patients. 

Dr. Barry Stein, the chief clinical innovation officer at Hartford Healthcare, said the company was supportive of the legislation. In written testimony, Stein said Hartford Healthcare was already using AI to predict the outcomes of certain surgeries, as an early warning index for patients and in predicting the length of hospital stays. 

“We’ve recognized that AI has the potential to fuel massive transformation in health care, improving access, affordability, equity, as well as quality and safety for our patients, and it’s been important for us to do this in a very trustworthy and responsible way,” Stein told legislators on Thursday. 

But the Connecticut Hospital Association opposed the bill and asked that healthcare organizations be exempt. In testimony, the hospital group acknowledged that it uses “predictive technologies” for managing and refilling medications and for scheduling, but insisted that the bill was “unworkable” for health care.

State Sen. Tony Hwang, R-Fairfield, praised the development of the bill, but warned against making the attorney general the single arbiter and preventing individuals from privately suing companies. 

“To put all of that power to making that determination to one office is something that I’m extremely concerned about,” Hwang said. 

Labor union representatives agreed with Hwang. 

Connecticut AFL-CIO President Ed Hawthorne said the bill gives too much power to developers because it assumes that technology companies have done their due diligence to protect individuals from harm. He asked that a workers’ representative be placed on the proposed Artificial Intelligence Advisory Council.

The bill also criminalizes the sharing of nonconsensual naked images of people online, and makes it illegal to create a “deep fake” to mislead people during elections. 

“Not only is it misinformation, but I think it deeply pains and impacts people’s livelihood and their ability to be able to function. Reputations can be ruined overnight,” Hwang said. 

At a news conference last week, Maroney cited a case at a New Jersey high school in which male students ran female students’ yearbook photos through nudification software and shared the results on social media.

“What happens when you no longer need to send that photo? You can just use nudification software to then blackmail someone into what they call sextortion. There were over 3,000 incidents of that across the country last year,” Maroney said.  

Regarding election deep fakes, Maroney mentioned a fake robocall sent to New Hampshire residents discouraging them from voting during the primary in January. The robocall was doctored to sound like it was coming from President Joe Biden. 

“This year, I think over 75 percent of the world’s democracies will have an election, and we’re all concerned about deep fakes for elections,” said Maroney, adding that the provision was based on bills in Minnesota and Michigan. 

Hwang agreed, noting that AI is a human-fed creation and warning that errors could come from it. 

“It ultimately comes down to algorithms that are inputted by human beings. Knowledge that is put into a process that gets repeated over and over and over again. And I put caution to that, because part of that is that human beings are putting in that data,” he said. 


Emilia Otte

Emilia Otte covers health and education for the Connecticut Examiner. In 2022, Otte was awarded “Rookie of the Year” by the New England Newspaper & Press Association.

e.otte@ctexaminer.com