For the second year in a row, Connecticut legislators decided not to regulate AI for businesses, but they did pass a law that criminalizes deepfake revenge porn and provides funding for artificial intelligence education.

A bill that would have required companies to publicly disclose their AI use was passed by the Senate but was not taken up by the House after Gov. Ned Lamont threatened to veto the measure. He expressed concern that it would damage Connecticut’s technology sector.

Here’s what to know about this session’s AI bills, their status and how they compare to other states’ laws.

What AI bills became law this year?

Most of the AI legislation signed into law this session came in the budget bill, which authorized $500,000 for the Connecticut Online AI Academy, $25,000 for AI training at the Boys and Girls Club of Milford and $75,000 for three Boys and Girls Clubs’ AI training pilots in the state.

It also made disseminating “synthetically created” revenge porn without the depicted person’s consent a crime as of Oct. 1, 2025. The law does not directly mention AI-created images, but lawmakers are aiming to address generative-AI revenge porn deepfakes, and it could also apply to images made with other techniques.

And new privacy legislation in Senate Bill 1295 requires collectors of sensitive data to notify consumers if their personal data is being used to train large language models.

It also gives consumers the right to opt out of automated systems, which could include AI, using their personal data to make significant decisions about housing, insurance, health care, education, criminal justice and employment. Consumers also gain the right to question decisions made by automated decision systems and, when a decision relates to housing, to correct any inaccurate information before the decision is reevaluated.

What didn’t pass? 

Senate Bill 2, a measure that would have required companies to publicly disclose AI use to consumers, passed the Senate but was not brought up for a vote in the House by the end of the session. Last-minute amendments watered down initial requirements to produce annual impact assessments and mitigate algorithmic discrimination in an attempt to make the bill more palatable to the governor.

This is the second year such AI business regulation has made it through the Senate, and just like last year, House Speaker Matt Ritter, D-Hartford, declined to call it to the House floor after Lamont threatened to veto it.

Another piece of legislation, Senate Bill 1484, which sought to prevent algorithmic discrimination against employees and require disclosure of AI’s role in employee assessment, was passed by the Judiciary Committee but went no further in the legislative process.

Why did Lamont threaten to veto S.B. 2?

This is the third year Sen. James Maroney, D-Milford, has authored a bill on AI. The first one, which required the disclosure of AI use by the Connecticut government, sailed through the Senate and House before being signed into law in 2023 by Lamont.

Maroney got key Senate leaders, Senate President Pro Tem Martin M. Looney, D-New Haven, and Senate Majority Leader Bob Duff, D-Norwalk, on board for S.B. 2, but he failed to convince Lamont.

The governor has dissuaded legislators from passing AI regulation that he thinks could risk scaring business away from Connecticut.

“The governor remains concerned that this is a fast-moving space and that we need to make sure we do this right and don’t stymie innovation,” Lamont’s office said in 2024 about a similar bill.

This year, Lamont’s chief innovation officer Dan O’Keefe said S.B. 2 would send “a message that says, ‘Because we don’t understand this yet, you can’t innovate here. You can’t take risks here.’”

O’Keefe also expressed concern that passing such legislation in Connecticut was “too early” and that a state with 1% of the U.S. population should not be among the first few in the country to pass AI regulation.

Sens. John Kissel, R-Enfield, Paul Cicarella, R-North Haven, and Melissa Osborne, D-Simsbury, who spoke in opposition to the bill when it came up for its vote in the Judiciary Committee, echoed O’Keefe’s concerns. Kissel also brought up the Trump administration’s support for AI technology as a reason to not pass a regulatory bill now.

What are AI laws in other states? 

Several states have passed AI legislation in the last couple of years. In 2024, Colorado became the first state to require companies to disclose their use of artificial intelligence systems and to make AI discrimination against protected groups illegal, goals similar to those of Connecticut’s S.B. 2.

Utah passed legislation in 2024 that requires proactive disclosure of AI use in regulated occupations. California and Texas have also passed private-sector regulatory laws.

Connecticut joins New Jersey, New Hampshire, Massachusetts and others in criminalizing deepfake revenge porn. New Hampshire’s law goes a step further, prohibiting deepfakes of any kind that cause reputational harm, including deepfakes of political candidates.

Other states have introduced AI legislation that failed at various points in the process. A law with goals similar to Connecticut’s S.B. 2 passed in Virginia but was vetoed by Gov. Glenn Youngkin.

What about federal AI law? 

In May, Trump signed the Take It Down Act, which criminalizes deepfake porn at the federal level.

The federal budget reconciliation bill, also known as the ‘big beautiful bill,’ which passed earlier this month, initially contained a Trump-backed ban on states enacting and enforcing any AI laws for 10 years. Hours before the bill passed, senators voted 99-1 to remove the ban language.

It remains up to the states to chart their own course.

CT Mirror reporter Keith M. Phaneuf contributed to this story.

Angela Eichhorst is a reporter for the Connecticut Mirror. Copyright 2025 © CT Mirror (ctmirror.org).