FILE – The OpenAI logo is seen on a mobile phone in front of a computer screen displaying the ChatGPT home screen, March 17, 2023, in Boston. (AP Photo/Michael Dwyer, File)

People are telling one New Jersey lawmaker that they are concerned about what AI is doing and worried about what comes next.

“I think we need to get ahead of it,” Assemblyman Andrew Macurdy (D-Union) told NJ Spotlight News. “If we come up with a sensible regulatory structure here in New Jersey, it very well could take hold in other states.”

Macurdy now plans to introduce three bills, a legislative package he calls “common sense” and necessary as the amount of AI-generated content available via social media continues to grow.

“There is just a real concern, and I think it will only grow as a concern when people are looking at content online about whether it is real or whether it is generated by artificial intelligence,” Macurdy said. “I think that the ability to tell reality versus what is fake is really important and I think we need some guardrails around that, because it’s only going to proliferate.”

New Jersey would become one of the first states to require photos, videos and audio generated by artificial intelligence to carry an explicit label, under one of the bills Macurdy plans to introduce Thursday. The other bills would prevent the unauthorized use of a person’s image in generative AI and hold developers liable for AI used to commit certain crimes.

In a Stockton Poll released earlier this week, New Jersey registered voters said they are leery about artificial intelligence in general and about the growth of the data centers needed to supply AI's computing power. In the poll, 41% said the increased use of AI will make their lives worse, up from 36% who thought that two years ago. A little more than a quarter said AI will make their lives better. Some 56% of registered voters surveyed said they would support a ban on data centers in their towns, and nearly half said the centers do more harm than good.

Macurdy is a member of the Assembly Science, Innovation and Technology Committee, which recently advanced a seven-bill package seeking to regulate AI in several areas, including political advertisements, customer service and chatbots. Those bills are still awaiting action by the full Assembly.

New Jersey lags behind a number of states in enacting AI regulations. The U.S. AI Law Tracker from the Orrick law firm shows states have put 224 laws in place. California leads with 29, most notable among them two that will make it the first state to require all AI-generated audio and visual content to carry an embedded disclosure of its origin and a timestamp, and to require content providers to offer a free, publicly accessible tool that shows people whether content was generated by AI.

New Jersey so far has passed three laws, the most recent of which criminalized what are known as deepfakes – realistic AI-generated images and videos of people — if they are used for harassment, extortion or some other unlawful purpose.

One of Macurdy’s proposed bills would move beyond the California law that takes effect in August and require, in addition to embedded information about AI generation, a disclosure that is “clear, conspicuous, appropriate for the medium of the content and understandable to a reasonable person” that the content was created using AI. The “AI Image Disclosure Act” also would require social media companies to generate such explicit disclaimers based on the embedded information in content on their platforms.

A review of state AI transparency laws by NJ Spotlight News found only one that requires clear disclosure that AI was used. That New York law applies to commercial advertisements. Like California, Utah enacted a law requiring that AI-created content include a disclosure embedded in the content, as well as a tool allowing the public to see that information.

Another bill proposed by Macurdy, the “AI Likeness Protection Act,” would prohibit the distribution of a realistic representation of a person using text, images, video or audio unless the person approved of the AI-generated content. A person could sue anyone who used AI to create such content without consent.

“I think you already see it, and it’s just going to happen with increasing frequency and increasing accuracy, accuracy of what people look like,” he said. “There’s just going to be content out there of you, whether you’re a public figure or not, doing things that you didn’t do. And I think there’s something deeply disturbing about that and I think it can lead to all sorts of privacy violations as well as distortions of reality.”

Several other states have codified that an individual’s right to the use of their voice and likeness extends to AI and have provided an explicit right to bring a civil suit when that right is violated.

Known as the “AI Accountability Act,” Macurdy’s third proposed bill would establish civil penalties for the developers of AI platforms that are used to commit certain crimes, including extortion, theft by deception or the creation of child sex abuse images. Each violation would carry a penalty of $20,000 and the attorney general’s office would be charged with enforcement. Making sure that technology is not involved in criminal actions is a “fair burden” to put on developers, he said.

This story is made possible in part by the Corporation for Public Broadcasting, a private corporation funded by the American people.