Members of our Community Editorial Board, a group of community residents who are engaged with and passionate about local issues, respond to the following question: During the recent special session, Colorado legislators failed to agree on an update to the state’s yet-to-be-implemented artificial intelligence law, despite concerns from the tech industry that the current law will make compliance onerous. Your take?

Colorado’s artificial intelligence law, passed in 2024 but not yet in effect, aims to regulate high-risk AI systems by requiring companies to assess risk, disclose how AI is used and avoid discriminatory outcomes. But as its 2026 rollout approaches, tech companies and Governor Polis argue the rules are too vague and costly to implement. Polis has pushed for a delay to preserve Colorado’s competitiveness, and the Trump administration’s AI Action Plan has added pressure by threatening to withhold federal funds from states with “burdensome” AI laws. The failure to update the law reflects a deeper tension: how to regulate fast-moving technology without undercutting economic growth.

Progressive lawmakers want people to have rights to see, correct and challenge the data that AI systems use against them. If an algorithm denies you a job, a loan or health coverage, you should be able to understand why. On paper, this sounds straightforward. In practice, it runs into the way today’s AI systems actually work.

Large language models like ChatGPT illustrate the challenge. They don’t rely on fixed rules that can be traced line by line. Instead, they are trained on massive datasets and learn statistical patterns in language. Input text is broken into words or parts of words (tokens), converted into numbers, and run through enormous matrices containing billions of learned weights. These weights capture how strongly tokens relate to one another and produce a probability for each candidate next word. From that distribution, the model picks an output, sometimes the top choice, sometimes a less likely one. In other words, there are two layers of uncertainty: first in the training data, which bakes human biases into the model, and then in the inference process, which selects from a range of possible outputs. The same input can therefore yield different results, and even when it doesn’t, there is no simple way to point to a specific piece of data that caused the outcome. Transparency is elusive because auditing a model at this scale is less like tracing a flowchart and more like untangling billions of connections.
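
To make those two layers concrete, here is a minimal, purely illustrative sketch in Python (the phrases, probabilities and temperature parameter are invented for this example, not drawn from any real model): a tiny lookup table stands in for billions of learned weights, and sampling from its probabilities stands in for inference.

import random

# Toy stand-in for a trained model: next-token probabilities distilled from
# (possibly biased) training data. Real models encode this in billions of
# learned weights rather than a small lookup table.
next_token_probs = {
    "the loan was": {"approved": 0.55, "denied": 0.40, "delayed": 0.05},
}

def generate(context, temperature=1.0):
    # Inference is sampling, not rule-following: reshape the learned
    # probabilities with a temperature, then draw one token at random.
    probs = next_token_probs[context]
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(list(probs.keys()), weights=weights, k=1)[0]

# The same input can yield different outputs on different runs.
for _ in range(3):
    print(generate("the loan was"))

Run it a few times and the output changes, which is exactly why the same application can be scored differently on different runs and why no single training example can be blamed for the result.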

These layers of uncertainty combine with two broader challenges. Research has not yet shown whether AI systems discriminate more or less than humans making similar decisions. The risks are real, but so is the uncertainty. And without federal rules, states are locked in competition. Companies can relocate to jurisdictions with looser standards. That puts Colorado in a bind: trying to protect consumers without losing its tech edge.

Here’s where I land: Regulating AI is difficult because neither lawmakers nor the engineers who build these systems can fully explain how specific outputs are produced. Still, in sensitive areas like housing, employment, or public benefits, companies should not be allowed to hide behind complexity. Full transparency may be impossible, but clear rules are not. Disclosure of AI use should be mandatory today, and liability should follow: If a system produces discriminatory results, the company should face lawsuits as it would for any other harmful product. It is striking that a technology whose outputs cannot be traced to clear causes is already in widespread use; in most industries, such a product would never be released, but AI has become too central to economic competitiveness to wait for full clarity. And since we lack evidence on whether AI is better or worse than human decision-making, banning it outright is not realistic. These models will remain an active area of research for years, and regulation will have to evolve with them. For now, disclosure should come first. The rest can wait, but delay must not become retreat.

Hernán Villanueva, chvillanuevap@gmail.com

Years ago, during a Senate hearing on Facebook, senators were grilling Mark Zuckerberg, and it was clear they had no idea how the internet works. One senator didn’t understand why Facebook had to run ads. It took Zuckerberg a minute to grasp the question, because he couldn’t imagine anyone being that uninformed about the very subject of the hearing. Yet these same senators write and enact the laws that govern Facebook.

Society does a lot of that. Boulder does it with homelessness and climate change: it understands neither, yet writes and passes laws that, predictably, do nothing or sometimes make the problem worse. Colorado has done it before as well, when it enacted a renewable-energy law that listed hydrogen as an energy source. Hydrogen is an energy source only when it exists in free form, as it does in the sun. On Earth, hydrogen is almost always bound to other elements, so it is not an energy source; it is an energy carrier. Colorado has continued regulating things it doesn’t understand with the Colorado AI Act (CAIA), which shows a fundamental misunderstanding of how deep learning and large language models, the central AI technologies of today, work.

The incentive to control malicious AI behavior is understandable. If AI companies were building bias into their systems on purpose, we should go after them. But they aren’t. Bias does exist in AI programs, and it comes from the data used to train the model. Biased in what way, though? Critics contend that loan screening is biased against people of color, even when an applicant’s race is not represented in the data. The bias isn’t based on race. It is more likely based on the applicant’s address, education or credit score. Banks want to weigh applicants on those factors. Why? Because they correlate with the applicant’s ability to pay back the loan.
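
For readers who want to see that proxy effect in miniature, here is a toy sketch with entirely made-up numbers (the groups, zip codes, score gap and approval threshold are hypothetical, not real lending data): race never appears in the inputs, yet approval rates still split along group lines because zip code and credit score carry the correlation.

import random

random.seed(0)

# Hypothetical applicants: group membership is never shown to the screening
# rule, but in this made-up data it correlates with zip code, and credit
# scores differ by zip.
def make_applicant():
    group = random.choice(["A", "B"])
    zip_code = "80301" if (group == "A" and random.random() < 0.8) else "80205"
    credit = random.gauss(700 if zip_code == "80301" else 640, 30)
    return group, zip_code, credit

def approve(zip_code, credit):
    # The rule sees only zip code and credit score -- no race field.
    return credit > 660 or zip_code == "80301"

applicants = [make_applicant() for _ in range(10_000)]
for g in ["A", "B"]:
    decisions = [approve(z, c) for grp, z, c in applicants if grp == g]
    print(g, "approval rate:", round(sum(decisions) / len(decisions), 2))

The disparity appears even though race was never an input, which is the pattern both the critics and the banks are describing.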

If the CAIA makes it impossible for banks to legally use AI to screen loan applicants, are we better off? Have we eliminated bias? Absolutely not. If a human is involved, we have bias. In fact, our only hope to eliminate bias is with AI, though we aren’t there yet because of the aforementioned data issue. So we’d still have bias, but now loans would take longer to process.

Today, there is little demand for ditch diggers. We have backhoes and bulldozers that handle most of that work. Those machines put a lot of ditch diggers out of work. Are we, as a society, better off for these inventions? I think so. AI might be fundamentally different from heavy equipment, but it might not be. AI is a tool that can help eliminate drudgery. It can speed up the reading of X-rays and CT scans, giving us better medical care. AI won’t be perfect. Nothing created by humans can be. But we should be cautious about slowing the development of these life-transforming tools.

Bill Wright, bill@wwwright.com