UCLA campus, Los Angeles, California (Getty Images)

For everyone in higher ed who has spent the past two years asking how to operationalize AI governance, this spring produced an unusually clear answer: build it at procurement. On April 6, the National Student Legal Defense Network released the Student AI Bill of Rights through its SHAPE AI Initiative, the most concrete student-facing framework yet for what transparency, oversight, data sovereignty, safe use, and AI literacy should look like on campus. Florida’s Senate passed its own AI Bill of Rights, SB 482, by a 35-2 vote in March. And on March 30, California Governor Gavin Newsom signed Executive Order N-5-26, requiring state agencies to certify AI vendors before contracting. Different documents, same direction: institutions now have a public, citable inventory of what good AI governance is supposed to do. The work ahead is wiring it into how decisions actually get made.

That work is timely, because adoption has run ahead of policy on most campuses, and that is the productive starting point for 2026. EDUCAUSE’s January 2026 study found that 94% of higher ed staff had used AI tools at work in the past six months, while 46% could not point to a governing institutional policy and 56% had used tools their institution did not provide. Those numbers describe faculty and staff who are enthusiastic, capable, and ready, working alongside institutional infrastructure that is still catching up. The opportunity now is to close that gap not by adding another layer of policy on top of existing use, but by giving the people already adopting AI a clear, fast, defensible path through governance.

Procurement is the lever that closes that gap. It is where federal guidance, state legislation, and accreditor expectations are converging, and it is the one layer that operates before a tool reaches a desk. Newsom’s order requires vendor attestation on bias, civil rights, and content safeguards before contracts close. The Middle States Commission on Higher Education, in its July 2025 guidance, signaled that AI procurement and use will be reviewed alongside existing accreditation standards. Across 31 states, 134 AI-in-education bills are working through this session, most of them turning on the same question: what review happened before the tool was bought?

The good news is that most institutions already have the components for that review: a policy committee, an academic integrity working group, an IT security process, and a procurement office. What is new, and what the Student AI Bill of Rights now makes designable, is the chance to connect those components into one named step that fires before any AI tool that would touch a person at the institution goes live. Call it the pre-production gate. It is the single, identified review where a vendor proposal or an internal AI deployment demonstrates what it does, who it affects, and how concerns will be surfaced, with someone empowered to say yes, no, or “come back with more.”

Purdue University is a model worth studying. Its Data Ethics Committee, formalized in 2025 and co-chaired by Ian Pytlarz, Principal Data Scientist in Purdue’s Institutional Data Analytics + Assessment office, reviews every proposed use of generative AI that would touch a student, faculty member, or staff member at Purdue before it moves into production. “Before something goes into production, we want to know about it,” Pytlarz told me. “It could be a student, it could be staff, it could be affecting a system that is used by students and staff.” The mandate is intentionally broad and the process is intentionally fast. Most cases clear asynchronously, with full-group review reserved for higher-stakes uses involving student data, mental health, or other personally identifiable information.

What makes the gate useful is that it is built around evidence. “It is shocking the number of companies that are vendors that come to us and want to do something and they say, ‘We just built an AI and it does this,’” Pytlarz said. “Can you prove that it does what you say it does?” That single question is the heart of a workable gate. Pytlarz noted that a vendor pitching mental-health AI to Purdue students “got rejected pretty roundly” for not being able to answer it. The structure is lightweight enough that Pytlarz and his co-chair run it on top of their primary jobs, refining the process as new use cases come in. This is what Pytlarz calls “us sitting in a room and trying to hash it out … building the plane in the air.” That candor is part of why the Purdue model travels. Most institutions will build their gate the same way Purdue did: by starting with the people, the questions, and the authority already in the building.

For provosts and CIOs, this is a moment when the path forward is unusually well-marked. Read against any campus’s existing workflow, the Student AI Bill of Rights becomes a usable inventory of the questions a pre-production gate is designed to answer. Is the use disclosed to the people affected? Is there meaningful human review on consequential decisions? Is student data, including original coursework, protected from external models? Are detection or surveillance tools being introduced thoughtfully? Are the tools required for a course free of charge to students? Naming where in the workflow those questions live, who has the authority to require evidence, and who can say no produces a structure that accreditors can review and students can trust.

The institutions that build this in 2026 will be the ones that show up to the next accreditation cycle, the next state law, and the next class of students with a working answer. They will not have to assert that their AI principles are real. The gate will demonstrate it. That is a stronger place to lead from than any framework on its own, and it is well within reach for any institution willing to wire the components it already has into a single, accountable step.