Artificial intelligence tools promise more personalized support for students—but without safeguards, they can also reinforce the very inequities colleges are trying to eliminate. At State University of New York, a new systemwide AI policy aims to expand those tools while setting guardrails around how they shape students’ learning, support services and academic outcomes.
During a Board of Trustees meeting last week, university leaders outlined a framework to scale AI use across the system’s 64 campuses while requiring training in responsible use, embedding AI literacy into the general education curriculum and expanding student access to research and learning opportunities.
Some of those efforts are already underway. A new cohort of AI for the Public Good Fellows, made up of 20 SUNY faculty and staff members across disciplines, will work with colleagues to integrate artificial intelligence into coursework and help students build skills to evaluate and use the technology responsibly. At the same time, systemwide initiatives like the Empire AI consortium and a new independent AI research center at State University of New York at Binghamton aim to connect students to advanced computing resources, research experiences and workforce pathways tied to AI.
The policy also calls for institutions to evaluate AI tools for bias, strengthen data-privacy protections and apply greater oversight to AI systems used in processes affecting students, such as tracking their academic progress or accessing campus resources.
Jesse Sloman, SUNY’s chief information security officer, said the systemwide policy is intended to improve student success by expanding high-impact uses of AI, including advising and early-alert systems that help identify when students may need additional support.
He pointed to opportunities to use AI to supplement academic support services. “One is giving us more bandwidth to provide personalized tutoring to students,” Sloman said. “We’re not seeking to replace faculty, but to augment what they’re able to do and give students more academic assistance tools, and to better understand over time where interventions may be necessary or where a student may be struggling.” He noted that his team contributed to the systemwide policy.
Sloman said the framework also aligns with emerging AI governance efforts from the state and Gov. Kathy Hochul’s office, particularly around data privacy and security.
“One of our major concerns is making sure that SUNY data—including students’ personal information and academic records—is protected,” he said. “We don’t want a SUNY student using a SUNY AI tool and have that data used to train external models outside of narrow, contractually defined terms.”
He added that any model training would be limited to internal SUNY use, not broader commercial improvement of the underlying AI systems.

A roundtable discussion of AI for the Public Good Fellows at a State University of New York campus.
Valerie Caverniss/State University of New York
The systemwide policy: As part of the systemwide policy, all SUNY campuses must adopt or update their own AI guidelines by Dec. 31 of this year, a timeline university leaders say is intended to bring students more consistent access to AI tools across campuses sooner. Institutions can request a one-time extension of up to two months.
Each campus policy must address how the technology is used in student learning, support services and institutional decision-making. At a minimum, campus policies must:
- Clarify the roles and responsibilities of faculty, staff and students in overseeing and using AI systems.
- Add safeguards to procurement processes to protect SUNY data, prevent discriminatory or biased AI use, and preserve human decision-making authority.
- Account for differences across teaching, research and administrative uses of AI, including considerations around academic freedom, shared governance, intellectual property and regulatory compliance.
- Provide training to ensure faculty, staff and students can use AI tools safely, ethically and effectively.
- Evaluate AI tools for bias and implement safeguards to protect student data and institutional systems.
- Apply greater oversight to higher-risk AI systems, particularly those that could influence outcomes affecting students' academic progress, access to resources or overall well-being.
- Regularly review and update policies to keep pace with changes in AI technology, regulation and campus practices.
Sloman said the policy reflects months of coordination and a systemwide comment period, making it important to provide campuses with “comprehensive guidance while also preserving campus autonomy.”
SUNY chancellor John B. King Jr. said in a statement that the framework is designed to expand the use of AI in ways that support students while maintaining oversight. “AI usage is in its infancy across much of higher education and government,” he said, adding that the policy will help campuses scale its use “to benefit students and expand research while ensuring accountability, transparency and appropriate safeguards.”
What this means: Venu Govindaraju, senior vice president for research, innovation and economic development at the University at Buffalo, said it is critical to involve all parts of the university system in conversations about AI—from students and faculty to teaching, research and campus operations.
“AI is touching all aspects of our university, and all of these need guidance,” said Govindaraju. He described the framework as intentionally broad, giving campuses a foundation while encouraging them to develop additional guardrails based on their specific needs.
He also emphasized the importance of understanding how AI systems work in order to address bias. “The responsibility then is to ensure that from the ground up there is explainability and transparency—so that whatever is being designed is understood, and when we transfer that AI tool to someone else, they also know how it is using the data.”
Sloman said the systemwide approach is intended to avoid unnecessary duplication while still allowing campuses to tailor policies to their own risks and needs.
“We don’t want campuses to recreate all of their existing policies in a separate AI document,” Sloman said. “Instead, they should think about how AI fits into their existing policy frameworks and update those where necessary—or develop a stand-alone policy if needed.
“We’re not trying to create an arduous review process for every possible type of AI use case,” he said. “We want to be able to reserve our scrutiny for AI cases where it’s necessary, and then for other cases where the risk is lower, not [create] a barrier to innovation or adoption.”