At my business, where we rely extensively on AI to help modernize and document legacy application code, we've settled on an approach that focuses on managing which AI solutions our users adopt and how they use them. Instead of letting employees use whichever AI model they choose without guardrails, we route every call to an AI model on our network through a management platform we've built. The platform mediates the calls and responses, flagging any that may present security or compliance risks.
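To make the pattern concrete, here is a minimal sketch of that mediation flow in Python. It is an illustration of the general idea, not our actual implementation: the names (`screen`, `mediate_call`) and the two regex rules are hypothetical stand-ins for a real policy engine.

```python
import re
from dataclasses import dataclass, field

# Hypothetical policy rules: patterns whose presence in a prompt or
# response should be flagged before the call is allowed through.
BLOCKED_PATTERNS = {
    "credential": re.compile(r"(?i)\b(api[_-]?key|password|secret)\b"),
    "pii_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

@dataclass
class Verdict:
    allowed: bool
    flags: list[str] = field(default_factory=list)

def screen(text: str) -> Verdict:
    """Check one prompt or response against the policy rules."""
    flags = [name for name, pattern in BLOCKED_PATTERNS.items()
             if pattern.search(text)]
    return Verdict(allowed=not flags, flags=flags)

def mediate_call(prompt: str, call_model) -> str:
    """Screen the prompt, forward it to the model, then screen the response."""
    inbound = screen(prompt)
    if not inbound.allowed:
        raise PermissionError(f"prompt flagged: {inbound.flags}")
    response = call_model(prompt)
    outbound = screen(response)
    if not outbound.allowed:
        raise PermissionError(f"response flagged: {outbound.flags}")
    return response

# Stand-in for a real vendor call; in production this forwards to a model API.
print(mediate_call("Summarize this legacy COBOL module.",
                   lambda p: "A plain-language summary."))
```

The key design point is that the check runs on both legs of the trip: a prompt that would leak credentials is stopped before it leaves the network, and a response that trips a rule is stopped before it reaches the user.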

With this approach, we can offer users the flexibility to work with a variety of AI models, since our platform supports offerings from all of the major vendors (OpenAI, Anthropic, Google and so on). We also allow an open-ended approach to how employees use AI: they're not restricted to a predefined set of tasks or solutions, and they can issue whichever prompts they want on a completely self-service basis. At the same time, routing calls through our platform lets us enforce enterprise-level controls over what employees do with AI tools.
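In practice, that flexibility comes down to routing: the gateway maps each requested model to the appropriate vendor and applies the same controls regardless of destination. Here is a simplified sketch of that idea; the adapters are stubs rather than real API clients, and the prefixes and names are illustrative.

```python
from typing import Callable

# Hypothetical vendor adapters, each wrapping one provider's API behind a
# common (model, prompt) -> text signature. Stubs stand in for real calls.
def call_openai(model: str, prompt: str) -> str:
    return f"[openai:{model}] response"

def call_anthropic(model: str, prompt: str) -> str:
    return f"[anthropic:{model}] response"

def call_google(model: str, prompt: str) -> str:
    return f"[google:{model}] response"

# Route each requested model to its vendor adapter by name prefix.
ROUTES: dict[str, Callable[[str, str], str]] = {
    "gpt-": call_openai,
    "claude-": call_anthropic,
    "gemini-": call_google,
}

def route(model: str, prompt: str) -> str:
    for prefix, adapter in ROUTES.items():
        if model.startswith(prefix):
            # The same inbound/outbound screening from the earlier sketch
            # would wrap this forwarded call.
            return adapter(model, prompt)
    raise ValueError(f"no approved route for model {model!r}")

print(route("claude-sonnet", "Document this stored procedure."))
```

Because every adapter sits behind the same gateway, supporting a new vendor is a routing-table entry rather than a new governance problem.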

This is what I expect the future of AI-powered self-service to look like for IT departments. Rather than simply placing third-party AI tools in the hands of users and leaving them to their own devices, businesses will need to implement hubs that govern the way employees interact with AI. They'll want to keep AI use cases flexible, while still retaining the ability to mitigate security, compliance and performance risks.