No Jitter Brief:
Glean introduced its enterprise Agent Development Lifecycle (ADLC) on Tuesday. The ADLC is a “framework for how enterprises can think about building, launching, governing and measuring agents as they move from experiments into production,” Selene Kim, Product Manager (Agents) at Glean, told No Jitter in an email. “It is an operating model for taking agents from idea to production.”
Glean’s ADLC introduces various capabilities. With Auto Mode Agent Builder, users can describe what they want an agent to do in natural language; the agent can then plan, reason and execute across the enterprise graph without predefined workflows or manual configuration. Debug & Trace Views provides full observability into every agent run, including inputs, tool calls, LLM decisions and outputs. This allows users to diagnose failures precisely rather than inferring from final output, per the release.
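The kind of run-level record that this sort of observability implies can be sketched as a simple data structure. This is a hypothetical illustration, not Glean’s actual API; all names here are invented:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class ToolCall:
    """One tool invocation made during an agent run."""
    tool: str
    arguments: dict[str, Any]
    result: Any = None


@dataclass
class AgentTrace:
    """Full record of a single agent run: inputs, tool calls,
    intermediate LLM decisions, and the final output."""
    run_id: str
    inputs: dict[str, Any]
    tool_calls: list[ToolCall] = field(default_factory=list)
    decisions: list[str] = field(default_factory=list)  # model reasoning steps
    output: Any = None

    def record_tool_call(self, tool: str, arguments: dict, result: Any) -> None:
        self.tool_calls.append(ToolCall(tool, arguments, result))


# With a trace like this, a failed run can be diagnosed step by step
# rather than inferred from the final output alone.
trace = AgentTrace(run_id="run-001", inputs={"question": "What is the PTO policy?"})
trace.record_tool_call("search", {"query": "PTO policy"}, result=["handbook.pdf"])
trace.decisions.append("Handbook found; answering from section 4.2")
trace.output = "Employees accrue 15 days of PTO per year."
```

The point of the structure is that every intermediate step is captured, so a debugging view can show exactly where a run went wrong.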
Other features include Agent Library controls, which allow the ADLC to function as a “governed front door for agent distribution.” The ADLC can also block or flag sensitive content before an agent can process it, or restrict certain user groups from using agents to write to systems of record. “Our key learning, which drove the creation of the ADLC, is that agents are just another type of software. In much the same way that traditional [software development life cycle] practices apply across different programming languages, the ADLC is applicable across different models from different providers,” Kim said.
No Jitter Insight:
Glean began in enterprise search, connecting “all of a company’s knowledge and data and information that’s in many different enterprise systems,” Glean CEO Arvind Jain told No Jitter back in February 2024. “The ADLC is the next step: helping companies not just build agents, but operationalize them in a repeatable, governed way,” Kim said.
Just as AI models rely on data, so too do AI agents need context around that data to function effectively. The enterprise graph referenced above refers both to Glean’s own Enterprise Graph and to the broader concept of organizing “siloed information into organizational knowledge,” per Google Cloud, and of unifying “datasets from various sources, which frequently differ in structure,” per IBM.
Kim noted that agents with too little context may succeed as a proof of concept and then fail in production, while those with too much context may fail because the underlying LLM cannot separate signal from noise.
“An agent has enough context when it is able to reliably complete its goal using the information at its disposal. Sometimes, this can be as little as one document, like an HR team building an agent that can only reference a company’s employee handbook,” Kim said. “In practice, we recommend starting with more relevant context, getting to a high-quality result, and then shaving off what the agent can access to optimize performance.”
Kim offered several best practices for grounding AI agents in enterprise context:
Start with the minimum set of data sources, tools, examples and feedback signals the agent actually needs to do its job well.
Make context selection explicit: Don’t just point the model at a vague pile of data and hope for the best.
Use permission-aware context so the agent respects the same access controls your employees already do.
Ground agents in the real systems where work happens — docs, tickets, CRM, chat, code, and workflows — instead of isolated test data.
Invest in observability: You want to be able to see what the agent searched, what tools it called, and how it arrived at an answer or action.
“In short: strong context is not ‘more data,’ but the right governed data, tools and signals for the job,” Kim said.
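The permission-aware point in the list above amounts to filtering what an agent can see before it reasons. A minimal sketch of the idea, with entirely hypothetical document and group names (this is an illustration of the principle, not any vendor’s implementation):

```python
# Hypothetical permission-aware context selection: the agent may only
# retrieve documents the requesting user could already access, so it
# inherits the organization's existing access controls.

DOCUMENTS = [
    {"id": "handbook", "groups": {"all-employees"}, "text": "PTO policy details ..."},
    {"id": "salaries", "groups": {"hr-only"}, "text": "Compensation bands ..."},
]


def permitted_context(user_groups: set[str], query: str) -> list[dict]:
    """Return only the documents that both match the query and are
    readable by at least one of the user's groups."""
    return [
        doc for doc in DOCUMENTS
        if doc["groups"] & user_groups            # permission check first
        and query.lower() in doc["text"].lower()  # then naive relevance match
    ]


# An employee outside HR never sees the compensation data,
# even if it would otherwise match the query.
print([d["id"] for d in permitted_context({"all-employees"}, "policy")])
```

Real systems would use a proper retriever and an access-control service rather than a keyword match, but the ordering is the point: permissions are enforced before context ever reaches the model.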