AI is changing how engineering teams work faster than most organizations can adapt. Coding assistants are now part of the daily workflow, agents are starting to own tasks end-to-end, and the way we deliver software is being redefined in real time.
With that shift, engineering leaders are facing a new set of questions. Are these tools actually improving outcomes? Where are they falling short? Which teams are seeing value, and which aren’t? And how do you report the ROI of an investment that touches every developer, every day, in ways that are hard to see?
To help leaders answer these questions, DX recently released four major new features and products that help engineering organizations navigate the shift to AI-native software delivery. They include an AI chat interface for interacting with your data, visibility into how much code is generated by AI, proactive alerts based on SLAs, and a new way to measure agent effectiveness and identify where to invest to close gaps.
Here’s an overview of each release.
AI Code Insights: Deeper insight into AI-generated code
As AI coding tool adoption matures, the key question for engineering leaders is whether AI is being used effectively and whether the investment is justified.
Leaders face critical questions that are difficult to answer with existing tooling. How are developers actually using these tools, and what are they spending on tokens? Are teams providing AI with sufficient context and clarity to generate useful output? Are codebases structured in ways that AI can effectively navigate?
AI Code Insights gives organizations real-time visibility into how AI tools are being used and where they’re falling short. It comprises three main reports:
AI-generated code attribution. See exactly which commits and PRs contain AI-generated code. The AI Code Overview report lets you explore critical SDLC metrics bucketed by the percentage of AI-generated code in each PR, so you can start to understand how AI-generated code moves through your development process differently than human-written code.
Agent Experience insights. Surface insights directly from AI agents about where they’re hitting bottlenecks during a session: missing context, ambiguous instructions, structural barriers in the codebase. This is a fundamentally new signal: rather than inferring effectiveness from output, you’re hearing from the agent itself. The Agent Experience report provides an overall agent effectiveness score across your organization, filterable by team, with a detailed per-session view. (More on this below.)
AI Dollar Impact. Tie it all together with the AI Dollar Impact report, which translates your AI tool investment into an estimated net dollar figure. Rather than debating whether AI is “worth it” in the abstract, leaders can see a concrete financial picture of what their investment is producing, making board-level reporting and budget conversations significantly easier.
AI Code Insights is powered by a lightweight, self-hosted CLI daemon that runs on developer machines across macOS, Windows, and Linux. It supports all major AI coding tools.
Agent Experience: A new way to measure the effectiveness of agents
Over the past decade, the industry has learned that developer effectiveness is not primarily about the individual. It’s about the environment: short feedback loops, clear context, well‑structured systems, and minimal friction. Organizations that invest in these conditions consistently outperform those that don’t. Agents are no different.
An agent working against a well‑documented, modular codebase with clear instructions will dramatically outperform the same agent working against a monolith with sparse documentation and ambiguous prompts. The difference is less about the model and more about the conditions it operates in.
DX was founded on a simple principle: the best way to measure developer effectiveness is to go directly to developers. We’ve applied that same idea to agents. Agent Experience captures what agents encounter in their work so organizations can systematically improve the conditions that determine whether agents succeed or fail.
Unlike human developers, who require surveys and sampling to surface friction, agents already have full visibility into their own context. They know when documentation is missing, when instructions are ambiguous, and when a codebase is difficult to navigate.
Agent Experience captures this signal across three components, assessed per session:
Requirements. Was the session’s goal clearly defined, and did the opening prompt give the agent enough context (files, patterns, conventions, examples) to start working toward the goal?
Steering. As the session progressed, did the developer provide helpful feedback and supply the additional context needed to keep moving toward the goal?
Scope. Was the session appropriately scoped, or was it too broad, too fragmented, or shifting?
Every session produces an overall score along with three component scores, each accompanied by the agent’s own commentary.
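To make the shape of this data concrete, here is a purely illustrative sketch of what a per-session record along these lines could look like. The field names, types, and score scale are assumptions for illustration only, not DX’s actual Agent Experience schema or API.

```typescript
// Hypothetical illustration only: names and score ranges are assumptions,
// not DX's actual Agent Experience data model.
interface ComponentScore {
  score: number;   // assumed numeric score for this component
  comment: string; // the agent's own commentary on what it encountered
}

interface AgentSessionExperience {
  sessionId: string;
  team: string;          // used for filtering the org-wide view by team
  overallScore: number;  // assumed overall effectiveness score for the session
  components: {
    requirements: ComponentScore; // clarity and context of the opening prompt
    steering: ComponentScore;     // quality of mid-session feedback and added context
    scope: ComponentScore;        // whether the session was appropriately scoped
  };
}
```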
This data is actionable at every level. Platform teams use it to identify where to invest in AI readiness: improving documentation, adding context files, and restructuring code. Engineering leaders use it as a lens on organizational readiness for agentic delivery. Individual developers use it to improve how they prompt, structure repos, and guide agents session by session.

Pulse: Signals delivered directly to you
Frontline engineering managers have access to more data than ever. Between dashboards, reports, and review cycles, the signal is there, but finding it takes time most managers don’t have.
Pulse is a proactive alerting layer built specifically for this problem. Every week, it sends a single, consolidated alert directly to Slack or Teams, rolling up the metrics and signals that matter most for your team.
Your weekly Pulse alert shows what shifted, what needs attention, and what’s trending across your team’s data. Whether you lead one team or a vertical across many teams, Pulse intelligently surfaces the areas you need to drill into, without you having to look for them.
When a Pulse alert surfaces something worth investigating, you can go deeper immediately through DX AI, asking follow-up questions in context to understand the underlying drivers and build an action plan.

Engineering managers can activate Pulse through the DX AI sidebar or by being invited by an organization admin. Once enabled, weekly alerts begin immediately. Managers can customize their alert content, cadence, and preferences to focus on the signals that matter most.
DX AI: An interface to explore your DX data
Data is only as valuable as your ability to understand and act on it. Dashboards provide a useful high-level view, but digging deeper into specific trends or answering spontaneous questions typically requires context switching and manual analysis.
DX AI is a conversational interface with direct access to your organization’s DX data, both system-reported and self-reported. It’s available through a new sidebar across the entire DX platform. Whether you’re investigating a metric, building a business case for an investment, or analyzing your latest snapshot results, you can start a conversation and get immediate, context-aware answers.
There are two ways to start. You can open the sidebar and ask anything directly. Or you can start from a Pulse alert: when Pulse surfaces something worth investigating, such as a spike in cycle time, a drop in throughput, or a shift in developer sentiment, you can open it and begin asking follow-up questions immediately.
What you can do with DX AI:
Investigate anomalies. Ask about the underlying drivers behind a dip or spike in your metrics.
Compare teams. Contrast data points across different engineering groups.
Analyze qualitative feedback. Summarize what developers are actually saying in survey comments without reading through hundreds of responses manually.
Visualize data. When a question is best answered with a chart, the AI generates one directly in the conversation. Every visualization can be downloaded as CSV or copied to a Data Studio dashboard.

Getting started
The organizations that gain the most from AI will be the ones that understand what’s working and systematically improve the conditions in which AI tools and agents operate.
That requires proactive signals, conversational investigation, real-time visibility into the effectiveness of AI tools, and a fundamentally new feedback channel from agents themselves.
Learn more about these releases by requesting a demo at getdx.com.
Explore more at Team ’26
These new updates were unveiled at our annual user conference, Team ’26, alongside several other announcements. To dig deeper, explore our live-streamed and on-demand sessions on your own schedule.