Federal agencies have largely moved past the question of whether they should adopt artificial intelligence. The Trump administration’s AI Action Plan has now made that decision for them. The harder question that most agencies are now quietly wrestling with is how.
Successful implementation starts with the right framing. Agencies starting their journey with “What’s our AI strategy?” are likely to fail. Rather, leaders should ask themselves, “How can AI enable our strategy?” That said, knowing you need strategic clarity and building the organizational machinery to act on it are two fundamentally different challenges; the gap between insight and execution is where most AI initiatives struggle to deliver meaningful impact.
The answer is not a new technology platform or tool, because technology is only as impactful as the policies and people behind it. Nor does it lie in reorganizing IT departments; while moving people around can help in certain scenarios, it is the mindset and adaptability of the department that is key. The solution lies in a structural model that Dr. John Kotter first introduced in his book “Accelerate”: the Dual Operating System (DOS).
Why traditional hierarchies can’t handle AI transformation alone
Federal agencies are hierarchical by design. Hierarchy produces the reliability, accountability and consistency that government operations demand. But that same structure, optimized for steady performance, is fundamentally ill-equipped to drive the kind of rapid, iterative, cross-functional change that meaningful AI adoption requires.
When agencies attempt to layer AI transformation onto their existing hierarchy, whether through new working groups, task forces or mandated adoption timelines, they typically produce one of two outcomes. Either the initiative gets absorbed into the bureaucratic machinery and slows to a crawl, or it creates so much disruption to daily operations that resistance builds and momentum collapses.
Agencies should not have to choose between operational stability and transformational agility – they should run both simultaneously, which is precisely what the DOS enables.
The Dual Operating System: A blueprint for AI at scale
The DOS is not a structural reorganization, but rather a new operating model that unleashes the potential within agencies. It pairs the existing traditional management-driven hierarchy, which continues to handle day-to-day mission delivery, with a parallel network structure specifically designed for strategic change. The hierarchy ensures reliable performance and establishes the conditions for transformation. The network simultaneously supports and accelerates the transformation.
In the context of federal AI adoption, this means agencies do not need to halt operations, restructure entire departments or stand up a new dedicated team to experiment with AI. Instead, they should build a network of people from across the organization who are empowered to identify, test and scale AI applications in direct service of the agency’s strategic mission. The hierarchy remains to provide resources and direction, while the network provides the speed and innovation needed to implement this new technology.
Federal government and military organizations have already applied this model to drive measurable performance transformation. AFWERX stands as a prime example of the DOS in action, designed specifically to inject “start-up speed” into the traditional high-compliance hierarchy of the U.S. Air Force. The established military hierarchy focuses on the “efficiency engine” – maintaining readiness, safety and large-scale sustainment – while AFWERX operates as the “agility engine.” This network-based structure allows Airmen, private-sector innovators and academic researchers to bypass traditional procurement pathways. By leveraging its network of subject matter experts and flexible contracting, AFWERX acts as a strategic bridge, compressing the time-to-award cycle for small businesses from an average of 228 days to 125 days. This significantly accelerates the deployment of mission-critical technology, proving that the network does not need to replace the hierarchy for an organization to evolve at the speed of relevance.
The architecture works, but only when its core components are activated deliberately.
Activating a ‘volunteer army’: The human engine of AI transformation
Roughly 10% of an organization’s workforce, when properly engaged, is sufficient to drive enterprise-wide change. They do not need to be reassigned or given new job titles – they simply need a compelling opportunity and true permission to act.
Mandated participation produces compliance and oftentimes resistance, but voluntary participation produces ownership. When federal employees choose to join AI experimentation efforts because they see a genuine connection between those efforts and the mission they care about, they bring creativity, institutional knowledge and peer influence that no top-down directive can replicate. Crucially, these practitioner-led solutions are more likely to address core needs since those closest to the work are the ones identifying where the greatest friction exists, making their innovations immediately effective.
Practically, this means creating structured opportunities for employees at all levels to experiment with AI tools in defined sandbox environments, share what they learn with colleagues and contribute to identifying high-value use cases. These contributions must be recognized and celebrated, especially when experiments fail. Agencies that reward only successful outcomes risk jeopardizing the desire for innovation and experimentation as volunteers retreat to the safety of the status quo.
Moving from pilots to scale: Measuring transformation, not activity
One of the most persistent failures in federal AI adoption is the pilot that never scales. Agencies often launch isolated experiments, declare early success, and then watch the initiative stall when it encounters the friction of enterprise-wide adoption. Procurement constraints, workforce resistance, legacy system incompatibilities, or simply a lack of organizational will to push through the adoption process can stop pilot progress in its tracks.
The DOS addresses this directly. Because the network operates in parallel with the hierarchy, rather than waiting for hierarchical communications to drive adoption and understanding, it can move pilots toward scale with a speed that traditional change management processes cannot match. However, speed must also come with measurement of the right things.
There’s temptation to report on activities like the number of AI tools deployed, percentage of workforce trained or use cases documented. These metrics are easy to generate and present to oversight bodies as leading indicators, but they largely lack meaning when seeking true signs of transformation. The agencies that will realize lasting value from AI are those measuring progress against strategic outcomes, such as mission delivery speed, decision quality and administrative burden reduction, then using those results to communicate wins that inspire broader adoption.
Managing the human side: Making federal employees partners, not subjects
Federal employees face a uniquely complex set of concerns around AI that their private-sector counterparts do not. Job security anxieties are amplified by public scrutiny and media coverage that shape what can and cannot be mandated. Civil service protections mean that resistance is difficult to overcome through traditional management levers.
This is precisely why the volunteer model matters so much in government. When employees are invited into the transformation as active contributors, rather than subjected to it as passive recipients of technology mandates, the dynamic fundamentally shifts. They become partners in solving strategic challenges versus targets of a change program.
Agencies that succeed at this consistently focus on:
Transparency about the purpose of AI adoption, connecting it explicitly to mission outcomes that employees care about, not to cost reduction or headcount efficiency.
Peer-to-peer communication channels where employees share AI discoveries with each other, rather than relying solely on top-down communications that can feel like mandates.
The choice isn’t stability or transformation – it’s both
The federal agencies that will be at the forefront of AI adoption are not necessarily those with the largest technology budgets or the most sophisticated tools. They are the ones that will implement an operating model that makes strategic AI implementation actually happen – running operational excellence and transformational change simultaneously, activating the energy of their workforce through voluntary participation and measuring success by mission impact over adoption metrics.
The DOS is not a silver bullet. It requires sustained leadership commitment, deliberate coalition-building, and a genuine willingness to create space for experimentation and failure. But it is the most proven framework available for organizations that need to change fast without breaking what already works.
Federal agencies do not have the luxury of choosing between stability and transformation because the AI Action Plan demands both.
Laurin Parthemos is the public sector lead at Kotter.
© 2026 Federal News Network. All rights reserved.