Executive summary

It will come as no surprise to many that “-maxxing”—the internet’s viral shorthand for obsessive, algorithmically mediated optimization—now features in AI industry debates. “Token-maxxing” has emerged as a meme within firms that encourage employees to maximize their AI usage, as measured by consumption of tokens, or units of AI compute. “Outcome-maxxing” is a seemingly sensible response, following the logic that firms should optimize for what AI produces, not how much of it is consumed.

But neither approach engages with a more fundamental question warranted by a technology that is rapidly transforming human activity at an unprecedentedly intimate layer of cognition: Is generative AI being deployed in ways that maximize human interests? 

Cognitive agency and ‘context’ 

This working paper argues that generative AI should be assessed by whether it supports cognitive agency: the capacity for people to think and act with AI in ways that increase their control, efficacy, and mastery. This account combines foundational work in development economics, which identifies agency as central to human and sustainable development outcomes, with the latest research on socio-cognitive and informational agency to capture an emerging cognitive partnership between people and AI systems.

Current evidence suggests that dominant proprietary AI deployment architectures are associated with erosion of cognitive agency rather than its expansion. By default, or by design, vendor-controlled AI platforms capture user-AI interaction data (also known as “context”) while limiting user control over how that context is stored, organized, or reused. In a growing parallel body of research, however, evidence suggests that generative AI use can increase cognitive agency when users exercise greater control over their context. These contrasting signals motivate further research into the relationship between AI deployment architectures and cognitive agency. 

Introducing context-maxxing 

In January 2026, OpenClaw—viral open-source software that allows users to interact with AI models within their own computing environments—introduced a fundamentally different deployment paradigm for generative AI and offered a glimpse of what it could mean to center people’s interests within it. Using OpenClaw and other open-source agent harnesses, millions of people globally—including many non-technical users—are now exercising greater control over the information that they bring to their interactions with AI.

This paper codifies common elements of this emerging AI deployment paradigm as “context-maxxing,” or using self-managed hardware and software to maximally control user-generated context in interactions with generative AI. The paper identifies two preliminary, mutually reinforcing components of context-maxxing: 

Five user-controlled digital infrastructure building blocks: (1) an open-source agent harness; (2) LLM access to multiple providers via Application Programming Interface (API); (3) a “context web” of human- and machine-readable information distributed across users’ knowledge management applications; (4) security protocols; and (5) persistent hosting (see Figure 1).
Three reinforcing competencies for cognitive efficacy and mastery with AI: Specification (codifying domain knowledge into reusable context assets), Orchestration (deploying that context through structured human-AI workflows optimized for integrity and verifiability), and Exploration (reinvesting efficiency gains into new frontiers of value creation; see Figure 2). 

Figure 1. Two generative AI deployment infrastructures in 2026

Note: Figure depicts proprietary platform user interfaces (left) versus context-maxxing (right). In context-maxxing, an open-source harness allows users to more meaningfully control and accumulate context within a self-managed computing environment (blue space). Proprietary platform user interfaces (grey space) typically limit user control over context by default or by design. Dotted-line boxes denote features of the AI deployment infrastructure that users can reliably control. Arrows denote flow of information between infrastructure building blocks. Proprietary platform user interfaces increasingly offer direct integration with user-controlled applications (e.g., email, storage, calendar), denoted by the dotted arrow lines.

Figure 2. Three reinforcing competencies of context-maxxing

Paper contributions 

The working paper is structured in three parts. It begins by analyzing existing deployment paradigms and cognitive agency (Part 1), before presenting a context-maxxing playbook for policymakers and decision-makers working on or with generative AI (Part 2). Focus areas for ongoing research and public policy development are outlined in Part 3.

Implications and future directions 

This work suggests that computing environments that prioritize user control over context could lay the foundation for entirely new human expertise and capabilities for shared problem-solving. This in turn raises the question of how public investment and regulation can lower barriers to context-maxxing for people everywhere—including in sustainable development contexts, where the infrastructure and resources to do so remain unevenly distributed.

This paper adopts the language of “context-maxxing” not only to meet an emerging industry discourse on its own terms, but also as a deliberate gesture toward the need to complement traditional public policy research and analysis with more agile approaches that are at once technically rigorous and culturally and politically legible. In this spirit, the OpenClaw movement motivates innovation in policymaking itself: a shift from analysis and synthesis toward experimental prototyping, whereby policymakers are now better able than ever to directly cultivate and test working models of the interventions they seek to endorse, and to grow them at the edge of the social systems where generative AI is being deployed.

Download the full working paper.