
While organizations appear optimistic about implementing agentic AI, most adopters do not yet seem ready to unleash a fully autonomous agent equipped to make decisions on its own.

A psychological theory proposed in the 1980s may be key to understanding how to realize greater autonomy with AI agents.

Transactive memory systems (TMS) can be used to explain how these agents can work together as a “group mind” rather than a collection of silos. That is because in an agentic world, we don’t just have one smart chatbot; we have a network of specialized agents. TMS provides the blueprint for moving from isolated bots to a cohesive “digital workforce.”

Agentic AI and Organizational Memory: “Who Knows What”

In organizations, TMS relies on three pillars: Specialization, Credibility, and Coordination. These translate directly into the architecture of multi-agent AI systems.

Specialization: Just as a hospital team knows the surgeon handles the scalpel and the anesthesiologist handles the vitals, agentic AI uses a “Directory Service.” One agent doesn’t need to know everything; it only needs to know which peer agent is the expert in “SQL generation” or “Market Analysis.”
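A minimal sketch of what such a directory service might look like in code, with the expertise labels and agent names as illustrative assumptions:

```python
# A minimal sketch of a TMS-style "Directory Service": a registry mapping
# expertise labels to agents, so any agent can look up "who knows what"
# without knowing everything itself. All names here are illustrative.

class AgentDirectory:
    def __init__(self):
        self._experts = {}  # expertise label -> agent name

    def register(self, expertise: str, agent: str) -> None:
        """Record that `agent` is the designated expert for `expertise`."""
        self._experts[expertise] = agent

    def who_knows(self, expertise: str) -> str | None:
        """Return the expert agent for a topic, or None if unassigned."""
        return self._experts.get(expertise)

directory = AgentDirectory()
directory.register("SQL generation", "sql_agent")
directory.register("market analysis", "analyst_agent")

# An orchestrator consults the directory instead of guessing:
print(directory.who_knows("SQL generation"))  # -> sql_agent
```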

Credibility: In TMS, you value a peer’s input based on their track record. Agentic AI uses this to resolve conflicts. If a “Coding Agent” and a “Generalist Agent” disagree on a script, the system’s TMS logic gives higher weight (credibility) to the specialist.
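A hedged sketch of this credibility logic, using made-up track records to show how a specialist’s answer outweighs a generalist’s:

```python
# A sketch of credibility-weighted conflict resolution: when two agents
# disagree, weight each answer by that agent's historical accuracy on the
# task type. The track-record numbers below are made up for illustration.

from collections import defaultdict

class CredibilityLedger:
    def __init__(self):
        # (agent, task_type) -> [successes, attempts]
        self._record = defaultdict(lambda: [0, 0])

    def log(self, agent: str, task_type: str, success: bool) -> None:
        stats = self._record[(agent, task_type)]
        stats[0] += int(success)
        stats[1] += 1

    def credibility(self, agent: str, task_type: str) -> float:
        successes, attempts = self._record[(agent, task_type)]
        return successes / attempts if attempts else 0.5  # neutral prior

def resolve(ledger, task_type, proposals):
    """Pick the proposal from the most credible agent for this task type."""
    return max(proposals, key=lambda p: ledger.credibility(p[0], task_type))

ledger = CredibilityLedger()
for _ in range(9):
    ledger.log("coding_agent", "python_script", success=True)
ledger.log("coding_agent", "python_script", success=False)
ledger.log("generalist_agent", "python_script", success=True)
ledger.log("generalist_agent", "python_script", success=False)

winner = resolve(ledger, "python_script",
                 [("coding_agent", "script A"), ("generalist_agent", "script B")])
print(winner)  # -> ('coding_agent', 'script A'): 0.9 beats 0.5
```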

Coordination: TMS reduces “cognitive load” by automating how information moves. For an AI, this means “passing the context.” Instead of sending the entire chat history to every agent, the system only sends the relevant bits to the right expert at the right time.
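One way to sketch this context-passing, assuming messages are tagged by topic (the tags and messages are illustrative):

```python
# A minimal sketch of TMS-style coordination: instead of forwarding the
# full conversation history to every agent, filter messages by topic tag
# and pass only the relevant slice to the designated expert.

history = [
    {"topic": "sql", "text": "Customers table has a signup_date column."},
    {"topic": "marketing", "text": "Q3 campaign targets new signups."},
    {"topic": "sql", "text": "Use the analytics replica, not production."},
]

def context_for(topic: str, messages: list[dict]) -> list[str]:
    """Return only the messages the expert on `topic` actually needs."""
    return [m["text"] for m in messages if m["topic"] == topic]

# The SQL expert receives two relevant lines, not the whole transcript:
print(context_for("sql", history))
```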

Moving from “Tools” to “Teammates” with Agentic AI

Agentic AI often operates in multi-agent or human-AI teams. TMS could help model how knowledge is distributed across agents or between humans and AI, achieving distributed cognition. In multi-agent systems, each AI agent may specialize (e.g., one handles navigation, another handles dialogue). TMS can be used to achieve role specialization and to set expectations about how such systems coordinate and retrieve knowledge efficiently.

Just as humans assess the credibility of others’ memories, users (or other agents) must assess the reliability of an AI’s stored or retrieved knowledge, especially when the AI is actively curating or referencing external databases. Agentic AI may update its “memory” based on new experiences or interactions, retain context, and recognize patterns across those interactions. TMS provides a framework for how such systems encode, store, and retrieve knowledge in a socially or systemically distributed way. A medical diagnostic AI that collaborates with human doctors can be seen as part of a transactive memory system: the AI stores and retrieves vast literature and case data, while the doctor contributes contextual patient knowledge.
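A hedged sketch of this reliability assessment, using provenance-tagged memory entries with illustrative sources and scores:

```python
# A sketch of provenance-tagged agent memory: each stored item carries
# its source and a reliability score, so humans or peer agents can
# assess how much to trust a retrieved "memory". Sources and scores
# here are illustrative assumptions, not calibrated values.

from dataclasses import dataclass

@dataclass
class MemoryItem:
    content: str
    source: str         # where the knowledge came from
    reliability: float  # 0.0-1.0, updated as items are confirmed or refuted

store: list[MemoryItem] = [
    MemoryItem("Drug X interacts with warfarin", "peer_reviewed_db", 0.95),
    MemoryItem("A forum post claims warfarin is safe with Drug X", "web_forum", 0.40),
]

def retrieve(query: str, min_reliability: float = 0.8) -> list[MemoryItem]:
    """Return matching memories that meet the caller's trust threshold."""
    return [m for m in store
            if query.lower() in m.content.lower()
            and m.reliability >= min_reliability]

print(retrieve("warfarin"))  # only the well-sourced item clears the bar
```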

Mapping AI agents using transactive memory helps shift AI from being an external memory aid (like a static database) to an active team member. This has two impacts. The first is mutual awareness. In a high-functioning human team, I know that you know I’m working on the budget. In agentic AI, this is a shared state: agents subscribe to updates about each other’s progress so they don’t repeat work. The second is dynamic learning. Organizational TMS isn’t static; it evolves as people learn about each other. Modern AI frameworks like LangGraph or AutoGen implement this by allowing agents to “remember” which other agents were successful at specific tasks in the past, effectively building a digital TMS over time.
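A framework-agnostic sketch of that bookkeeping (this is not LangGraph or AutoGen code; agent names and task types are illustrative):

```python
# A sketch of "dynamic learning" in a digital TMS: the orchestrator
# remembers which agent last succeeded at each task type and routes
# future work accordingly. This only illustrates the bookkeeping such
# frameworks can support, not any specific library's API.

class RoutingMemory:
    def __init__(self, default_agent: str):
        self._best = {}          # task_type -> agent that last succeeded
        self._default = default_agent

    def record_outcome(self, task_type: str, agent: str, success: bool) -> None:
        if success:
            self._best[task_type] = agent

    def route(self, task_type: str) -> str:
        return self._best.get(task_type, self._default)

memory = RoutingMemory(default_agent="generalist_agent")
memory.record_outcome("contract_review", "legal_agent", success=True)
print(memory.route("contract_review"))  # -> legal_agent
print(memory.route("unseen_task"))      # -> generalist_agent
```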

Mapping transactive memory within organizations could help with efficient specialization. Instead of one massive model, you deploy specialized agents (e.g., a “Legal Agent,” a “Coding Agent”). TMS provides the framework for a Directory Service where agents know which peer to query for specific expertise. Mapping agentic AI to transactive memory also supports trustworthiness. In human teams, we trust the person with the most experience in a field. In AI implementation, TMS helps design Consensus Mechanisms where agents weight the “opinions” of specialized peers higher than those of generalist ones.
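A minimal sketch of such a consensus mechanism, with illustrative, uncalibrated weights:

```python
# A sketch of weighted consensus: agents "vote" on an answer, and votes
# are weighted by each agent's specialization match, so specialist
# opinions count more than generalist ones. Weights are illustrative.

from collections import defaultdict

def weighted_consensus(votes: list[tuple[str, str, float]]) -> str:
    """votes: (agent, answer, weight). Return the answer with most weight."""
    totals = defaultdict(float)
    for _agent, answer, weight in votes:
        totals[answer] += weight
    return max(totals, key=totals.get)

votes = [
    ("legal_agent",  "clause is non-compliant", 0.9),  # domain specialist
    ("coding_agent", "clause is compliant",     0.2),  # out of domain
    ("generalist",   "clause is compliant",     0.4),
]
# The specialist's 0.9 outweighs the combined 0.6 of the other two:
print(weighted_consensus(votes))  # -> clause is non-compliant
```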

Finally, TMS also allows agents to offload memory and achieve resilience. An “Orchestrator Agent” doesn’t need to store the details of a database schema if it knows exactly which “Database Agent” holds that memory, preventing the “Lost in the Middle” problem found in large context windows. If the specialized “Accounting Agent” goes offline, the system uses TMS-based “Directory Services” to route the task to the next best available resource.
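A minimal sketch of this failover routing, with availability stubbed for illustration:

```python
# A sketch of TMS-based resilience: the directory keeps a ranked list of
# agents per capability, so if the primary expert is offline the task is
# routed to the next best available resource. Names are illustrative.

directory = {
    "accounting": ["accounting_agent", "finance_agent", "generalist_agent"],
}
online = {"finance_agent", "generalist_agent"}  # accounting_agent is down

def route(capability: str) -> str | None:
    """Return the best available agent for a capability, if any."""
    for agent in directory.get(capability, []):
        if agent in online:
            return agent
    return None

print(route("accounting"))  # -> finance_agent (next best after the outage)
```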

Such a synthesis between agentic AI and transactive memory could help design, say, an autonomous supply-chain AI that monitors inventory, predicts demand, and places orders, functioning as an integrated information system that transforms raw data into actionable decisions while coordinating with ERP systems, suppliers, and human managers.
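Under heavily simplified assumptions, that supply-chain scenario might be sketched as an orchestration loop over stubbed specialist agents:

```python
# A toy sketch of the supply-chain scenario: an orchestrator consults a
# demand-forecast agent and an inventory agent, then places an order when
# projected stock falls short. All agents, numbers, and the ERP call are
# illustrative stand-ins, not a real integration.

def forecast_demand(sku: str) -> int:
    return 120  # stub for a demand-prediction agent

def current_inventory(sku: str) -> int:
    return 80   # stub for an inventory-monitoring agent

def place_order(sku: str, quantity: int) -> None:
    print(f"Order placed with ERP: {quantity} units of {sku}")  # stub

def orchestrate(sku: str) -> None:
    """Coordinate specialist agents into one reorder decision."""
    shortfall = forecast_demand(sku) - current_inventory(sku)
    if shortfall > 0:
        place_order(sku, shortfall)

orchestrate("SKU-1234")  # -> Order placed with ERP: 40 units of SKU-1234
```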

Responsibility Gaps with Agentic AI: The Core Problem


Agentic AI systems instantiate transactive memory architectures in which cognition, causation, and control are deliberately decoupled. Responsibility gaps arise not because responsibility disappears, but because organizations continue to assign responsibility at the level of individual agents rather than at the level of transactive cognitive design.

When harms or failures occur in the autonomous coordination of agentic AI within organizations, the chief issue is attributing responsibility. If no single human intended the outcome, and no single agent caused it in isolation because the system acted through distributed, delegated cognition, we get a familiar verdict: “Everyone contributed, but no one is responsible.”

Responsibility in agentic AI should attach not to agents, but to the design and control of the transactive memory directory. Concretely, this means a transactive memory system should be able to identify the following:

Who designates agentic roles?
Who defines the routing logic?
Who sets the confidence thresholds?
Who decides when humans are bypassed?
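One hedged way to make those answers explicit is an auditable governance log, where every design decision carries a named owner. The record structure, fields, and owners below are illustrative assumptions:

```python
# A sketch of making the transactive memory directory auditable: every
# governance decision (role assignment, routing rule, threshold,
# human-bypass policy) is recorded with a named owner, so responsibility
# attaches to design choices rather than to individual agents.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class GovernanceRecord:
    decision: str      # e.g. "routing_logic", "confidence_threshold"
    value: str         # what was decided
    owner: str         # the accountable human or team (illustrative)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

audit_log = [
    GovernanceRecord("role_assignment", "legal_agent handles contracts", "ml-platform-team"),
    GovernanceRecord("confidence_threshold", "auto-execute above 0.95", "risk-office"),
    GovernanceRecord("human_bypass", "humans bypassed for orders under $500", "ops-lead"),
]

# When a failure occurs, the question "who set this?" has an answer:
for record in audit_log:
    print(record.decision, "->", record.owner)
```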

Agentic AI can be understood not just as an intelligent agent, but as a node in a socio-technical memory and information network. Transactive memory explains its cognitive and collaborative role, while information systems theory explains its architectural and operational role.

Companies such as Salesforce.com have dubbed this the path towards Enterprise General Intelligence (EGI). Nomenclature aside, such a dual lens is especially useful as agentic AI becomes more embedded in collaborative, real-world decision-making, where remembering, knowing, and acting are distributed across humans, algorithms, and systems.

In agentic AI systems, humans can specialize in contextual judgment, ethical considerations, and creative problem-solving while AI agents handle data processing, pattern recognition, and procedural execution, with each party developing meta-knowledge about the other’s capabilities and limitations. This creates a form of distributed cognition: humans know to defer certain queries to AI systems (like complex calculations or information retrieval), while AI systems can flag decisions requiring human judgment (like ethical trade-offs or cultural nuance), effectively multiplying the collective intelligence beyond what either could achieve alone. The key in agentic AI is establishing clear “expertise directories” and communication protocols so all parties, human and artificial, understand not just what needs to be known, but who (or what) in the system is best positioned to know it, remember it, or act on it. That prevents both redundant effort and dangerous gaps in critical decision-making processes.
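A minimal sketch of such an expertise directory with a human-escalation protocol, where the categories and routing rules are illustrative assumptions:

```python
# A sketch of a human-AI expertise directory: procedural query types
# route to AI agents, while flagged decision types defer to a human.
# The category sets below are illustrative, not a validated taxonomy.

AI_HANDLES = {"calculation", "information_retrieval", "pattern_recognition"}
HUMAN_HANDLES = {"ethical_tradeoff", "cultural_nuance", "creative_judgment"}

def route_query(category: str) -> str:
    """Decide whether a query goes to an AI agent or a human teammate."""
    if category in AI_HANDLES:
        return "ai_agent"
    if category in HUMAN_HANDLES:
        return "human_reviewer"
    return "human_reviewer"  # default to human when meta-knowledge is missing

print(route_query("calculation"))       # -> ai_agent
print(route_query("ethical_tradeoff"))  # -> human_reviewer
```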