Behavioural risks cover goal misalignment, specification gaming, misinterpretation of human intent, and deceptive conduct. The paper notes that an AI agent tasked with maximising system uptime might choose to disable security updates to avoid reboots, fulfilling its objective while undermining protective controls.

Structural risks stem from tightly linked agents, tools, and data pipelines. The guidance describes how relatively minor orchestration errors can lead to repeated replanning, increased tool calls, resource strain, and cascading failures. In some scenarios, outputs based on hallucinated or incorrect information from one agent can be treated as valid inputs by others. Third‑party components add further risk where tools are misconfigured, impersonated, or allowed to load untrusted code.

The document also points to accountability challenges when multiple agents collaborate on tasks, such as approving payments or updating records. Opaque internal reasoning and fragmented logging can make it difficult to reconstruct how a specific outcome occurred or where responsibility sits.