{"id":19754,"date":"2026-04-28T09:11:12","date_gmt":"2026-04-28T09:11:12","guid":{"rendered":"https:\/\/www.europesays.com\/ai\/19754\/"},"modified":"2026-04-28T09:11:12","modified_gmt":"2026-04-28T09:11:12","slug":"trade-secrets-risk-exiting-a-one-way-door-when-data-is-fed-to-ai","status":"publish","type":"post","link":"https:\/\/www.europesays.com\/ai\/19754\/","title":{"rendered":"Trade Secrets Risk Exiting a One-Way Door When Data Is Fed to AI"},"content":{"rendered":"<p>Trade secret exposure can happen before artificial intelligence systems receive \u201ctraining data.\u201d Once information enters an AI-enabled system, there is no reliable or practical way to fully withdraw it from most large language models or agent-based workflows.<\/p>\n<p>Focusing only on whether an artificial intelligence provider \u201ctrains on user data\u201d misses the more immediate source of trade secret exposure. Most leakage occurs outside formal training pipelines through routine employee use.<\/p>\n<p>Employees and contractors paste source code, design documents, technical specifications, and internal analyses into AI tools to debug errors, summarize materials, or draft work product under time pressure. These disclosures are often informal, undocumented, and repeated across teams. Sensitive information therefore can leave the organization long before any training dataset is involved.<\/p>\n<p>Because trade secret dissemination may be nearly impossible to undo once confidential information leaves controlled channels, the most practical approach rests on three pillars:<\/p>\n<ul>\n<li>Strong front-end controls to reduce disclosure risks<\/li>\n<li>A structured response plan to contain dissemination and document protective efforts<\/li>\n<li>Early engagement with experienced counsel to align governance, contracts, and remediation strategies with evolving AI systems<\/li>\n<\/ul>\n<p>
<img decoding=\"async\" src=\"https:\/\/www.europesays.com\/ai\/wp-content\/uploads\/2026\/04\/1777367472_863_.png\" data-alignment=\"center\" data-size=\"embedded\"\/><\/p>\n<p>Rising AI Agents<\/p>\n<p>Trade secrets typically leak in two ways: through incorporation into a model\u2019s training dataset or through routine interaction with AI tools. In practice, the latter is often the faster and more common pathway.<\/p>\n<p>AI agents and workflow-integrated systems amplify this risk. Unlike a one-off prompt, many agent systems retain memory, logs, embeddings, or intermediate summaries across sessions and connected tools. Information disclosed once can be reused in later outputs, propagated to other systems, or incorporated into downstream processes without the user realizing it.<\/p>\n<p>From a legal standpoint, secrecy can be lost through retention, reuse, or redistribution of information even if the data never enters a formal training dataset.<\/p>\n<p>Workforce surveys suggest the issue is already widespread. Roughly four in 10 employees report entering sensitive workplace information into AI tools without employer authorization, according to <a href=\"https:\/\/www.theregister.com\/2025\/10\/07\/gen_ai_shadow_it_secrets\/\" rel=\"nofollow noopener\" target=\"_blank\">Security Management<\/a>. Disclosure also occurs inadvertently through embedded AI features in software such as grammar assistants, document editors, code completion tools, and collaboration platforms.<\/p>\n<p>Against that backdrop, many organizations ask whether confidential code or documentation can be removed once it enters an AI system. With current technology, the answer is usually no.<\/p>\n<p>Organizations therefore should implement structural safeguards. Sensitive material shouldn\u2019t be entered into consumer-facing AI interfaces or public chatbot links. 
Where AI functionality is necessary, companies should obtain enterprise licenses that provide contractual controls over data use, retention, and auditability and that restrict training or secondary use of customer inputs.<\/p>\n<p>Technically Difficult Deletion<\/p>\n<p>LLMs blend patterns from their inputs into complex internal representations, including model parameters, embeddings, and memory structures. Once confidential material is absorbed into those systems, isolating and deleting a single company\u2019s information may be impossible without dismantling the system itself.<\/p>\n<p>Researchers are exploring machine unlearning and model editing, but these approaches remain experimental. Recent work <a href=\"https:\/\/arxiv.org\/abs\/2410.08827\" rel=\"nofollow noopener\" target=\"_blank\">shows<\/a> that supposedly \u201cunlearned\u201d content can sometimes be partially recovered.<\/p>\n<p>In practice, providers can suppress outputs with filters, but suppression isn\u2019t deletion. The closest remedy is retraining or rebuilding the system on clean data. For frontier models, that can cost tens or hundreds of millions of dollars. The <a href=\"https:\/\/aiindex.stanford.edu\/report\/\" rel=\"nofollow noopener\" target=\"_blank\">Stanford HAI AI Index<\/a> estimates training costs of roughly $78 million for GPT-4 and $191 million for Gemini Ultra.<\/p>\n<p>Development Pipeline Contamination<\/p>\n<p>Courts have recognized similar problems in traditional technology disputes. Once trade secrets are incorporated into complex systems, assurances of non-use are often insufficient.<\/p>\n<p>In <a href=\"https:\/\/www.bloomberglaw.com\/public\/document\/Waymo_LLC_v_Uber_Technologies_Inc_et_al_Docket_No_317cv00939_ND_C\/17?doc_id=X1Q6O4AN5LO2\" rel=\"nofollow noopener\" target=\"_blank\">Waymo LLC v. 
Uber Technologies, Inc.<\/a>, the court focused on whether Waymo\u2019s autonomous-vehicle design information had already been integrated into Uber\u2019s engineering processes. Once incorporated into a development pipeline, the information could influence technical decisions in ways that could not be reliably isolated or reversed. The court treated that contamination as irreparable harm.<\/p>\n<p>Regulators have taken similar positions in AI contexts. In <a href=\"https:\/\/www.ftc.gov\/legal-library\/browse\/cases-proceedings\/192-3172-everalbum-inc-matter\" rel=\"nofollow noopener\" target=\"_blank\">In re Everalbum, Inc.<\/a>, the Federal Trade Commission required deletion not only of improperly obtained biometric data but also of the AI models trained on that data.<\/p>\n<p>Together, these cases reflect a consistent principle: When information is integrated into systems designed to retain and reuse data, the loss of secrecy occurs at incorporation.<\/p>\n<p>NDA, Confidentiality Breakdown<\/p>\n<p>Traditional nondisclosure agreements assume limited disclosure in controlled settings. AI agents and integrated workflows undermine that assumption.<\/p>\n<p>If a system retains prompts, logs interactions, or stores embeddings for reuse, a trade secret may persist beyond the original task and user. A single paste of proprietary code into an AI-enabled workflow can later appear in outputs, be routed into other tools, or be reused across prompts.<\/p>\n<p>What feels like one disclosure can become many disclosures across a toolchain.<\/p>\n<p>An Incomplete Remedy<\/p>\n<p>Takedown requests and formal notices to AI providers still matter and should be handled promptly.<\/p>\n<p>A well-prepared notice can limit further dissemination and preserve contractual and statutory remedies. 
It also creates a record that the company acted promptly to protect its secrecy.<\/p>\n<p>Even so, takedowns rarely restore confidentiality once information leaves controlled channels.<\/p>\n<p>Governance as Protection<\/p>\n<p>In this environment, the most effective protection is governance at the front end. Companies should treat public AI systems as untrusted for sensitive material and prohibit entering source code, specifications, or design documents into public chatbots.<\/p>\n<p>Agreements with employees, contractors, licensees, and AI vendors should address AI use explicitly. Policies should cover agent memory, logs, embeddings, and tool integrations, not just \u201ctraining data.\u201d<\/p>\n<p>Where workflows involve persistent agents, organizations should prohibit feeding confidential material into systems that retain prompts or reuse content across sessions unless the company controls the environment and can enforce deletion and auditability.<\/p>\n<p>This article does not necessarily reflect the opinion of Bloomberg Industry Group, Inc., the publisher of Bloomberg Law, Bloomberg Tax, and Bloomberg Government, or its owners.<\/p>\n<p>Author Information<\/p>\n<p><a href=\"https:\/\/www.venable.com\/professionals\/p\/justin-e-pierce\" rel=\"nofollow noopener\" target=\"_blank\">Justin Pierce<\/a> is co-chair of Venable\u2019s intellectual property division with expertise in artificial intelligence and innovative technologies.<\/p>\n<p><a href=\"https:\/\/www.venable.com\/professionals\/p\/brandon-phemester\" rel=\"nofollow noopener\" target=\"_blank\">Brandon Phemester<\/a> is a patent agent with Venable with expertise in artificial intelligence, biotechnology, pharmaceuticals, and advanced manufacturing.<\/p>\n<p>Write for Us: <a href=\"https:\/\/news.bloombergtax.com\/tax-insights-and-commentary\/author-submission-guidelines-for-bloomberg-tax-law-insights\" rel=\"nofollow noopener\" target=\"_blank\">Author 
Guidelines<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"Trade secret exposure can happen before artificial intelligence systems receive \u201ctraining data.\u201d Once information enters an AI-enabled system,&hellip;\n","protected":false},"author":2,"featured_media":19755,"comment_status":"","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[405,7537,14011,1642,3919],"class_list":{"0":"post-19754","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-agentic-ai","8":"tag-ai-agents","9":"tag-artificial-intelligence-agents","10":"tag-job-training","11":"tag-large-language-models","12":"tag-trade-secrets"},"_links":{"self":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/19754","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/comments?post=19754"}],"version-history":[{"count":0,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/posts\/19754\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media\/19755"}],"wp:attachment":[{"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/media?parent=19754"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/categories?post=19754"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.europesays.com\/ai\/wp-json\/wp\/v2\/tags?post=19754"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}