Data is one of an organization’s most valuable assets, yet it is also one of its most vulnerable, and AI is introducing new risks. One principle remains clear, say Greg Campanella and Ken Feinstein of consultancy J.S. Held: data authenticity and integrity are foundational to AI deployment and long-term value.

Structured data has become highly valued in the digital world, and accordingly, the risk of data manipulation and related fraud has increased. AI has enabled threat actors like terrorist groups and cybercriminals to create deepfakes and more easily gain access to environments that hold sensitive personal information. While methods existed to make fake data appear authentic before the advent of AI, new technologies have made it harder to distinguish between real and deceptive data.

Business email compromise scams, BYOD (bring your own device) policies and falsified electronic documents, for example, pose significant risks to businesses. The consequences of data integrity failures can be severe: costly investigations, litigation, reputational damage and operational disruptions.

Compliance challenges in a fragmented regulatory landscape

All organizations that do business in Europe must consider the GDPR. Since its implementation in 2018, the regulation’s primary goal has been to protect the personal data and privacy of individuals within the European Union.

However, the European Commission (EC) recently voted favorably on a digital omnibus that would streamline rules on AI, cybersecurity and data. The package rewrites EU privacy laws and simplifies compliance, lowering administrative costs; companies would save an estimated €5 billion (nearly $6 billion) by 2029, according to projections. While details surrounding rules and timelines are still being fleshed out, most provisions are due to take effect by early August.

For the GDPR, the EC says the omnibus package proposes to modernize cookie rules. It also aims to simplify certain obligations for businesses and organizations “by clarifying when they must conduct data protection impact assessments and when and how to notify data breaches to supervisory authorities.”

Ensuring data is encrypted and secure is critical, but companies should note GDPR enforcement varies across countries and should develop data policies based on local and regional laws.

Furthermore, the EC is proposing amendments to the EU’s AI Act. Those amendments will include simplified technical documentation requirements for small and medium-sized enterprises. A proposed amendment would also ease compliance measures, allowing innovators to use regulatory sandboxes. The EC also notes that the omnibus package will introduce “a single-entry point where companies can meet all incident-reporting obligations.” Currently, companies must report cybersecurity incidents under several laws.

In contrast, the US has not enacted a comprehensive federal data privacy law. Instead, data privacy is governed by state-level laws and sector-specific laws at the federal level.

However, the Trump Administration is moving to block the patchwork of state laws regulating AI. In late 2025, the administration announced an executive order that directs the US attorney general to establish an AI litigation task force to challenge state AI laws that it deems harmful to innovation or that create costly compliance requirements. The order specifically calls out Colorado’s law, highlighting its prohibition on “‘algorithmic discrimination’” as an example of harmful state overreach, arguing that such provisions compel companies to embed ideological bias and generate false results. California will arguably feel the greatest impact of the order, as it is home to 33 of the world’s highest-grossing privately held AI companies and has enacted more AI laws than any other state. The order targets California laws that require creators of AI to offer tools to help users identify AI-generated content and mandate high-level transparency regarding the data used to train models.

As AI plays a growing role in data storage and management, the US Cybersecurity and Infrastructure Security Agency (CISA) issued guidance that recommends sourcing data from trusted providers, tracking its provenance, maintaining logs of origin and system flow and using cryptographically signed provenance databases and digital signatures to ensure integrity and prevent tampering.
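The logging practices CISA describes can be made tamper-evident with standard cryptographic primitives. Below is a minimal, hedged sketch of a hash-chained provenance log in Python: each entry incorporates the hash of the previous one, so altering any earlier record invalidates every later link. This is an illustration of the general technique, not CISA’s reference design; production systems would add digital signatures and durable storage.

```python
import hashlib
import json

def append_record(log, record):
    """Append a provenance record, chaining it to the hash of the
    previous entry so later tampering is detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    log.append({"record": record, "prev": prev_hash, "hash": entry_hash})
    return log

def verify_log(log):
    """Recompute every link in the chain; returns False if any entry
    was altered after it was written."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_record(log, {"source": "vendor-feed-A", "event": "ingested"})
append_record(log, {"source": "vendor-feed-A", "event": "transformed"})
assert verify_log(log)

# Rewriting an earlier record breaks every subsequent link.
log[0]["record"]["source"] = "forged-source"
assert not verify_log(log)
```

Because each hash depends on all prior entries, an auditor who trusts only the final hash can detect modification of any record in the history.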

AI inaccuracies, corporate liability & why governance matters

Within this fragmented regulatory environment, the risks of improper AI use are far from theoretical. Consider a Canadian case in which a passenger used a chatbot on an airline’s website and received inaccurate information about pricing. In court, the airline contended that it bore no responsibility for information provided by its chatbot. The tribunal rejected that argument and ordered the airline to issue a refund to the passenger.

This case underscores that AI‑enabled interactions are increasingly treated as formal corporate communications, creating exposure when automated outputs diverge from policies. These developments are prompting a closer internal focus on oversight, documentation and cross‑functional coordination, particularly as compliance teams evaluate the reliability of AI‑generated content and its implications across customer engagement, investigations and disputes.

Given these risks, authenticating electronically stored and transmitted information is crucial, particularly for companies with cross-border operations, complex supply chains or significant financial exposure. To mitigate these risks, companies can establish robust internal governance policies, implement proactive privacy and security protocols and ensure employees are aware of emerging threats and historical methods of information manipulation.

Protecting IP through AI governance

Data is often the most fundamental component of AI performance, offering unique opportunities for asset recognition, protection, valuation and monetization. This is especially relevant, given that data integrity directly supports the value of a company’s IP. For compliance teams and their outside counsel, the IP dimension of data handling and AI governance carries meaningful regulatory, contractual and litigation risk.

Weak provenance controls, unclear licensing rights and undocumented data flows can undermine internal controls, complicate regulatory responses and create issues in transactions and investigations where organizations must demonstrate that sensitive or proprietary data was collected, used and safeguarded appropriately. This includes ensuring that cross‑border data‑use and licensing agreements clearly define and restrict the use of proprietary or third‑party datasets in AI development, as improper use can trigger contractual violations and expose trade‑secret vulnerabilities. The data and computational resources that enable AI model training can themselves constitute protectable trade secrets, and the algorithms used for learning, prediction and generation may also be protected through trade secret, copyright or patent regimes. Because information used to train large language models loses confidentiality and exclusivity once processed, these risks extend to downstream enforcement and licensing. Strict access controls are therefore essential when handling finite or one‑time‑use data.

Proprietary data requires robust protection beyond watermarking for tracking purposes, starting with a clear understanding of the data room and continuous monitoring of when and how data exits that controlled environment. Emerging solutions include secure user-controlled environments that enable data owners to share and license access without transferring or duplicating the underlying asset. These environments enable third parties to perform analysis, validation or model training within a contained framework, preserving confidentiality and ensuring compliance with use and licensing terms.

This is especially critical during high-stakes transactions such as M&A, where sensitive data must be validated without direct exposure. In these scenarios, third-party validators can conduct due diligence within secure environments, maintaining data integrity while facilitating commercial engagement. By combining traceability, containment and controlled access, companies can protect proprietary data while unlocking its full economic potential.

Ultimately, companies that aim to derive value from their data assets must first understand what data they hold, how it is accessed and the obligations or restrictions attached to it, particularly where data‑use rights and licensing terms create compliance exposure. To balance opportunity with risk, companies must rely on secure environments that support data protection and controlled licensing.

Strengthening data authentication in the AI age

As AI reshapes how information is created and shared, the ability to verify data authenticity has become a cornerstone of both trust and compliance. Digital forensics plays a critical role in litigation, regulatory investigations and corporate transactions.

With AI tools creating digital images, documents and, in some instances, hallucinations, it is critical for those seeking to collect, evaluate and present data as evidence in litigation to authenticate the origin of the data they rely on. For compliance teams, the ability to authenticate data is increasingly tied to regulatory expectations, influencing how organizations document controls, substantiate reporting and demonstrate adherence to privacy and security requirements. Moreover, as AI-generated materials become harder to distinguish from authentic data, ensuring that systems handling sensitive information follow defensible, auditable procedures is becoming a core compliance concern.

Forensic computer images, for example, use hash verification to ensure that the information captured is not altered after imaging. The evolution of AI, resulting in more realistic documents and images, has outpaced early AI-generated content-detection tools. Organizations must therefore adopt a layered approach to authentication, combining forensic validation with corroboration from independent sources and third-party attestations. This is particularly vital during M&A transactions, where companies inherit not only data but also governance frameworks (or gaps in them). Assessing inherited privacy, security and compliance structures, eliminating redundancies and aligning practices with global standards are critical steps to mitigate risk.
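The hash-verification step described above is straightforward in practice: a cryptographic digest is recorded at acquisition and recomputed before the image is relied on. The Python sketch below illustrates the idea with a stand-in temporary file (the path is hypothetical; a real workflow would hash the acquired forensic image and record the digest in the chain-of-custody documentation).

```python
import hashlib
import os
import tempfile

def sha256_of_file(path, chunk_size=1024 * 1024):
    """Stream a file through SHA-256 in chunks so large disk
    images never need to fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Stand-in for an acquired image; in practice this path would
# point at the forensic copy produced by the imaging tool.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"disk image bytes")
    image_path = f.name

acquisition_hash = sha256_of_file(image_path)   # recorded at imaging time

# Later, before presenting the image as evidence, recompute and
# compare; any mismatch means the copy was altered.
assert sha256_of_file(image_path) == acquisition_hash

os.unlink(image_path)
```

A matching digest demonstrates bit-for-bit identity with the original acquisition, which is what makes the image defensible in litigation.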

This material was adapted with permission from an article first published by J.S. Held.