By Tanusha Tyagi

The Right to be Forgotten (RTBF) is a concept born of the understanding that individuals should have control over their personal data. It seeks to protect the privacy rights of individuals in the digital age. While RTBF has gained significant traction in the digital space, the advent of Artificial Intelligence (AI)—which relies on vast datasets to learn, optimize and evolve—has made data removal increasingly complex and, in many cases, technically impossible.

Data deletion is at the forefront of many juridical discussions on the Right to Be Forgotten. While deletion is often assumed to be a clear and simple process, it faces many practical challenges in real machine learning environments. Given the rapid advancement of AI and the privacy concerns it raises, it is crucial to examine how the Right to be Forgotten must evolve.

This piece argues that the challenge is no longer whether AI systems store personal data, but how they internalise it. In the age of AI, forgetting is no longer a matter of simple deletion. Even after data is ostensibly removed, its influence may persist within model behaviour, enabling the reconstruction or inference of personal information in subtle and indirect ways. Accordingly, there is a need to rethink what ‘erasure’ should mean in the age of AI.

Conceptual Foundations of the Right to Be Forgotten  

The right to be forgotten consists of two dimensions: the right to forget and the right to delete (including the right to object). The right to forget means ensuring individuals are not permanently bound to their past, thereby safeguarding human dignity by allowing the possibility of being “forgotten”. Where a data subject does not wish their personal data to be processed or stored by a data controller and there is no legitimate reason for maintaining such data, it should no longer be made accessible to the public.

In the digital era, RTBF gained prominence through European jurisprudence, particularly the case of Google Spain v. González, where the Court of Justice of the European Union held that search engines could be required to delist outdated or irrelevant personal information. This principle was later codified in Article 17 of the General Data Protection Regulation (GDPR), which established erasure as a legally enforceable right.

The Right to Erasure under India’s DPDP Act

India’s conversation on the Right to Be Forgotten (RTBF) is no longer speculative. With the enactment of the Digital Personal Data Protection Act, 2023 (DPDP Act), the right to erasure has moved firmly into statutory territory. Section 12(3) of the DPDP Act grants Data Principals the right to request the erasure of personal data when it is no longer necessary for the purpose for which it was processed, or when consent has been withdrawn. On paper, this right appears straightforward: data collected must, upon request, be deleted.

However, when this objective is applied to AI systems, particularly large-scale machine learning systems, it becomes evident that the challenge is not one of legal compliance alone, but of policy design. AI systems do not “store” data in the conventional sense. Instead, they learn from data, internalising patterns that shape future outputs. Once trained, AI systems no longer rely on the original datasets in a way that allows for straightforward deletion. As a result, the policy expectation of erasure collides with the technical reality of how AI models function.

The result is a widening gap between regulatory expectations and technical feasibility, one that, if left unaddressed, could either weaken privacy protections or discourage responsible AI development in India.

Machine Unlearning: The Legal-Technical Fault Line

It is precisely at this fault line that the emerging concept of machine unlearning becomes relevant. Machine unlearning enables a trained AI model to selectively forget specific data points that are no longer relevant or may introduce bias. Broadly, two approaches emerge. The first is exact unlearning, which requires deleting the data and retraining the model from scratch. While legally appealing, this approach is practically unviable for large-scale models. Retraining systems such as large language models (LLMs) is computationally expensive, environmentally unsustainable, and economically prohibitive. Requiring exact unlearning for every erasure request would make AI deployment legally fragile and operationally impossible.

The second approach is approximate unlearning, where algorithms adjust the model’s internal weights to reduce the influence of specific data. While more feasible, this method offers no absolute guarantee of erasure. At best, it lowers the probability that the model will reproduce or rely on the deleted information.
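
One common family of approximate unlearning techniques fine-tunes the trained model with gradient ascent on the records to be forgotten while continuing ordinary training on retained data. The sketch below, written in Python with PyTorch on a hypothetical toy classifier and synthetic data, illustrates the idea only; it is not a production method and offers no formal guarantee of erasure.

```python
# Minimal sketch of approximate unlearning via gradient ascent on a forget set,
# assuming a small hypothetical classifier and synthetic data. The model takes
# ascent steps on the "forget" examples (raising their loss) while descending
# on retained data, reducing reliance on the forgotten records without a full
# retrain. Illustrative only; no formal erasure guarantee.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic stand-ins for training data: 2 classes, 20 features.
X_retain, y_retain = torch.randn(256, 20), torch.randint(0, 2, (256,))
X_forget, y_forget = torch.randn(8, 20), torch.randint(0, 2, (8,))

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=0.05)

# A few epochs stand in for the original training run.
for _ in range(20):
    opt.zero_grad()
    loss_fn(model(torch.cat([X_retain, X_forget])),
            torch.cat([y_retain, y_forget])).backward()
    opt.step()

# Approximate unlearning: ascend on the forget set, descend on retained data.
for _ in range(10):
    opt.zero_grad()
    forget_loss = loss_fn(model(X_forget), y_forget)
    retain_loss = loss_fn(model(X_retain), y_retain)
    (retain_loss - forget_loss).backward()  # minus sign = ascent on forget set
    opt.step()

print("loss on forgotten records:", loss_fn(model(X_forget), y_forget).item())
print("loss on retained records:", loss_fn(model(X_retain), y_retain).item())
```

The point of the sketch is that forgetting here is a matter of degree: the loss on the forgotten records rises and the model’s reliance on them diminishes, but nothing certifies that their influence is entirely gone.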

From a policy perspective, this raises a necessary question: should regulatory compliance require technical perfection, or reasonable effectiveness?

Auditability Challenges in AI Erasure 

Aggressive erasure mandates built around machine unlearning also risk overcorrection. Forcing AI developers to aggressively remove data influence can degrade model performance, introduce bias, or undermine reliability for all users. In governance terms, this creates a classic policy trade-off between individual rights and systemic functionality.

Moreover, verification itself becomes problematic. Demonstrating that a model has “forgotten” certain data may require retaining reference information for testing purposes, undermining data minimisation principles. These tensions suggest that rigid erasure mandates may paradoxically weaken privacy outcomes rather than strengthen them.
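
To illustrate the verification problem, the sketch below shows a crude, loss-based audit in the spirit of membership-inference testing. The model, data, and threshold are all hypothetical; the key observation is simply that the auditor must retain a copy of the supposedly erased record in order to test anything at all.

```python
# Minimal sketch of why verifying "forgetting" can require retaining the very
# data that was meant to be erased. A loss-based check (a crude membership-
# inference-style audit) compares the model's loss on the erased record with
# its loss on records it never saw: if the erased record's loss is no longer
# unusually low, its influence has plausibly been reduced. All values here are
# hypothetical and for illustration only.
import torch
import torch.nn as nn

torch.manual_seed(1)

model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
loss_fn = nn.CrossEntropyLoss()

# The auditor must keep a copy of the erased record to run the test at all.
erased_x, erased_y = torch.randn(1, 20), torch.tensor([1])
# Fresh records the model was never trained on serve as a reference baseline.
unseen_x, unseen_y = torch.randn(64, 20), torch.randint(0, 2, (64,))

with torch.no_grad():
    erased_loss = loss_fn(model(erased_x), erased_y).item()
    unseen_loss = loss_fn(model(unseen_x), unseen_y).item()

# Heuristic: a much lower loss on the erased record than on unseen data hints
# that the model still "remembers" it; comparable losses suggest otherwise.
print(f"erased-record loss: {erased_loss:.3f}, unseen baseline: {unseen_loss:.3f}")
print("influence plausibly removed" if erased_loss >= 0.8 * unseen_loss
      else "record may still be memorised")
```

The 0.8 threshold is arbitrary and chosen only for illustration; a real audit would need statistically grounded tests, which only deepens the need to retain reference data.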

The pursuit of perfect erasure therefore risks destabilising AI systems in ways that neither law nor policy currently anticipates.

Redefining Erasure for AI Systems

For India, this technological reality necessitates a conceptual shift. If erasure under the DPDP Act is interpreted as the complete elimination of every mathematical trace of personal data, most AI systems would be permanently non-compliant. Such an interpretation would effectively prohibit meaningful AI innovation the moment a single Data Principal exercises their rights.

A more pragmatic and future-oriented approach is to treat erasure as a requirement of non-influence rather than literal deletion. Under this framework, compliance would not require absolute extraction of data from model weights. Instead, it would focus on functional outcomes: ensuring that the individual’s data is removed from training datasets, that models are adjusted to prevent retrieval or inference of that data, and that decisions about the individual are no longer made on the basis of prior information.

This notion of functional erasure aligns better with technical feasibility while preserving the normative core of privacy protection.

International Approaches to AI and Erasure

Jurisdictions beyond India are grappling with the same dilemma. The European Union, despite its strong RTBF framework under the GDPR, has yet to provide concrete guidance on how erasure applies to trained AI models. Courts and regulators increasingly acknowledge that deletion obligations cannot be mechanically applied to AI systems without context-sensitive interpretation.

India has an opportunity to lead in this space by adopting a pragmatic, innovation-aware approach that balances privacy protection with technological feasibility—a consideration that is particularly important for a country seeking to build domestic AI capacity at scale. India has already started to move in this direction with the release of the AI Governance Guidelines, which highlight trust, human-centric AI, fairness and equity, accountability, safety and resilience, and innovation as foundational principles for India’s AI ecosystem. By anchoring RTBF to these core principles, India can chart a governance model that protects individual privacy without stifling AI innovation and adoption.

Conclusion: Forgetting, Reimagined

The Right to Be Forgotten was conceived for a digital world that assumed memory could be erased. Artificial Intelligence exposes the limits of that assumption. In India, the challenge is not whether RTBF should apply to AI—it must—but rather how it should be interpreted without rendering lawful AI development impossible.

Machine unlearning reveals that forgetting in AI is not a binary act but a gradual reduction of influence. Recognising this reality does not weaken privacy rights; it strengthens them by anchoring law in technological truth. The future of RTBF lies not in demanding that machines forget perfectly, but in ensuring that they no longer remember in ways that matter.

About the author: Tanusha Tyagi is a Research Assistant at the Centre for Digital Societies, Observer Research Foundation.

Source: This article was published by the Observer Research Foundation.