When a deepfake video of Rashmika Mandanna went viral in 2023, it did more than shock the media; it exposed a glaring vulnerability in India’s legal landscape. The sophisticated face-swap, which seamlessly superimposed her likeness onto another person’s body, is the kind of fabrication that is now only a prompt away. Personality rights protect an individual’s name, likeness, image, voice, and broader persona. As deepfakes make it possible to fabricate startlingly realistic images, the potential for abuse, such as fake endorsements and identity theft, has never been higher.
Within a short span, a surge of celebrity cases emerged. Aishwarya Rai Bachchan approached the Delhi High Court against pornographic deepfakes and unauthorised merchandise, Karan Johar sought injunctions against AI-generated endorsements, and legendary singers like Kumar Sanu and Asha Bhosle filed suits to protect their vocal distinctiveness. These cases reflect a deeper crisis: Generative Adversarial Networks (GANs) can now produce hyper-realistic synthetic media, while India’s legal framework, a patchwork of IPR, personality rights, and tort, struggles to accommodate the resulting wave of claims to protect the personas of these prominent figures.
The Deepfake Threat
To create a deepfake, an AI model relies on hundreds of images, videos, and audio samples of the target. The system analyses patterns in speech, voice, and facial expressions. By leveraging a GAN, in which a generator network fabricates synthetic content while a discriminator network learns to tell it apart from real samples, it produces new content that mimics the original person’s appearance and features.
[Image: how deepfakes are made. Source: https://www.realitydefender.com/insights/how-deepfakes-are-made]
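For readers curious about the mechanics, below is a minimal sketch of the adversarial training loop at the heart of a GAN, written in PyTorch. It is not any real deepfake pipeline: production face-swap systems use far larger encoder-decoder or diffusion architectures, and every layer size and hyperparameter here is an arbitrary assumption chosen for brevity.

```python
# A minimal, illustrative GAN training loop in PyTorch. NOT a real deepfake
# pipeline; all layer sizes and hyperparameters are arbitrary assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 100   # size of the random noise vector fed to the generator
IMG_DIM = 64 * 64  # a flattened 64x64 grayscale image, for simplicity

# Generator: maps random noise to a synthetic image.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: estimates the probability that an image is real.
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

def training_step(real_images: torch.Tensor) -> None:
    """One adversarial round: the discriminator learns to separate real
    images from fakes; the generator learns to fool the discriminator."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1. Train the discriminator on real and generated images.
    fake_images = generator(torch.randn(batch, LATENT_DIM)).detach()
    d_loss = (loss_fn(discriminator(real_images), real_labels)
              + loss_fn(discriminator(fake_images), fake_labels))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Train the generator so the discriminator labels its output "real".
    g_loss = loss_fn(discriminator(generator(torch.randn(batch, LATENT_DIM))),
                     real_labels)
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# Example: one adversarial step on a dummy batch of "real" images in [-1, 1].
training_step(torch.rand(32, IMG_DIM) * 2 - 1)
```

The design point worth noting is the arms race: as the discriminator improves at spotting fakes, the generator is forced to produce ever more realistic output, which is precisely what makes deepfakes so difficult to detect and regulate.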
For celebrities, the harm has two dimensions: commercial and reputational. Perpetrators can use deepfake technology to create fake endorsements, as seen in Karan Johar’s case. Beyond the commercial, the crisis has a starkly gendered dimension. According to a UN Women report, 90-95% of all deepfakes are non-consensual, with around 90% depicting women. Deepfake pornography makes up 98% of deepfake videos online, and 99% of those feature women. These statistics reflect what scholars call technological patriarchy: AI weaponised to perpetuate gendered violence and control.
Personality Rights and Deepfakes in the Indian Context
Theoretical Context
In India, personality rights have been recognised by the judiciary rather than through statutory codification. Indian courts grant protection for personality rights on the basis of constitutional interpretation and judicial precedent. The conceptual foundation for these rights rests on two dimensions:
- Right to publicity: The commercial control over one’s name, image, voice, signature, and likeness.
- Right to privacy: The protection of personal autonomy, including informational privacy, as elaborated in the Puttaswamy case.
However, this dual foundation still leaves a question unanswered:
Is personality protection a derivative of privacy, a species of IPR, or an autonomous right?
Case Laws
Judicial interpretation has gradually expanded Indian courts’ understanding of personality rights. In Rajagopal v. State of Tamil Nadu ((1994) 6 SCC 632), the Supreme Court laid the foundation for the right to privacy on which subsequent personality-rights claims have relied.
In Titan Industries Ltd. v. Ramkumar Jewellers (2012:DHC:2845), the Delhi High Court recognised celebrities as the ‘first owners’ of their personality rights. Further, the court noted that a celebrity has the authority to control the use of their image. While this case is relevant to false endorsements, it does not appear directly applicable to AI-generated deepfakes, which complicate proof of authorship and liability. In D.M. Entertainment Pvt. Ltd. v. Baby Gift House (CS(OS) 479/2002 at DHC), the court applied the principles of passing off to personality misappropriation and acknowledged both financial and reputational harm to celebrities.
The recent popularity of deepfakes among the masses has accelerated judicial innovation. Anil Kapoor v. Simply Life India (CS(Comm) 652/2023 at DHC) became India’s first comprehensive personality rights case of the deepfake era. Justice Prathiba M. Singh, who presided over the case, issued a pioneering John Doe order restraining 16 entities, and the world at large, from misusing the veteran actor’s name, image, voice, and iconic catchphrase ‘Jhakaas’ through AI-generated GIFs, emojis, and videos.
Similarly, in Amitabh Bachchan v. Rajat Nagi (CS(Comm) 819/2022 at DHC), the court passed a John Doe order against fake Kaun Banega Crorepati lottery scams. In Aishwarya Rai Bachchan v. Aishwaryaworld.com (CS(Comm) 956/2025 at DHC), the court granted protection against the dissemination of pornographic deepfakes and illicit merchandising, noting that it would not turn a blind eye to such exploitation.
All these cases involving Anil Kapoor, Amitabh Bachchan, and Aishwarya Rai Bachchan are currently pending before the Delhi High Court for further proceedings.
Statutory Protection
The Information Technology Act, 2000, contains several relevant provisions, including Sections 66E, 67, and 67A. While these provisions cover privacy violations, obscenity, and sexually explicit content, they do not directly address synthetic content creation or algorithmic manipulation. One can also invoke Section 79 of the Bharatiya Nyaya Sanhita, 2023, which penalises a word, gesture, or act intended to insult a woman’s modesty.
The Digital Personal Data Protection Act, 2023, introduces several relevant principles by mandating explicit and informed consent for the collection and processing of personal data under Section 6. Sections 12 and 13 give data principals control over their personal data, including rights to correction, erasure, and grievance redressal. While the Act imposes other obligations as well, its enforcement mechanisms remain nascent, its practical application to AI-generated synthetic media is untested, and the understanding of what constitutes “processing” is still evolving.
Copyright and trademark laws provide limited protection. The Copyright Act, 1957, protects authors, not the subjects of a work, so when a deepfake appropriates someone’s image or likeness, the victim has no copyright standing. Section 57 does provide a moral right against distortion, but its applicability to personality appropriation is complicated. Sections 38 and 38A grant performers certain rights, but do not address AI synthesis of their personas. The Trade Marks Act, 1999, permits registration of names and catchphrases and enables passing-off claims when unauthorised use creates false endorsements; however, this protection is narrow, commercially focused, and inaccessible to ordinary citizens.
Conclusion
Overall, the protection of personality rights remains inadequate, as no statute explicitly recognises them as distinct legal interests deserving protection against AI threats. Existing laws address peripheral issues such as privacy violations, obscenity, and trademark passing off, but not the unauthorised appropriation and manipulation of one’s digital identity by AI. Moreover, judicial protection has so far extended mainly to celebrities, leaving common citizens unprotected.
India requires explicit codification of personality rights, distinct from general notions of privacy and IP. Such protection must be universal, not an elite privilege reserved for celebrities, and must include AI-specific provisions addressing the unique challenge of deepfakes. Beyond legislative action, institutional capacity-building, such as specialised units and infrastructure to detect deepfakes, along with legal aid programmes, would help tackle this issue.
As AI continues to erode the distinction between the real and the synthetic, it risks normalising personality theft as a routine feature of digital life. The deepfake crisis exposes a legislative lacuna: there is neither comprehensive personality-rights protection nor any law tailored to AI-created realities. While courts have stepped in to protect these rights, judicial interpretation cannot substitute for legislative clarity and systematic protection. Statutory codification must incorporate proactive measures and universal entitlements, not celebrity privileges, with particular attention to protecting women.