[This is a guest post by Rudraksh Lakra.]
Introduction
On 10th February 2026, the Ministry of Electronics and Information Technology (“MeitY”) notified the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (“Amendment”). A draft version of this Amendment had earlier been released for public consultation in October. The stated objective of the Amendment is to address the growing misuse of artificial intelligence-generated content, including deepfakes.
The draft Amendment had raised several concerns. Although the final version addresses some of these issues, it ultimately gives rise to more constitutional, policy, and technical questions than it resolves. This blog analyses the Amendment in five parts. First, it examines changes to the content reporting and takedown regime, including new disclosure obligations and significantly shortened compliance timelines. Second, it evaluates the new restrictions on the generation of certain categories of synthetically generated information, with particular attention to concerns of vagueness and overbreadth.
Third, it assesses the revised labelling framework for AI-generated content, highlighting definitional, technical, and comparative regulatory issues. Fourth, it considers whether certain obligations imposed on intermediaries are ultra vires the parent statute. Finally, it examines whether the user declaration and pre-verification requirements amount to an unconstitutional form of prior restraint.
Content reporting and takedown
Rules 3(1)(c)(iii) and 3(1)(ca)(ii)(IV) of the amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules”) introduce an obligation for intermediaries to report certain offences to the appropriate authority. This duty applies where the information relates to the commission of an offence under applicable law and where such offences are subject to mandatory reporting requirements, such as those under the Bharatiya Nagarik Suraksha Sanhita, 2023 and the Protection of Children from Sexual Offences Act, 2012. This addresses a critical gap in the IT Rules, as such reporting can enable the appropriate authority to initiate timely investigations, prevent further harm, and ensure legal compliance. In matters concerning child safety, prompt detection and reporting are particularly vital to stopping ongoing abuse and safeguarding vulnerable individuals.
At the same time, Rule 3(1)(ca)(ii)(III) introduces a separate and more contentious requirement. It obligates intermediaries to identify the alleged violating user and disclose that user’s identity to the complainant, where the complainant is the victim of the contravention or a person acting on the victim’s behalf, in accordance with applicable law. Notably, the provision does not require a prior judicial order before such disclosure is made. The absence of a clear judicial safeguard raises serious concerns about potential misuse and may expose users to harassment, intimidation, retaliation, and doxxing, thereby creating significant risks to privacy and personal safety.
The amendments to the content takedown regime raise the most significant concerns. The timelines for takedown have been considerably shortened. For user complaints relating to significant social media intermediaries, the response timeline has been reduced from 15 days to 7 days. For certain categories of sensitive content, the timeframe has been reduced by 50 percent, from 72 hours to 36 hours. Notably, government takedown orders must now be complied with within three hours, reduced from the earlier 36-hour window, and orders issued by the Grievance Appellate Committee must be acted upon within two hours, instead of 24 hours.
The drastic compression of takedown timelines is deeply problematic and operationally unrealistic. Content moderation at scale is an inherently complex process that depends on layered review systems, in which automated detection tools function only as an initial filter. These systems are known to be error-prone, struggle with context-rich content, and perform less effectively with non-Latin languages. Meaningful human review is often essential to assess nuance, satire, political speech, journalistic reporting, and lawful expression that may superficially resemble prohibited content. By forcing intermediaries to act within hours, and in some cases within two or three hours, the rules in practice create overwhelming pressure to err on the side of immediate removal rather than careful evaluation. This effectively incentivises pre-emptive takedowns, automated over-blocking, and defensive censorship, turning private platforms into risk-averse gatekeepers of speech. The result is a systemic tilt toward over-enforcement, where lawful content is likely to be suppressed simply because there is no time to verify its legality. Such a framework erodes procedural safeguards, chills legitimate expression, and undermines the constitutional balance the Supreme Court sought to protect in Shreya Singhal v. Union of India (2015), which underscored the need for careful scrutiny and a balanced approach to intermediary liability.
Prohibited Content
Rule 3(3)(a) of the IT Rules imposes obligations on covered intermediaries to curb the generation of certain categories of synthetically generated information. Of the categories listed, those covered in the chapeau of Rule 3(3)(a)(i) and in sub-clauses (I) and (III) relate to content that is sufficiently serious, and framed in sufficiently narrow terms, to justify regulatory intervention.
The remaining two categories under Rule 3(3)(a)(i), however, raise substantial concerns because they are framed in terms that are both overly broad and legally indeterminate, while also capturing conduct that may be trivial or socially harmless. The prohibition on generating or modifying any “false document or false electronic record” is particularly problematic because the term “false” (Rule 3(3)(a)(i)(II)) is undefined and could extend far beyond fraud or forgery to include satire, parody, fictional storytelling, and artistic works. Similarly, the restriction on content that “falsely depicts or portrays a natural person or real-world event” in a manner “likely to deceive” (Rule 3(3)(a)(i)(IV)) suffers from ambiguity. It does not clarify whose perspective determines deception, what degree of likelihood is required, or whether intent to mislead must be shown.
Labelling
Rule 3(3)(a)(ii) requires covered intermediaries to ensure that other forms of synthetically generated information, which are not prohibited, are appropriately labelled. In an earlier blog examining the draft version of this provision, I argued that it amounted to an unreasonable compelled-speech or “must-carry” obligation. While the objective behind the requirement was legitimate, the specific labelling measures proposed in the draft Amendment were overbroad and unduly burdensome.
These concerns have now been addressed. First, the definition of “synthetically generated information” in the draft was excessively broad. This has been rectified in the final version by excluding content that involves merely assistive functions under a proviso to the definition (Rule 2(1)(wa) proviso). Second, I had argued that the earlier labelling requirement was ineffective because it relied solely on a visible watermark, which is relatively easy to remove. The draft had also arbitrarily required that such a watermark cover 10 percent of the content area, a condition that bore no meaningful connection to the stated objective of transparency. The final rules remove this rigid 10 percent requirement and instead mandate both visible and non-visible labelling, while leaving the specific technical means to the discretion of intermediaries. Although these revisions largely address the constitutional concerns, material technical and policy challenges remain.
The definition makes it difficult to efficiently implement the labelling requirement. ‘Synthetically generated information’ is defined as “information which is artificially or algorithmically created, generated, modified or altered…in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event.”
The difficulty with this definition, particularly in the context of labelling obligations, is that it requires AI systems processing user requests to make inherently subjective judgments about when content appears real or is likely to be perceived as indistinguishable from reality. Most prominent contemporary AI systems operate through statistical pattern recognition, using neural networks composed of weighted computational nodes. They do not “understand” context, intent, or perception in the human sense. As a result, expecting such systems to reliably assess whether content meets this perception-based legal standard is technically unrealistic and could lead to both over-labelling and under-identification.
Further, the objective of labelling in this context is to distinguish between AI-generated and non-AI-generated content. This goal could be achieved more effectively by removing these subjective qualifiers and defining synthetically generated information more broadly to include any content created by an AI system. Terms such as “artificially” and “algorithmically,” as currently used in the rule, are themselves ambiguous and should be replaced with a clearer definition of AI systems, which, though still challenging, could be drawn from comparative regulatory frameworks and scientific research.
This approach would be more consistent with the labelling framework under the European Union AI Act, where AI-generated content must be labelled unless an exception applies, such as the carve-out for assistive functions (Article 50(2)). The Act defines “deepfake” using similarly subjective qualifiers to those found in the Indian definition of synthetically generated information and imposes targeted transparency obligations in relation to such content (Article 3(60)). However, the EU approach differs from the Indian framework in an important respect.
The labelling obligations under Rule 3(3)(a)(ii) in India appear to be placed primarily on AI providers (EU AI Act terminology), whereas the deepfake-related transparency obligations in the EU framework are directed at AI deployers. Deployers are natural or legal persons who use an AI system under their authority and are responsible for ensuring its proper use, while providers are the entities that develop an AI system and place it on the market (Article 3(3)-(4)).
To take the example of ChatGPT, the provider would be OpenAI, which develops and makes the system available, while the deployer would be any natural or legal person using it. When a business uses ChatGPT to create a deepfake, it is in a position to assess whether the subjective standard in the deepfake definition is met and can accordingly undertake the required transparency measures. By contrast, when a user merely asks ChatGPT a general question, the system cannot reliably determine whether its output meets that standard, given the technical limits of how such systems process and generate information.
Ultra Vires
The labelling requirement under Rule 3(3)(a)(ii) applies to intermediaries that “enable, permit, or facilitate the creation, generation, modification, alteration, publication, transmission, sharing, or dissemination of synthetically generated information.” This language can be interpreted in two ways, both of which are doctrinally problematic.
The first interpretation is that the rule applies to intermediaries that themselves create, generate, modify, or alter synthetically generated information. However, these functions do not fall within the category of activities typically carried out by an intermediary as defined under the Information Technology Act, 2000 (“IT Act”). Intermediaries under the Act are conceived as entities performing an inherently passive and facilitative role, such as hosting, transmitting, or providing access to third-party content. Section 79(1) of the IT Act reinforces this distinction by limiting safe harbour protection to intermediaries acting in such a passive capacity, and Section 79(2)(b)(iii) expressly provides that an entity that selects or modifies information contained in a transmission cannot claim intermediary protection. If intermediaries are now expected to generate, modify, or technically intervene in AI outputs, they would move outside the statutory conception of an intermediary, and the rule would risk being ultra vires the parent statute.
The second possible interpretation is that the rule applies to intermediaries that merely “enable, permit, or facilitate” the generation or modification of synthetically generated information by others. This reading is equally problematic. Many intermediaries, such as internet service providers and cloud infrastructure providers, lack both meaningful visibility into and practical control over how their services are used, including by AI developers who may generate or manipulate such content.
Declaration
Rule 4(1A)(a) requires users to declare whether the information they upload on a Significant Social Media Intermediary (“SSMI”) platform constitutes synthetically generated information, and it obligates SSMIs to verify these declarations. I had argued earlier, in the context of the draft Amendment, that this provision amounts to an unconstitutional form of prior restraint. That conclusion continues to hold, as there has been no material change to this requirement in the final version of the rule.
This measure operates as a prior restraint because it conditions the publication of speech on prior verification by a private intermediary. The rule’s structure, particularly the requirement that compliance occur “prior to display, uploading, or publication,” indicates that content cannot be made publicly available until it has been assessed and appropriately labelled. In effect, speech is subjected to a screening mechanism before it can reach the public domain. Such a system suppresses expression at its inception, rather than addressing unlawful consequences after publication.
Prior restraint is considered especially suspect under constitutional free speech jurisprudence because it places the burden of pre-clearance on the speaker and creates a strong chilling effect. Speakers may self-censor rather than risk delay, rejection, or scrutiny, particularly when the standards for assessment are vague, and the verification process is opaque. By turning intermediaries into gatekeepers who must approve or condition speech before publication, the rule imposes a structural barrier to expression that is disproportionate to its stated objective. For these reasons, the measure falls within the understanding of prior restraint and is constitutionally vulnerable.
Connecting the dots
The 2026 Amendment responds to a genuine regulatory concern: the rapid proliferation of AI-generated content and the harms that may result from it. Yet the chosen response exposes a deeper structural flaw. Rather than developing a coherent, rights-sensitive framework tailored to the distinct risks and actors within the AI ecosystem, the government has attempted to retrofit AI governance into the existing architecture of intermediary liability under the Information Technology Act. That framework was conceived for passive content hosts, not for regulating the design, deployment, and outputs of complex AI systems. The result is a set of rules that strain statutory boundaries, impose technically unrealistic obligations, and generate significant constitutional risk.
At the same time, the government has been advocating a principle-based, “light-touch” model of AI governance and has repeatedly stated that no new binding AI law is currently required. The Amendment represents a clear departure from that stance. One explanation may lie in the political and media salience of deepfakes, which have drawn disproportionate regulatory focus. However, regulating AI primarily through the lens of deepfakes risks skewing policy priorities and leaves wider structural questions — including accountability across the AI value chain and the protection of fundamental rights — insufficiently addressed.
If India is to regulate AI in a manner that is both effective and constitutionally sound, it will require a dedicated and carefully designed framework that clearly allocates responsibility among different AI actors, is grounded in technical realities, and places fundamental rights at its core. Expanding intermediary liability to fill this regulatory gap is not a substitute for undertaking that task directly.