
Unpacking the Amendments to the IT Rules – SpicyIP


Recently, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. These amendments are intended to address issues relating to ‘synthetically generated information’ and deepfakes. In this post, Vikram Raj Nanda breaks down the amendments, discusses what has changed from the earlier draft, and considers their possible impacts. Vikram Raj Nanda is a third-year student at the National Law School of India University, Bengaluru, with a keen interest in IP law, Competition Law, and Arbitration. His previous posts can be accessed here.

Regulating Artificially Generated Media: Unpacking the Amendments to the IT Rules

By Vikram Raj Nanda

Recently, the Ministry of Electronics and Information Technology (MeitY) notified amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, introducing obligations relating to ‘synthetically generated information.’ The draft amendments were released last year and have been discussed previously here. The notified amendments contain significant changes over that draft. In this post, I attempt to map out these changes step by step and discuss the consequences that may follow from them.

I. Defining ‘Synthetically Generated Information’

The amendment in its current form has significantly expanded the definition of what constitutes ‘synthetically generated information’ [Rule 2(1)(wa)]. The principal clause now reads:

‘synthetically generated information’ means audio, visual or audio-visual information which is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that such information appears to be real, authentic or true and depicts or portrays any individual or event in a manner that is, or is likely to be perceived as indistinguishable from a natural person or real-world event;
[Screenshot of Rule 2(1)(wa) defining ‘synthetically generated information’]

There are two main changes here. First, unlike the draft rules, the amended provision specifically covers audio, visual, or audio-visual information, which makes for a relatively more precise definition. Further, as in the draft guidelines, the notified provision covers algorithmically or artificially created content that mirrors or replicates a natural person or a real-world event. This definition may not be limited to deepfakes and is far broader, allowing merely realistic depictions of ‘any person’ or event to fall within its ambit, without any standard of harm being stipulated. But when read in conjunction with Rule 3(3)(i)(IV) (the due diligence obligation on intermediaries against misrepresentation of a natural person or an event in a manner likely to cause deception), it becomes apparent that such depictions may amount to contraventions only when they are ‘deceptive’. I discuss this in more detail later.

As noted previously (see here), this overbroad definition of synthetic content could have swept in benign and routine digital practices. Significantly, in an attempt to limit this, the new rule contains a proviso carving such routine practices out of the ambit of ‘synthetic generation’ through three key exceptions: (i) routine, good-faith editing that does not materially alter or misrepresent the content; (ii) routine creation of documents that does not result in the creation of any ‘false’ documents; or (iii) use of a ‘computer resource’ to enhance existing content. This marks a positive step, as the earlier draft was extraordinarily overbroad in its ambit.

II. Obligations on Intermediaries

The notified rules impose stringent obligations on intermediaries, requiring a much more proactive approach on their part. These obligations are set out in extensive detail across Rules 3(1)(c), (ca) and (cb), and in the due diligence clause in Rule 3(3).

(i) Rule 3(1)(c): Periodic User Notification 

Rule 3(1)(c) requires intermediaries, at least once every three months, to inform users through their rules, privacy policies, or user agreements that:

an intermediary shall periodically inform its users, at least once every three months, in a simple and effective manner through its rules and regulations, privacy policy, user agreement, or any other appropriate means, in English or any language specified in the Eighth Schedule to the Constitution, that—
(i) in case of non-compliance with such rules and regulations, privacy policy or user agreement, by whatever name called, it has the right to terminate or suspend the access or usage rights of the users to the computer resource immediately, or to remove or disable access to non-compliant information, or both, as the case may be;
(ii) where such non-compliance relates to the creation, generation, modification, alteration, hosting, displaying, uploading, publishing, transmitting, storing, updating, sharing or otherwise disseminating of information in contravention of any law for the time being in force, the user who is responsible for such non-compliance may be liable to penalty or punishment under the provisions of the Act or any other applicable law; and
[Screenshot of Rule 3(1)(c)]

This provision is framed more as a user-facing disclosure obligation. What is troubling, however, is that it allows intermediaries to normalise account termination and content removal as routine consequences of non-compliance with their user agreements or privacy policies. This may become problematic when read in light of the other rules.

(ii) Rule 3(1)(ca): Obligations for Synthetic Content Generation Platforms

Rule 3(1)(ca) applies specifically to intermediaries offering a ‘computer resource’ that enables or facilitates the creation of synthetically generated information. It requires such intermediaries to additionally inform users that directing or using such tools in contravention of Rule 3(3) (which lists out due diligence obligations imposed on intermediaries) may attract penalties under a range of statutes, including the Bharatiya Nyaya Sanhita, POCSO, the Representation of the People Act, and other allied legislation [Rule 3(1)(ca)(i)]. 

The rule specifies the consequences of such contraventions [Rule 3(1)(ca)(ii)]. These include:

any such contravention of sub-clause (i) of clause (a) of sub-rule (3) may lead to— 
(I) the immediate disabling of access to or removal of such information;
(II) suspension or termination of the user account of the user who violates this sub-rule without vitiating the evidence;
(III) in accordance with applicable law, identification of such user and disclosure of the identity of the violating user to the complainant, where such complainant is a victim of, or an individual acting on behalf of a victim of, such contravention; and
(IV) where such violation relates to the commission of an offence under any law for the time being in force, including the Bharatiya Nagarik Suraksha Sanhita, 2023 (46 of 2023) or the Protection of Children from Sexual Offences Act, 2012 (32 of 2012) which requires such offence to be mandatorily reported, reporting of such offence to the appropriate authority in accordance with the provisions of the applicable law;
[Screenshot of Rule 3(1)(ca)(ii)]

Further, Rule 3(1)(cb) mandates that where an intermediary becomes aware – either on its own or upon receipt of a grievance – of violations relating to synthetically generated information, it must take ‘expeditious and appropriate action’, including the measures listed in Rule 3(1)(ca)(ii).

There are a few concerns that arise here. The first, and perhaps the most significant, is the requirement to disclose the accused user’s identity to the complainant [Rule 3(1)(ca)(ii)(III)]. Notably, there is no judicial or even executive oversight built into this mechanism. The provision may therefore be misused against individuals, subjecting them to harassment, intimidation or doxxing and thereby compromising their privacy.

Further, the provision continues to rely on the term ‘computer resource’, applying only to intermediaries that provide such a resource. This terminology may be conceptually imprecise in the context of generative AI systems. As previously discussed on the blog (see here), the difficulty here is one of autonomy – AI systems that generate synthetic information operate far more autonomously than traditional computing tools. The IT Act appears to envisage ‘computer resources’ as performing structured, deterministic functions, where outputs are predictable because they are pre-programmed. For instance, a traditional image-editing tool functions strictly within coded parameters and cannot create content beyond its design. By contrast, AI systems generate variable outputs shaped by probabilistic modelling and training data and may evolve post-deployment in ways not directly controlled by deployers. It is therefore difficult to comfortably subsume such adaptive processes within the traditional conception of a ‘computer resource’.
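
To make the contrast concrete, here is a deliberately simplified sketch of my own (it is not drawn from the Rules or from any actual tool): a conventional editing function is deterministic and returns the same output for the same input, whereas a generative system samples from a probability distribution, so identical prompts can produce different outputs.

```python
import random

def apply_sepia(pixel):
    # A traditional 'computer resource': a fixed, pre-programmed transformation.
    # The same input always yields the same output.
    r, g, b = pixel
    return (min(255, int(0.393 * r + 0.769 * g + 0.189 * b)),
            min(255, int(0.349 * r + 0.686 * g + 0.168 * b)),
            min(255, int(0.272 * r + 0.534 * g + 0.131 * b)))

def generate_text(prompt, vocabulary):
    # A toy stand-in for a generative AI system: the output is *sampled* from a
    # distribution, so the same prompt can yield different results on each run
    # (the weights here are invented purely for illustration).
    weights = [1.0 / (i + 1) for i in range(len(vocabulary))]
    return prompt + ": " + " ".join(random.choices(vocabulary, weights=weights, k=5))

words = ["the", "crowd", "gathered", "quietly", "today"]
print(apply_sepia((120, 80, 200)))          # identical on every run
print(generate_text("describe the rally", words))
print(generate_text("describe the rally", words))  # likely differs from the previous line
```

The point is not the code itself but the structural difference it illustrates: the first function’s behaviour is fully fixed at design time, while the second’s outputs depend on probabilistic sampling and, in real systems, on training data that the deployer does not fully control.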

III. Truncated Timelines and Due Diligence Obligations – Too Stringent? 

What raises more serious concerns is the severely truncated timeline imposed upon intermediaries to remove prohibited content. Government orders mandating takedown of content must now be complied with within a drastically reduced window, down from 36 hours to merely 3 hours. User-instituted complaints must now be resolved within 7 days instead of the earlier 15-day period. Further, content covered by Rule 3(2)(b) of the IT Rules (pertaining to non-consensual intimate images, sexual content, and electronic impersonation, including morphed images) must be taken down within a mere 2 hours. Consequently, if intermediaries are required to remove content within three hours or less, this presupposes the existence of pre-emptive or proactive content-moderation mechanisms.

It is already common practice for major platforms to use AI-powered moderation tools to screen and regulate content. That is itself concerning, as these tools are notoriously error-prone and struggle with context-sensitive determinations (see here). Content that is satirical, parodic, critical, or otherwise context-dependent may be incorrectly flagged. The problem is further exacerbated in multilingual jurisdictions such as India, where automated systems often lack the linguistic and cultural nuance to properly understand the underlying context. Given the compressed timelines, intermediaries are structurally incentivised to over-screen or over-remove content to avoid regulatory exposure, as the illustration below suggests. The scale, cost, and technical sophistication required to comply with such obligations may disproportionately burden smaller platforms and act as an entry barrier in this field.
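
To illustrate the over-removal incentive, consider a minimal, purely hypothetical sketch of threshold-based moderation (the items, scores and threshold values are invented; no real platform or classifier is being described):

```python
def decide(items, threshold):
    # Remove anything whose machine-assigned risk score clears the threshold.
    return [(text, "REMOVE" if score >= threshold else "KEEP")
            for text, score in items]

# Hypothetical content items with made-up classifier scores.
items = [
    ("Deepfake impersonating a public official", 0.92),              # genuine violation
    ("Clearly labelled satirical edit of the same official", 0.55),  # lawful but 'risky-looking'
    ("News photo with routine colour correction", 0.20),             # benign
]

# With time for human review, a cautious threshold catches only the violation...
print(decide(items, threshold=0.80))
# ...but a 2-3 hour compliance window pushes platforms to lower the threshold,
# sweeping in the lawful satire along with the genuine deepfake.
print(decide(items, threshold=0.50))
```

Lowering the threshold is the rational response to tight deadlines and liability exposure, and it is precisely what produces over-removal of context-dependent speech.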

The introduction of Rule 3(3) further significantly expands due diligence obligations imposed upon intermediaries. The rule operates as a tiered framework. Certain categories of content must be proactively prevented/prohibited through ‘appropriate and technical measures’ (with very little guidance on what this constitutes). 

These include content that:  

(I) contains child sexual exploitative and abuse material, non-consensual intimate imagery content, or is obscene, pornographic, paedophilic, invasive of another person’s privacy, including bodily privacy, vulgar, indecent or sexually explicit; or
(II) results in the creation, generation, modification or alteration of any false document or false electronic record; or
(III) relates to the preparation, development or procurement of explosive material, arms or ammunition; or
(IV) falsely depicts or portrays a natural person or real-world event by misrepresenting, in a manner that is likely to deceive, such person’s identity, voice, conduct, action, statement, or such event as having occurred, with or without the involvement of natural person;
[Screenshot of Rule 3(3)(i)]

On the other hand, content not falling within these categories is not prohibited but must be labelled in a clear and perceivable manner, with the label embedded as permanent metadata, including a unique identifier. This is a significant improvement over the draft guidelines, which mandated a watermark covering 10% of the content area, a requirement that bore no real nexus to the stated objective (see generally here).
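
The Rules do not prescribe a particular technical format, but as a rough sketch of what a label ‘embedded with permanent metadata, including a unique identifier’ might look like in practice, the snippet below writes a text field and a UUID into a PNG’s metadata using Pillow (the field names are my own assumptions, not terms drawn from the amendment):

```python
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_synthetic_png(src_path, dst_path):
    # Embed a synthetic-content label and a unique identifier as PNG metadata.
    # The field names ('SyntheticallyGenerated', 'LabelID') are illustrative
    # assumptions, not prescribed by the amended Rules.
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("SyntheticallyGenerated", "true")
    meta.add_text("LabelID", str(uuid.uuid4()))  # unique identifier
    img.save(dst_path, pnginfo=meta)

# Usage (hypothetical file names):
# label_synthetic_png("generated.png", "generated_labelled.png")
```

Actual deployments would presumably use more robust provenance mechanisms, but the sketch captures the basic idea of a machine-readable label travelling with the file.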

Nonetheless, several concerns arise here. First, the scope of prohibited content is framed in ambiguous terms. The reference to ‘false’ documents lacks definitional clarity. What constitutes falsity? At what threshold does an alteration become ‘false’? From an intellectual property perspective, parody, transformative use, and even artistic reinterpretation frequently rely on alteration. The absence of a clearly articulated falsity standard risks suppressing protected expression. 

Second, to circle back to the definition of synthetic content discussed above: content is defined as synthetic on a realism-based standard under Rule 2(1)(wa). It becomes prohibited under Rule 3(3) when it misrepresents, ‘in a manner that is likely to deceive’, a person’s identity, voice, conduct, action or statement, or an event as having occurred. The difficulty lies in the low and subjective threshold of the stipulated harm. On a plain reading, a realistic representation of a person or event, combined with a mere potential for deception, may constitute a contravention. However, realism is not coterminous with deception. A realistic depiction may be artistic, satirical, or part of other fair uses. Even in the domain of celebrity deepfakes and personality rights, courts usually require some level of harm to be demonstrated through such depictions (see earlier posts here and here). Liability does not arise merely because a representation resembles or evokes a real person or event. Hence, it remains unclear why such a low threshold has been set. Further, as mentioned, intermediaries may end up relying on AI tools to screen such content, and these systems are not sufficiently sophisticated to reliably assess such perception-based legal standards, which may ultimately result in over-screening of content.

IV. Continuing the Safe Harbour Conundrum?

Further, the very imposition of such extensive obligations can be questioned. As argued earlier (see here), the IT Act follows a conduit model, i.e., only entities that ‘receive, store or transmit’ records are classified as ‘intermediaries’. Specifically, Section 79(2)(b) expressly conditions safe harbour eligibility on the intermediary not initiating the transmission, not selecting its receivers, and not selecting or modifying the information transmitted. If intermediaries are now required to moderate and screen content so extensively, particularly through ‘appropriate and technical measures’, they would arguably cease to be intermediaries at all, since they must exercise editorial judgment and acquire the capability to monitor the content shared on their platforms. Consequently, Rule 2(1B), which provides that the removal of synthetically generated information as part of reasonable efforts or in response to grievances shall not constitute a breach of safe harbour conditions, sits uncomfortably with the broader statutory scheme, which predicates immunity on a more passive role for intermediaries who are expected to lack knowledge of the legality of the content shared via their platforms. Section 79(3)(b), which expressly withdraws safe harbour protection where an intermediary fails to expeditiously remove or disable access to material after receiving actual knowledge that it is being used to commit an unlawful act, only makes the contradiction more apparent.

In such a scenario, can rules such as Rule 2(1B) contradict the parent statute and give safe harbour protection when the parent statute contemplates otherwise? Ultimately, the resolution of this tension may depend on judicial interpretation if the provision is challenged.

V. SSMI Obligations

Lastly, the amendment largely reproduces the obligations imposed on Significant Social Media Intermediaries. Under Rule 4(1A), before any content is displayed, uploaded, or published, SSMIs must require users to declare whether the content is synthetically generated. They must additionally deploy ‘appropriate’ technical measures to verify the accuracy of such declarations. Where content is confirmed to be synthetically generated, the intermediary must ensure that it is clearly and prominently labelled as such. The rule further clarifies that SSMIs are responsible for taking reasonable and proportionate technical measures to prevent publication of synthetic content without declaration or labelling, and failure to comply – particularly where the intermediary knowingly permits or fails to act upon such content – may result in contravention of the rules. 

Further, continuing the push for more proactive regulation by intermediaries, the amendment to Rule 4(4) of the IT Rules replaces the softer formulation ‘endeavour to deploy technology-based measures’ with a mandatory obligation to ‘deploy appropriate technical measures’ to screen content on their platforms. This shift converts what was previously framed as a best-efforts standard into a binding compliance requirement.

In sum, the amended framework represents a decisive regulatory turn in the governance of synthetic content and intermediary responsibility. Whether it strikes an appropriate balance between addressing emerging harms and preserving statutory coherence and expressive freedoms remains to be seen. 


