1. Introduction
1.1. Statutory Framework and Regulatory Context
The Information Technology Act, 2000 (the "IT Act") empowers the Central Government under section 87 to frame rules governing intermediaries, including the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (the "Intermediary Rules"). The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 (the "2026 Amendment Rules") amend this framework. Section 79 of the IT Act provides safe harbour to intermediaries subject to compliance with due diligence obligations.
The 2026 Amendment Rules respond to the rapid rise of AI-driven synthetic and deepfake audio-visual content, including impersonation-based media. With the increasing use of such technologies, risks such as child sexual exploitation, non-consensual intimate imagery, creation of false documents or electronic records, explosives-related content, and deceptive misrepresentation of a person's identity or real-world events have become more pronounced. The amendments seek to address these emerging harms by strengthening intermediary obligations in relation to such content.
2. The 2026 Amendment Rules – Key Changes
2.1. Scope of the 2026 Amendment Rules
The 2026 Amendment Rules were notified on 10 February 2026 and came into force on 20 February 2026. These amendments modify the Intermediary Rules, including rule 2 (definitions), rule 3 (due diligence by intermediaries), rule 4 (additional due diligence for Significant Social Media Intermediaries or "SSMIs") and related procedural provisions. The framework applies to all "intermediaries" under the Intermediary Rules. However, certain obligations are specifically triggered where an intermediary offers a computer resource that enables or facilitates synthetically generated information, i.e., information or content generated by use of AI tools. Additional declaration and verification obligations apply specifically to SSMIs.
2.2. Definition and Treatment of "Synthetically Generated Information"
a) The 2026 Amendment Rules insert two key definitions in rule 2(1):
"Audio, visual or audio-visual information" includes any audio, image, photograph, graphic, video, moving visual recording, sound recording or similar content, whether or not accompanied by audio, and whether created, generated, modified or altered through any computer resource.
"Synthetically generated information" refers to such audio, visual or audio-visual information that is artificially or algorithmically created, generated, modified or altered using a computer resource, in a manner that appears real, authentic or true and depicts individuals or events in a manner that is, or is likely to be perceived as, indistinguishable from a natural person or real-world event.
b) The definition of "synthetically generated information" expressly excludes:
- Routine or good-faith editing, formatting, enhancement, technical correction, transcription or compression that does not materially alter or misrepresent the underlying information;
- Good-faith creation and formatting of documents, presentations, PDFs, educational or research materials that do not result in false documents or electronic records; and
- Use of computer resources solely to improve accessibility, clarity, translation, quality, searchability or discoverability without materially altering the underlying information.
Rule 2(1A) clarifies that references to "information" used to commit an unlawful act under rule 3(1)(b) and (d) and rule 4(2) and (4) shall be construed to include synthetically generated information.
Hence, the amendments bring AI-driven synthetically generated content squarely within the ambit of the Intermediary Rules.
2.3. New Compliance Obligations
a) Technical Measures
Under rule 3(3)(a)(i), intermediaries offering computer resources that enable synthetically generated information must deploy reasonable and appropriate technical measures, including automated tools or suitable mechanisms, to prevent users from generating or disseminating synthetically generated information that violates any law. Specified categories include:
- Paedophilic, child sexual exploitative and abuse material;
- Non-consensual intimate imagery or content violating bodily privacy;
- Obscene, pornographic or sexually explicit content;
- False documents or false electronic records;
- Content relating to preparation, development or procurement of explosives, arms or ammunition; and
- Content falsely depicting or portraying a natural person or real-world event in a manner likely to deceive.
b) Labelling and Metadata
Under rule 3(3)(a)(ii), synthetically generated information not falling within the prohibited categories must be prominently labelled. The label must be prominently visible (or prefixed in audio format) and adequately perceivable, indicating that the content is synthetically generated. Such information must also be embedded with permanent metadata or other appropriate technical provenance mechanisms, including a unique identifier, to the extent technically feasible, identifying the computer resource of the intermediary used to create or alter the content. Intermediaries must not enable modification or removal of such label or metadata.
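For compliance teams, the labelling and provenance obligation can be pictured as attaching a tamper-evident record to each item of synthetic content. The sketch below is purely illustrative and uses only hypothetical names (ProvenanceRecord, make_identifier); the Rules do not prescribe any particular format, and real deployments would more likely adopt an industry provenance standard such as C2PA content credentials.

```python
# Illustrative sketch only: a minimal provenance record of the kind rule
# 3(3)(a)(ii) contemplates. All names here are hypothetical assumptions,
# not terms from the Rules or any real library.
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ProvenanceRecord:
    tool_id: str        # identifies the intermediary's computer resource
    content_hash: str   # ties the record to the exact bytes of the content
    label: str          # the user-facing disclosure text

def make_identifier(content: bytes, tool_id: str) -> ProvenanceRecord:
    """Build a content-bound identifier for synthetically generated media."""
    digest = hashlib.sha256(content).hexdigest()
    return ProvenanceRecord(
        tool_id=tool_id,
        content_hash=digest,
        label="This content is synthetically generated.",
    )

record = make_identifier(b"<video bytes>", tool_id="example-ai-tool-v1")
metadata_blob = json.dumps(asdict(record))  # embedded alongside the file
```

Because the record hashes the content itself, any later alteration of the file no longer matches the stored identifier, which is one way of making the metadata effectively "permanent" in the sense the Rules intend.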
c) User Declaration – Applicable to SSMIs
Under rule 4(1A), SSMIs must:
- Require users to declare whether uploaded information is synthetically generated;
- Deploy reasonable and appropriate technical measures to verify the accuracy of such declarations; and
- Ensure that confirmed synthetically generated content is clearly and prominently labelled.
These declaration and verification obligations apply specifically to SSMIs, not to all intermediaries.
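The rule 4(1A) sequence — declaration, automated verification, then labelling — can be sketched as a simple upload pipeline. This is a hypothetical illustration under stated assumptions: classify_synthetic() is a placeholder for whatever detection mechanism an SSMI actually deploys, and the Rules do not mandate any specific implementation.

```python
# Hypothetical sketch of an SSMI's rule 4(1A) workflow: collect the user's
# declaration, run an automated check on it, and label content that is
# declared or detected as synthetic. Not a real API.
def classify_synthetic(content: bytes) -> bool:
    """Placeholder for an automated detector (assumption for illustration)."""
    return b"synthetic" in content  # toy heuristic, not a real model

def process_upload(content: bytes, user_declared_synthetic: bool) -> dict:
    """Apply declaration + verification, returning the labelling decision."""
    detected = classify_synthetic(content)
    is_synthetic = user_declared_synthetic or detected
    return {
        "declared": user_declared_synthetic,
        "detected": detected,
        "label": "Synthetically generated" if is_synthetic else None,
    }
```

The point of the sketch is the ordering: the label is applied before publication, and a false user declaration does not defeat the obligation because the platform's own verification can still flag the content.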
d) Quarterly User Notices
Rule 3(1)(c) requires intermediaries to inform users, at least once every three months, that:
- Non-compliance with platform rules may lead to suspension, termination or removal of information;
- Users may be liable to penalty or punishment under the IT Act or other applicable laws; and
- Where violations relate to offences under laws such as the BNSS or the POCSO Act that require mandatory reporting, such offences must be reported in accordance with applicable law.
The 2026 Amendment Rules therefore require intermediaries to inform users about legal consequences and mandatory reporting requirements under existing law; they do not create an independent new reporting obligation beyond what is mandated under applicable statutes.
2.4. Takedown Timelines
The 2026 Amendment Rules compress several timelines:
- If an intermediary receives a court order or an authorised government direction to remove content, it must remove or disable access to that content within 3 hours.
- Complaints filed by users must generally be resolved within 7 days, instead of the earlier 15-day period.
- For certain categories of prohibited content under rule 3(1)(b), the intermediary must act much faster and resolve the complaint within 36 hours, except for specific categories that are treated separately under the Rules.
- Complaints involving intimate images or impersonation-based content, such as morphed or deepfake material, must be acted upon within 2 hours of being reported.
These timelines apply only to the specific categories of content and situations identified in the Rules. They do not apply uniformly to all types of content, and the applicable time limit depends on the nature of the complaint or the legal trigger involved.
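Because the applicable deadline turns on the legal trigger, compliance tooling typically maps each complaint category to its time limit. The following sketch illustrates that mapping; the trigger names are paraphrases of the categories described above, not statutory terms.

```python
# Illustrative deadline calculator for the compressed timelines described
# above. Category keys are informal paraphrases, assumptions for this sketch.
from datetime import datetime, timedelta

DEADLINES = {
    "court_or_government_order": timedelta(hours=3),
    "intimate_imagery_or_impersonation": timedelta(hours=2),
    "rule_3_1_b_prohibited_content": timedelta(hours=36),
    "general_user_complaint": timedelta(days=7),
}

def due_by(trigger: str, received_at: datetime) -> datetime:
    """Return the latest time by which the intermediary must act."""
    return received_at + DEADLINES[trigger]

received = datetime(2026, 3, 1, 9, 0)
print(due_by("court_or_government_order", received))  # 2026-03-01 12:00:00
```

In practice an intermediary's grievance workflow would also record the legal basis of each complaint, since choosing the wrong category means applying the wrong clock.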
2.5. Safe Harbour Clarification
Rule 2(1B) clarifies that removal of, or disabling of access to, information (including synthetically generated information), in compliance with rule 3 (including through deployment of reasonable and appropriate technical measures or automated tools), shall not amount to a violation of section 79(2)(a) or (b) of the IT Act. This clarification protects intermediaries' safe harbour where compliant technological moderation measures are deployed in accordance with the 2026 Amendment Rules.
3. The Impact
3.1. Operational Impact
Intermediaries offering AI or synthetic content tools must deploy reasonable technical systems to detect prohibited synthetically generated information, ensure mandatory labelling and metadata embedding, and comply with the compressed takedown timelines, including the reduced 3-hour period for court orders or government intimations by police officers (of DIG rank or above) under rule 3(1)(d), while preserving removed content as required under rule 3. Failure to meet the rule 3(3) obligations may signal a lapse in due diligence, potentially forfeiting safe harbour under section 79 of the IT Act. Intermediaries must also inform users of penalties under the IT Act and other statutes for unlawful synthetically generated information, along with mandatory reporting duties.
SSMIs bear additional requirements under rule 4(1A), such as user declarations, technical verification and pre-publication labelling; non-compliance similarly puts safe harbour protection at risk.
3.2. Overall Regulatory Shift
Taken together, the 2026 Amendment Rules establish a detailed regulatory framework for synthetically generated audio-visual content within the existing intermediary liability regime. By introducing defined categories of synthetic content, mandating reasonable and appropriate technical measures, requiring prominent labelling and provenance mechanisms, compressing compliance timelines and clarifying safe harbour protection, the 2026 Amendment Rules significantly enhance regulatory expectations for intermediaries operating in India. Timely and demonstrable compliance will be central to retention of statutory safe harbour while navigating the evolving regulatory framework governing AI-generated and synthetic media.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.