ARTICLE
18 February 2026

Regulation of AI-Generated/Deepfake Content and Synthetically Generated Information (SGI) in India: New Rules

Vaish Associates Advocates

Established in 1971, Vaish Associates, Advocates is one of the best-known full-service law firms in India. Since its inception, it has served a diverse clientele, including domestic and overseas corporations, multinational companies and individuals. The Firm presently operates from Delhi, Mumbai and Bengaluru.

Article by Vijay Pal Dalmia, Advocate, Supreme Court of India and Delhi High Court, Partner & Head of Intellectual Property Laws Division, Vaish Associates Advocates, India

The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 ("2026 Amendment Rules") mark a significant regulatory shift in India's digital governance framework. Notified on 10 February 2026 and effective from 20 February 2026, these amendments primarily address the regulation of synthetically generated information (SGI)—commonly referred to as AI-generated or deepfake content.

The amendments strengthen due diligence obligations for intermediaries, especially significant social media intermediaries (SSMIs), and introduce strict timelines for compliance.

The 2026 amendments aim to:

  • Regulate AI-generated/deepfake content.
  • Prevent misuse of synthetic media for fraud, impersonation, obscenity, misinformation, and criminal activity.
  • Mandate labelling and traceability of synthetic content.
  • Tighten intermediary compliance timelines.
  • Align references from IPC to Bharatiya Nyaya Sanhita, 2023.

This is India's first comprehensive regulatory framework specifically targeting synthetic digital manipulation at scale.

"Synthetically Generated Information"

A major addition is the formal definition of the following terms:

(a) Audio, Visual or Audio-Visual Information

Expanded to include any content created, generated, modified, or altered using computer resources.

(b) "Synthetically Generated Information" (SGI)

Defined as AI-created or algorithmically altered content that:

  • Appears real or authentic,
  • Depicts individuals or events,
  • Is likely to be perceived as indistinguishable from real-world events.

Exclusions

Routine editing, formatting, accessibility improvements, color correction, or legitimate document preparation are excluded—provided they do not materially distort the underlying content.

Implication: The law clearly separates deepfake manipulation from legitimate digital enhancement.

Expansion of "Information" to Include Synthetic Content

The amendment clarifies that any reference to "information" under unlawful activity provisions shall include synthetically generated information.

This ensures:

  • Deepfakes are treated on par with real content under IT Act liability provisions.
  • Intermediaries cannot argue regulatory gaps.

Mandatory User Awareness Requirements

Intermediaries must now:

  • Inform users every three months about the legal consequences of misuse.
  • Warn about penalties under:
    • Bharatiya Nyaya Sanhita, 2023
    • POCSO Act
    • Representation of the People Act
    • Indecent Representation of Women (Prohibition) Act
    • Immoral Traffic (Prevention) Act.

Users must be informed that violations may result in:

  • Immediate content removal.
  • Account suspension.
  • Identity disclosure to victims.
  • Mandatory reporting to authorities.

Due Diligence Obligations for Synthetic Content Platforms

Platforms that enable AI content creation must:

(A) Prevent Illegal SGI

Deploy automated tools and reasonable technical measures to prevent the creation of synthetic content that:

  • Contains child sexual abuse material.
  • Is obscene, pornographic, or invasive of privacy.
  • Creates false documents or electronic records.
  • Aids in explosives or arms procurement.
  • Falsely depicts individuals or events to deceive.

(B) Mandatory Labeling

All lawful SGI must:

  • Be prominently labelled.
  • Contain visible disclosure (for visual content).
  • Include prefixed disclosure (for audio).
  • Embed permanent metadata or provenance markers.
  • Include a unique identifier tied to the creating platform.

Platforms cannot allow removal or suppression of such labels. This introduces a technical traceability regime for AI content.
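As an illustration only (the Rules prescribe outcomes, not implementations), a minimal provenance record of the kind contemplated, carrying a disclosure label, a platform-tied unique identifier, and a tamper-evident hash binding the record to the content, could be sketched in Python. All function and field names here are hypothetical:

```python
import hashlib
import uuid
from datetime import datetime, timezone

def make_provenance_record(content: bytes, platform_id: str) -> dict:
    """Build a hypothetical provenance record for synthetic content.

    Carries a disclosure label, a unique identifier tied to the
    creating platform, and a SHA-256 hash binding the record to the
    exact content bytes, so later alteration is detectable.
    """
    return {
        "label": "synthetically-generated",
        "platform_id": platform_id,
        "record_id": str(uuid.uuid4()),
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return True if the content bytes still match the embedded hash."""
    return hashlib.sha256(content).hexdigest() == record["content_sha256"]
```

A real deployment would embed such a record in the file's metadata (for example via C2PA-style manifests) and cryptographically sign it; the sketch only shows the binding between label, platform identifier and content that the traceability regime requires.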

Obligations of Significant Social Media Intermediaries (SSMIs)

Before publishing user content, SSMIs must:

  1. Require users to declare whether content is synthetic.
  2. Deploy verification tools to validate declarations.
  3. Ensure labelling if the content is confirmed synthetic.

If the platform knowingly permits unlabeled synthetic content, it will be deemed to have failed due diligence. This shifts liability exposure significantly upward.
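The logic of the three-step check above can be sketched as a simple decision routine. This is a hypothetical illustration of the workflow, not a compliance implementation; the function and outcome names are assumptions:

```python
def classify_upload(user_declared_synthetic: bool,
                    detector_flags_synthetic: bool) -> str:
    """Hypothetical SSMI pre-publication check.

    Mirrors the amended due-diligence flow: the user declares whether
    the content is synthetic, a verification tool checks that
    declaration, and content confirmed as synthetic may be published
    only with the mandated label.
    """
    if user_declared_synthetic or detector_flags_synthetic:
        # Declared or detected synthetic: publish only with an SGI label.
        return "publish-with-sgi-label"
    # Neither declared nor detected: publish as ordinary content.
    return "publish-unlabelled"
```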

Tightened Compliance Timelines

The amendment reduces key timelines:

  • Content removal after a government order: reduced from 36 hours to 3 hours.
  • Grievance resolution: reduced from 15 days to 7 days.
  • Certain urgent removals: reduced from 24 hours to 2 hours.

Safe Harbour Clarification

The amendment clarifies that:

  • Removing or disabling access using automated tools,
  • Acting upon awareness of violations,

will not amount to breach of safe harbour protections under Section 79 of the IT Act. This legally protects proactive moderation.

Replacement of IPC Reference

All references to the Indian Penal Code are replaced with the Bharatiya Nyaya Sanhita, 2023.

This harmonizes the IT Rules with the new criminal code framework.

Legal and Regulatory Impact

(A) On Social Media Platforms

  • Mandatory AI detection systems.
  • High compliance burden.
  • Increased liability risk.
  • Faster takedown obligations.

(B) On AI Tools and Generative Platforms

  • Must embed watermarking or metadata.
  • Cannot allow deepfake misuse.
  • Must prevent illegal outputs at the generation level.

(C) On Users

  • Criminal exposure for malicious deepfakes.
  • Reduced anonymity if violations occur.
  • Increased traceability.

Policy Significance

The 2026 Amendment Rules:

  • Represent India's most stringent deepfake regulation.
  • Introduce technical compliance standards for AI.
  • Combine content moderation with metadata traceability.
  • Shift from reactive moderation to preventive architecture.

India now formally regulates not just harmful content, but the mechanism of creation itself.

The IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 fundamentally reshape digital compliance in India.

By:

  • Defining synthetic information,
  • Mandating labelling and metadata,
  • Tightening removal timelines,
  • Expanding intermediary liability,
  • And aligning with the Bharatiya Nyaya Sanhita,

the government has moved decisively toward regulating AI-generated content ecosystems.

For platforms operating in India, compliance will require:

  • AI moderation infrastructure,
  • Provenance tagging systems,
  • Real-time content risk detection,
  • Stronger legal and compliance governance.

By
Vijay Pal Dalmia, Advocate
Supreme Court of India & Delhi High Court
Email id: vpdalmia@vaishlaw.com
Mobile No.: +91 9810081079
LinkedIn: https://www.linkedin.com/in/vpdalmia/
Facebook: https://www.facebook.com/vpdalmia
X (Twitter): @vpdalmia

© 2025, Vaish Associates Advocates. All rights reserved.
1st & 11th Floors, Mohan Dev Building, 13 Tolstoy Marg, New Delhi-110001 (India).

The content of this article is intended to provide a general guide to the subject matter. Specialist professional advice should be sought about your specific circumstances. The views expressed in this article are solely of the authors of this article.
