I. Introduction
The digital ecosystem has changed rapidly with the spread of generative AI technologies, which can produce strikingly realistic text, images, audio, and video. Even as these developments facilitate creative expression, innovation, and economic opportunity, they have introduced new forms of harm, such as deepfakes, impersonation, misinformation, non-consensual intimate images, and other malicious applications that can undermine individual rights, public trust, democratic processes, and social cohesion.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 ("IT Rules 2026" or "Amendment Rules") were announced by India's Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, in response to this changing threat landscape. Enforcement of these regulations will begin on February 20, 2026.
These amendments establish India's first comprehensive regulatory framework specifically targeting synthetic content, setting out precise responsibilities for digital intermediaries in an AI-driven environment. Building on the existing intermediary due diligence requirements under the Information Technology Act, 2000 and the 2021 Rules, they aim to balance innovation, legal accountability, and user safety.
II. Defining "Synthetically Generated Information" (SGI)
One of the 2026 amendment's main innovations is its formal definition and recognition of synthetically generated information (SGI). In general terms, SGI is described as:
"Information that appears to be reasonably authentic or true but is artificially or algorithmically generated, modified, or altered using a computer resource."
AI-generated photos, deepfake videos, artificially produced audio with or without images, and other content indistinguishable from authentic material fall under this category. To safeguard legitimate digital innovation, the definition specifically excludes educational or illustrative content and benign modifications (such as colour correction and accessibility enhancements).
By integrating SGI into the existing framework of "information" under Rule 21(A), the amendment brings synthetic content directly within intermediaries' compliance requirements and safe-harbour obligations.
III. Strengthened Platform Obligations and Due Diligence
A. Requirements for Labelling and Metadata
Platforms that permit the production or distribution of SGI must now:
- Clearly label artificial content, with specified audio identification for audio content and conspicuous disclosures for visual content.
- At the time of creation, incorporate persistent metadata or unique identifiers that are unchangeable and resistant to tampering, even if users download or edit the material.
These requirements align with emerging global standards for AI content provenance and traceability, and mirror provisions in other jurisdictions, such as the European Union (EU) AI Act, which similarly emphasizes transparency and content labelling to maintain user trust.
Mandatory metadata serves two purposes: (a) it enables downstream platforms and law enforcement to trace the source and nature of SGI; and (b) it helps users quickly determine whether content is artificial.
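To illustrate the traceability idea described above, the sketch below shows one possible shape for a tamper-evident SGI provenance record. This is a hypothetical illustration only: the Rules do not prescribe any metadata format, and the field names, hashing scheme, and generator identifier here are all assumptions.

```python
import hashlib
from datetime import datetime, timezone

def make_sgi_metadata(content: bytes, generator_id: str) -> dict:
    """Build a provenance record for synthetically generated content.

    The SHA-256 digest ties the record to the exact bytes, so any
    downstream edit to the content breaks the match and is detectable.
    """
    return {
        "sgi": True,                      # explicit synthetic-content flag
        "sha256": hashlib.sha256(content).hexdigest(),
        "generator": generator_id,        # hypothetical tool identifier
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

def verify_sgi_metadata(content: bytes, record: dict) -> bool:
    """Check that a provenance record still matches the content bytes."""
    return record.get("sgi") is True and \
        record["sha256"] == hashlib.sha256(content).hexdigest()

original = b"frame data of an AI-generated clip"
record = make_sgi_metadata(original, "example-genai-tool/1.0")

print(verify_sgi_metadata(original, record))          # True: untampered
print(verify_sgi_metadata(original + b"x", record))   # False: content edited
```

A hash alone does not survive re-encoding or cropping, which is why production systems typically pair such records with robust watermarking; this sketch only shows the "detect tampering via mismatch" principle.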
B. SGI Verification and Declarations
Significant Social Media Intermediaries (SSMIs) are also required to put in place pre-publication procedures that call for:
- Self-declaration by the user regarding the artificial nature of the uploaded content.
- Where practicable, automated or other technical verification of such declarations.
- Classification and labelling according to the verification's results.
Failure to implement these procedures may jeopardize safe harbour protections under Section 79 of the IT Act, 2000, as it could be interpreted as a breach of due diligence. In contrast to earlier reactive takedown processes, this places part of the burden of identifying and labelling SGI on platforms before publication.
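The pre-publication sequence outlined above (declaration, technical check, classification) can be sketched as a simple decision function. The threshold value, label names, and the idea of a numeric detector score are illustrative assumptions, not anything the Rules specify.

```python
from dataclasses import dataclass

@dataclass
class Upload:
    user_declared_synthetic: bool
    detector_score: float  # 0.0-1.0 output of a hypothetical SGI classifier

def classify_upload(upload: Upload, threshold: float = 0.8) -> str:
    """Return the label a platform might apply before publication.

    The Rules leave the technical means open; the threshold and the
    'flagged_for_review' outcome here are illustrative assumptions.
    """
    if upload.user_declared_synthetic:
        return "labelled_synthetic"      # declaration accepted as-is
    if upload.detector_score >= threshold:
        return "flagged_for_review"      # no declaration, but detector disagrees
    return "published_unlabelled"

print(classify_upload(Upload(True, 0.1)))    # labelled_synthetic
print(classify_upload(Upload(False, 0.95)))  # flagged_for_review
print(classify_upload(Upload(False, 0.2)))   # published_unlabelled
```

Routing detector-contradicted uploads to human review, rather than auto-labelling them, is one design choice a platform might make to limit the misclassification risks discussed later in this article.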
C. Quarterly Transparency and User Awareness
Every three months, intermediaries are required to inform users about the nature of SGI, related rights and liabilities, enforcement procedures, and penalties. Although procedural, this requirement aims to increase user trust and foster digital literacy.
IV. Simplified Grievance and Takedown Procedures
In accordance with the revised Rules:
- The three-hour window for responding to urgent removal notices is much shorter than the previous 36-hour window.
- Certain categories of harmful content must be removed within two hours.
- Grievance redress timelines have been shortened from 15 days to seven days, and certain matters must be resolved within 36 hours.
These faster deadlines reflect the urgency created by rapidly evolving AI misuse and the regulatory emphasis on prompt responses to harmful digital content. They also present operational challenges for intermediaries, particularly smaller organizations, requiring automated alerting, real-time monitoring, and round-the-clock trust and safety operations.
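For compliance teams, the shortened windows above reduce to a deadline calculation per notice category. The category names in this sketch are assumptions for illustration, not terms defined in the Rules; only the durations come from the timelines described above.

```python
from datetime import datetime, timedelta, timezone

# Illustrative deadline table based on the windows described above;
# the category names are assumptions, not terms defined in the Rules.
RESPONSE_WINDOWS = {
    "urgent_removal_notice": timedelta(hours=3),
    "priority_content_removal": timedelta(hours=2),
    "grievance_expedited": timedelta(hours=36),
    "grievance_standard": timedelta(days=7),
}

def response_deadline(received_at: datetime, category: str) -> datetime:
    """Compute when a notice in the given category must be actioned."""
    return received_at + RESPONSE_WINDOWS[category]

received = datetime(2026, 2, 20, 9, 0, tzinfo=timezone.utc)
print(response_deadline(received, "urgent_removal_notice"))
# 2026-02-20 12:00:00+00:00
```

A two- or three-hour window received outside business hours still expires on schedule, which is why the article notes that round-the-clock trust and safety coverage becomes unavoidable.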
Additionally, law enforcement takedown requests must be authorized in writing and issued by officers of at least Deputy Inspector General rank, establishing procedural controls around enforcement.
V. SGI Tools Providers' Due Diligence
Rule 3(3) requires enhanced due diligence from organizations offering tools or services that enable the creation or manipulation of SGI. These intermediaries must implement reasonable and appropriate technical and operational controls, which could include:
- Automated detection systems;
- Risk-reduction classifiers and procedures;
- Measures to prevent the spread of unlawful synthetic content, such as child exploitation content, non-consensual intimate images, tampered electronic records, extremist content, and weapons-related content.
This extensive coverage is a reflection of the regulatory realization that platform distribution and tool creation ecosystems must be included in SGI control.
VI. Effects by Sector
A. Platforms for AI and Content Creation
AI firms and content creation platforms will need to develop watermarking systems, persistent metadata infrastructure, classification algorithms, and tracking pipelines. The required investment could be substantial, particularly for smaller businesses; however, clearer regulatory requirements may also reduce ambiguity and serve as a basis for compliance and product development plans.
B. Social Media and Intermediary Platforms
To comply with SGI declarations, verification, and labelling requirements, platforms must redesign upload workflows, user interface layers, verification checkpoints, and backend systems. Strong identity disclosure procedures must also be put in place so that victims of SGI abuse can identify suspected offenders through legal channels.
C. Entertainment, Advertising, and News Media
These industries need to make sure that stringent editorial review procedures are in place to identify and flag synthetic content, especially when it comes to politically sensitive or election-sensitive material. To stop the spread of false information, editorial teams and advertising agencies will need to put compliance protocols in place.
D. Emerging Entities and Startups
The Rules provide regulatory clarity that was previously lacking in this area, even if they may result in expenses associated with compliance. Now, startups may organize their legal and operational compliance under clear SGI governance expectations.
VII. India's Regulations in a Global Context
India's strategy aligns with comparable global trends. For example:
- The EU's AI Act places a strong emphasis on risk-based governance and transparency for generative AI systems, including human oversight and labelling for high-risk AI applications.
- China's internet information service regulations now mandate AI content labelling.
- In response to increased international attention to the negative effects of generative AI, regulatory efforts in Malaysia and Indonesia have included blocking AI services that fail to prevent sexualized AI content.
These global developments highlight a shared challenge: how to govern AI-powered digital environments without restricting free speech or innovation.
VIII. Important Legal and Policy Issues
The IT Rules 2026 have been criticized on a number of fronts despite their extensive scope:
A. Risks of Censorship and Excessive Definitions
Broad definitions of SGI and strict labelling rules, according to civil society and digital rights groups, may unintentionally stifle free speech or result in over-censorship when automated detection techniques make mistakes.
Automated systems frequently have significant error rates and may fail to correctly detect or preserve metadata, which can result in wrongful takedowns or misclassification. This is especially true of current deepfake detectors.
B. Operational Feasibility
Because of the shortened deadlines for takedowns and grievance resolution, platforms, particularly smaller and mid-tier intermediaries, must maintain 24/7 monitoring and response capabilities. This could tilt innovation in favour of larger tech companies by disproportionately burdening organizations with fewer resources.
C. Issues with Due Process and Free Speech
Pre-publication checks, mandatory disclosures, and expanded due diligence raise concerns about prior restraint, compelled speech, and the balance between expression and rights protection. Drawing the line between harmful content and legitimate AI creativity remains difficult.
IX. Summary
The IT Rules 2026 are a bold regulatory response to the undeniable challenges posed by deepfakes, synthetic content, and generative AI. By integrating SGI into India's digital governance framework, imposing transparency requirements, accelerating content takedowns, and strengthening intermediary due diligence, the Rules seek to protect users' rights while curbing harmful digital practices.
Their success will depend on effective implementation, ongoing dialogue with industry and civil society, and evolving law that balances innovation, free speech, and public safety. The international digital policy community will watch closely as India navigates the challenges of AI regulation, to see whether its approach can serve as a model for responsible AI governance in the digital era.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.