The Information Technology framework in India regulates how digital platforms, social media, online gaming services, and intermediaries function. With rapid growth of social media, AI-generated content, and online gaming, earlier rules became insufficient to handle new digital challenges.
Information Technology Rules (Before Amendment)
Before the February 2026 amendments, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021[1] mainly aimed to make the internet safer and more accountable by placing responsibilities on social media platforms and other online intermediaries. These rules required platforms to clearly inform users about acceptable online behaviour and prohibited content. Platforms were also required to remove unlawful or harmful content after receiving orders from courts or government authorities. Special protection measures were introduced for women and children, and large social media companies had to appoint officers in India to handle complaints and ensure compliance. Users were also given a formal system to file complaints if harmful content affected them. Overall, the rules focused mainly on removing illegal content after complaints were received rather than preventing harmful content beforehand.
Key Features Before Amendment:
- Due diligence by platforms: Social media platforms had to publish rules and inform users not to post illegal or harmful content.
- Content removal rules: Platforms had to remove unlawful content within 36 hours after receiving government or court directions.
- Protection for women & children: Content involving private images or non-consensual intimate content had to be removed within 24 hours.
- Extra rules for large platforms: Big platforms (with more than 50 lakh, i.e., 5 million, users) had to appoint compliance officers and publish monthly reports showing action taken on complaints.
- Traceability requirement: Messaging platforms could be asked to identify the first sender of harmful messages in serious cases like national security threats or sexual crimes.
- User complaint system: Users could complain to a Grievance Officer, and if not satisfied, appeal before a government-appointed appellate body.
- Reactive approach: The system mainly worked after harmful content appeared, rather than preventing it beforehand.
Challenges Faced by IT Rules Before the February 2026 Amendment
Despite creating a framework for regulating online content, the IT Rules, 2021 faced several practical and legal challenges due to rapid technological changes, especially with the rise of artificial intelligence and deepfake technologies. One major concern was that harmful or misleading content often spreads within minutes on social media, making the earlier 36-hour content removal timeline insufficient to prevent damage. The rules also lacked clear mechanisms to regulate AI-generated or synthetically created content such as deepfake videos and voice cloning, creating regulatory gaps in tackling modern digital threats.
Another significant issue was the absence of mandatory labelling standards for AI-generated content, making it difficult for ordinary users to distinguish between genuine and manipulated media. Additionally, many users found the grievance redressal system slow or ineffective, often forcing them to approach courts for relief when platform responses were unsatisfactory. The rules were also legally challenged in several courts on the grounds that certain provisions might restrict freedom of speech or go beyond the authority granted under the parent IT Act.
Further, the requirement for messaging platforms to identify the first originator of certain messages raised privacy concerns, as critics argued it could weaken end-to-end encryption protections. Finally, platforms themselves faced operational difficulties in moderating the enormous volume of online content, often struggling to differentiate harmful material from legitimate satire, parody, or creative expression without advanced technological tools.
Key challenges included:
- Harmful content spreads faster than platforms could remove it.
- No strong rules existed to regulate AI-generated or deepfake content.
- Users could not easily identify fake or AI-generated media.
- Complaint systems were slow and often ineffective.
- Rules faced court challenges over free speech concerns.
- Traceability rules raised privacy and encryption concerns.
- Platforms struggled to moderate massive amounts of online content accurately.
Overall, these challenges highlighted the need for stronger and updated regulations, eventually leading to the February 2026 amendments that aimed to address these growing digital risks more effectively.
Need for Amendment (Leading to the February 2026 Changes)
Although the IT Rules, 2021 created an important framework to regulate online platforms and social media, rapid technological developments soon exposed weaknesses in the system. The rise of artificial intelligence, deepfakes, voice cloning, and synthetic media made it easier to spread misinformation, commit online fraud, and misuse personal images or identities. The earlier rules were mainly designed to deal with traditional harmful content and were not strong enough to handle these new digital risks.
One major issue was the speed at which harmful content spreads online. Fake videos or manipulated media could go viral within minutes, influencing public opinion, damaging reputations, or causing panic. However, platforms were allowed up to 36 hours to remove unlawful content after receiving government or court orders. By that time, the damage was often already irreversible.
Another serious gap was the absence of specific laws dealing with AI-generated or synthetically created content. There were no clear obligations on platforms to detect or regulate deepfakes or voice cloning technologies. At the same time, users had no reliable way to identify whether a video, image, or audio clip was real or artificially generated, because platforms were not required to label AI-created content or attach digital identification markers.
Users also faced problems with the grievance redressal system, which was often slow or ineffective. Many people had to approach courts directly when their complaints were not resolved properly by platforms. In addition, certain provisions of the rules were legally challenged in courts for possibly restricting freedom of speech or exceeding powers granted under the parent IT Act.
Privacy concerns also emerged due to the rule requiring messaging platforms to identify the first originator of certain messages in serious cases. Critics argued that such traceability requirements could weaken end-to-end encryption and threaten user privacy.
Another practical difficulty was the massive scale of online content, making it difficult for platforms to accurately differentiate between harmful material and legitimate satire, parody, journalism, or creative expression without advanced technology.
The February 2026 amendment[2] was therefore introduced to address what many described as a period of poorly regulated AI and digital manipulation. Deepfakes were increasingly being used for financial fraud, political misinformation, identity misuse, and non-consensual intimate imagery, creating an urgent need for stricter regulation.
To prevent harmful content from going viral, the amendment introduced much faster takedown timelines, including a three-hour removal requirement for government or court-declared unlawful content and even faster action for highly sensitive deepfake material.
The amendment also aimed to increase transparency and accountability by requiring platforms to introduce labelling and digital fingerprinting mechanisms so users could distinguish between real and AI-generated content. Concerns about manipulation during elections through AI-cloned voices and fake videos further strengthened the need for regulation to protect democratic processes.
Additionally, the amendment clarified legal responsibilities of platforms by stating that failure to comply with new due diligence obligations could result in the loss of safe harbour protection, making platforms legally responsible for harmful content hosted on their services. The update also aligned references from the old Indian Penal Code to the newer Bharatiya Nyaya Sanhita, 2023,[3] ensuring consistency with India's modern criminal law framework.
In simple terms, the amendment became necessary because the earlier rules were not strong enough to deal with modern digital threats. The new changes aim to create a safer online environment while balancing freedom of expression, accountability of platforms, and protection of users from digital harm.
IT Rules After Amendment (Post–February 2026 Framework)
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified on 10 February 2026 and brought into force from 20 February 2026 by the Ministry of Electronics and Information Technology (MeitY), mark a significant shift in India's digital regulation policy. Unlike the earlier framework that mainly reacted after harmful content appeared online, the amended rules move toward proactive regulation, especially addressing risks created by artificial intelligence and deepfake technologies.
The amendment recognizes that modern digital threats spread extremely quickly and therefore introduces stricter obligations, faster enforcement timelines, and clearer accountability mechanisms for social media platforms and other intermediaries.
1. Regulation of Synthetically Generated Information (SGI)
For the first time in India, the law formally recognizes Synthetically Generated Information (SGI)[4]: content such as videos, images, or audio that is created or altered using artificial intelligence but appears real.
To prevent misuse of such technology, the amendment introduces:
- Mandatory labelling: AI-generated content must clearly display visible labels so users know that the content is artificial.
- Audio disclosures: AI-generated audio must include clear voice disclosures so listeners understand it is synthetic.
- Traceability measures: Platforms must embed permanent digital identifiers (metadata or fingerprints) into AI-generated files to help trace their origin if misuse occurs.
- Tampering prohibited: Removing or disabling these identifiers is strictly prohibited.
- Reasonable exemptions: Normal editing practices like colour correction, noise reduction, accessibility tools, or academic training material that do not create false impressions are excluded from SGI regulation.
In simple terms, users should now be able to tell whether content is real or AI-generated.
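The traceability measure described above can be illustrated with a small sketch. The code below is a hypothetical, simplified model of a "digital fingerprint": a platform computes a hash of the file, attaches a provenance record, and signs it so that later tampering with either the content or the record can be detected. The key name, record fields, and signing scheme are all assumptions for illustration; real-world deployments would rely on established provenance standards and securely managed keys.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; in practice this would be securely managed.
PLATFORM_KEY = b"example-signing-key"

def fingerprint_sgi(content: bytes, generator: str) -> dict:
    """Build a tamper-evident provenance record for AI-generated content."""
    digest = hashlib.sha256(content).hexdigest()  # content fingerprint
    record = {"sha256": digest, "generator": generator, "label": "AI-generated"}
    payload = json.dumps(record, sort_keys=True).encode()
    # Sign the record so that removing or editing the identifiers is detectable.
    record["signature"] = hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_sgi(content: bytes, record: dict) -> bool:
    """Check that neither the content nor its provenance record was altered."""
    expected = {k: record[k] for k in ("sha256", "generator", "label")}
    payload = json.dumps(expected, sort_keys=True).encode()
    sig_ok = hmac.compare_digest(
        record["signature"],
        hmac.new(PLATFORM_KEY, payload, hashlib.sha256).hexdigest(),
    )
    return sig_ok and hashlib.sha256(content).hexdigest() == record["sha256"]
```

If either the file bytes or any field of the record is changed after signing, verification fails, which is the property the "tampering prohibited" rule is meant to guarantee.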
2. Faster Takedown and Complaint Resolution Timelines
One of the biggest changes addresses how fast harmful content spreads online. To prevent viral misinformation or abuse, platforms must now act much faster:
- 3-hour takedown rule: Platforms must remove or block unlawful content within three hours of receiving a government or court order (earlier allowed 36 hours).
- 2-hour emergency removal: Highly sensitive content such as deepfake nudity or intimate imagery must be removed within two hours.[5]
- Faster complaint handling: User complaints must now be acknowledged and resolved within seven days instead of fifteen.
- Urgent cases: Complaints related to identity theft or serious harm must be resolved within 36 hours.
This ensures that harmful content is controlled before it spreads widely.
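The tiered timelines above can be expressed as a simple deadline calculation. The sketch below is a hypothetical compliance helper (the category names are invented for illustration) that maps each complaint or order type to its statutory window and computes the latest time by which the platform must act.

```python
from datetime import datetime, timedelta, timezone

# Illustrative mapping of the amended timelines to time windows.
# Category names are hypothetical labels, not terms from the rules.
DEADLINES = {
    "court_or_government_order": timedelta(hours=3),
    "intimate_imagery_deepfake": timedelta(hours=2),
    "identity_theft_complaint": timedelta(hours=36),
    "general_user_complaint": timedelta(days=7),
}

def removal_deadline(category: str, received_at: datetime) -> datetime:
    """Return the latest time by which the platform must act on a request."""
    return received_at + DEADLINES[category]

# Example: a government order received at noon UTC must be actioned by 3 p.m.
order_time = datetime(2026, 3, 1, 12, 0, tzinfo=timezone.utc)
deadline = removal_deadline("court_or_government_order", order_time)
```

A real compliance system would additionally log receipt times, escalate approaching deadlines, and retain the records discussed later in this article.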
3. Stronger Responsibilities for Large Platforms (SSMIs)
Platforms with more than five million users, classified as Significant Social Media Intermediaries (SSMIs), now face stricter duties even before content becomes public.
New obligations include:
- User disclosure: Users uploading content must declare whether it is AI-generated.
- Technical verification: Platforms must use automated tools to verify such declarations before allowing publication.
- Risk of liability: If platforms fail to label AI-generated content or miss the strict takedown timelines, they lose safe harbour protection under Section 79 of the IT Act.[6] This means they can become legally responsible for harmful user content.
This change makes platforms more accountable rather than allowing them to simply react after damage occurs.
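The declaration-plus-verification duty can be sketched as a simple pre-publication decision rule. In the hypothetical helper below, a user's self-declaration is cross-checked against the score of an automated synthetic-content detector; the function name, threshold, and outcome labels are illustrative assumptions, not terms defined in the rules.

```python
def review_upload(declared_ai: bool, detector_score: float,
                  threshold: float = 0.8) -> str:
    """Cross-check a user's AI declaration against an automated detector.

    detector_score is assumed to be a 0..1 probability that the content
    is synthetic, produced by some platform-side classifier.
    """
    detected_ai = detector_score >= threshold
    if detected_ai and not declared_ai:
        # Undeclared synthetic content: hold for labelling before publication.
        return "flag_for_labelling"
    if declared_ai:
        # Declared synthetic content is published with the mandatory label.
        return "publish_with_label"
    return "publish"
```

The design point is that the declaration alone is not trusted: publication paths depend on the platform's own technical verification, which is what the amendment requires of SSMIs.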
4. Regular User Awareness Notices
Platforms must now remind users every three months, instead of once a year, about platform rules and the legal consequences of posting unlawful content, including possible criminal liability under the updated criminal laws. This step aims to increase public awareness and encourage responsible online behaviour.
Overall Impact in Simple Terms
In simple language, the new amendments mean:
- AI-generated fake videos and audio are now directly regulated.
- Users must be informed when content is artificial.
- Harmful content must be removed much faster.
- Complaint systems must respond quicker.
- Large platforms must check content more carefully.
- Platforms can be held legally responsible if they fail to follow the rules.
Overall, the amendment moves India's digital regulation from a reactive system to a preventive and accountability-driven framework, aiming to protect users while maintaining transparency and trust in online spaces.
Comparative Analysis
| Aspect | Before Amendment | After Amendment |
|---|---|---|
| Approach | Reactive removal of content | Proactive regulation of AI content |
| Takedown timeline | 36 hours | 3 hours (2 hours for urgent cases) |
| AI/deepfake regulation | Not clearly regulated | Mandatory labelling & traceability |
| User complaints | 15-day resolution | 7 days (36 hours for urgent cases) |
| Platform liability | Limited | Higher liability for non-compliance |
| Platform duties | Basic compliance | Pre-publication checks & verification |
The February 2026 amendments significantly increase compliance responsibilities for online platforms by requiring faster removal of unlawful content and proactive regulation of AI-generated media. Platforms must now respond quickly to complaints, verify AI-generated content, and maintain greater transparency to avoid legal liability. For businesses, this means strengthening internal compliance and monitoring systems. Overall, the amendments aim to create a safer and more trustworthy digital environment while ensuring clearer accountability for online intermediaries.
Practical Impact:
The 2026 amendments mark a clear shift from the earlier "wait and act later" model to a system where platforms must actively monitor and respond to harmful content, particularly AI-generated material. The changes affect not only large technology companies but also content creators, users, and regulators. The practical consequences can be understood as follows:
1. Impact on Social Media Platforms and Intermediaries
The amendments significantly increase compliance pressure on digital platforms operating in India. The earlier 36-hour response window has been replaced with a much stricter three-hour timeline for removal of unlawful content upon official direction. Failure to comply can result in loss of safe harbour protection, potentially exposing platforms to direct legal liability for user content.
As a result, many platforms are strengthening automated detection tools to identify AI-generated or manipulated content in real time and are expanding compliance teams to ensure immediate response to government or court orders. Several companies are also establishing round-the-clock response mechanisms in India to handle urgent takedown requests and regulatory communications.
2. Impact on Content Creators and Influencers
Content creators and influencers must now exercise greater caution when using AI tools such as voice cloning, face-swapping, or synthetic video generation. AI-generated or altered content is required to carry proper disclosure, and failure to provide such disclosure may lead to content removal or account penalties.
At the same time, creators working in satire or parody sometimes face practical challenges, as even humorous or artistic content may require labelling if AI tools are used, potentially affecting creative presentation.
3. Impact on General Users
For ordinary users, the amendments aim to provide faster protection against harmful online content. Victims of non-consensual deepfake imagery or similar misuse can now expect quicker removal of such content, reducing the period during which harmful material circulates online.
Users are also likely to see clearer indicators or labels identifying AI-generated or manipulated content, helping them distinguish authentic information from synthetic media and reducing the risk of falling victim to online fraud or misinformation. However, ongoing debates continue regarding privacy implications, particularly where traceability mechanisms may interact with encrypted communication services.
4. Impact on Government Enforcement
From an enforcement perspective, the amendments provide authorities with faster mechanisms to address harmful or unlawful online content, especially during sensitive situations such as elections or public emergencies. The shortened timelines allow quicker intervention to prevent misinformation or harmful material from spreading widely.
Compliance Framework Under the IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026
The 2026 Amendments significantly strengthen the compliance obligations of digital platforms, especially in relation to AI-generated and harmful online content. The focus has shifted from reactive moderation to proactive responsibility, requiring platforms to adopt faster response systems, stronger monitoring tools, and clearer accountability mechanisms. An overview of the compliance framework is set out below:
1. Strict Content Removal Timelines
Platforms must now act within clearly defined timelines:
- Content must be removed within 3 hours when directed by a court order or government authority.
- In urgent situations, such as non-consensual deepfake nudity or intimate imagery, action must be taken within 2 hours.
- User complaints must be acknowledged within 7 days.
- Serious complaints such as identity theft or impersonation must be resolved within 36 hours.
These timelines aim to reduce the viral spread of harmful content and provide quicker relief to victims.
2. Mandatory Labelling and Traceability of AI Content
Where platforms allow AI-generated content:
- Synthetic images and videos must carry clear labels informing viewers that the content is AI-generated.
- Synthetic audio must include audio disclosures or visible warnings.
- Platforms must embed metadata or unique identifiers to help trace the origin of synthetic content.
- Such identifiers must be protected so users cannot remove or alter them.
Additionally, platforms must automatically block or prevent uploads involving:
- Child sexual abuse material (CSAM), and
- Non-consensual intimate imagery.
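Automatic blocking of known prohibited material is commonly implemented through hash matching: the platform compares a digest of each upload against a database of digests of previously identified illegal content. The sketch below uses exact SHA-256 matching for simplicity; production systems typically use perceptual or industry-shared hashes so that minor edits do not defeat the match. The sample blocklist entry is purely illustrative.

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known prohibited files,
# analogous to shared industry hash databases.
KNOWN_PROHIBITED_HASHES = {
    hashlib.sha256(b"known-prohibited-sample").hexdigest(),
}

def should_block_upload(file_bytes: bytes) -> bool:
    """Block an upload whose digest matches a known prohibited item."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_PROHIBITED_HASHES
```

This kind of check runs before publication, which is what distinguishes the amended "prevent uploads" duty from the earlier after-the-fact removal model.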
3. Additional Duties for Large Platforms
Platforms with over 5 million users in India face stricter responsibilities:
- Users must declare whether uploaded content is AI-generated.
- Platforms must deploy automated tools to verify such declarations.
- Failure to comply risks losing Section 79 Safe Harbour protection, exposing the platform to direct legal liability.
4. Transparency and Reporting Obligations
To ensure user awareness and regulatory accountability:
- Platforms must send periodic user notifications explaining platform policies and legal consequences of misuse.
- Legal references must be updated to reflect new criminal law provisions, including replacing references to the IPC with the Bharatiya Nyaya Sanhita, 2023.
- Serious offences involving synthetic content must be reported immediately to authorities.
- Detailed records of content takedowns must be maintained for at least 180 days.
5. Overall Compliance Expectation
The amendments make it clear that platforms are no longer mere intermediaries but are expected to actively safeguard the digital ecosystem. Compliance now requires:
- Real-time moderation capabilities,
- Robust user grievance systems,
- Clear AI-content identification mechanisms, and
- Continuous monitoring and reporting practices.
The Way Forward
Looking ahead, several developments are likely to shape how these rules operate in practice:
1. Technology Will Drive Compliance
Platforms will need to move beyond basic filters and invest in advanced AI-detection systems capable of identifying synthetic content automatically. Industry adoption of global content authenticity standards and metadata verification tools will become essential rather than optional.
2. Courts Will Play a Crucial Role
As disputes arise, courts will need to clarify where legitimate creative expression ends and harmful synthetic manipulation begins. Clear judicial guidance will be necessary to protect satire, parody, and artistic freedom while preventing malicious misuse.
3. Need for Global Coordination
Since AI-generated content easily crosses borders, India's regulatory push may encourage international cooperation on traceability and authenticity standards so that safeguards work consistently across jurisdictions.
4. Public Awareness Must Complement Regulation
Rules alone cannot eliminate misinformation. Public awareness campaigns will be necessary so users learn to recognize AI labels and authenticity indicators just as easily as they recognize verified accounts today.
5. Transition Toward a Comprehensive Digital Law
These amendments are widely seen as an interim step toward the forthcoming Digital India Act, which is expected to create a broader and more permanent framework governing online platforms, digital rights, and emerging technologies.
Footnotes
1. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021.
2. Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified by MeitY on 10 February 2026.
3. Bharatiya Nyaya Sanhita (BNS), 2023, Government of India; replaces IPC references in digital content regulation.
4. IT Rules Amendment, 2026, provisions relating to Synthetically Generated Information (SGI).
5. IT Rules Amendment, 2026, amended due diligence obligations regarding expedited takedown timelines.
6. Section 79, Information Technology Act, 2000, safe harbour protection for intermediaries.
BY
Vijay Pal Dalmia, Advocate
Supreme Court of India & Delhi High Court
Email id: vpdalmia@vaishlaw.com
Mobile No.: +91 9810081079
Linkedin: https://www.linkedin.com/in/vpdalmia/
Facebook: https://www.facebook.com/vpdalmia
X (Twitter): @vpdalmia
© 2026, Vaish Associates Advocates. All rights reserved.
1st & 11th Floors, Mohan Dev Building, 13 Tolstoy Marg, New Delhi-110001 (India).
The content of this article is intended to provide a general guide to the subject matter. Specialist professional advice should be sought about your specific circumstances. The views expressed in this article are solely of the authors of this article.