INTRODUCTION
As part of the Indian government's efforts to regulate the use of artificial intelligence ("AI"), the Ministry of Electronics and Information Technology ("MeitY") issued amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 ("IT Rules"), on February 10, 2026 ("Amendments").1 These Amendments follow a stakeholder consultation process undertaken by the ministry on a draft version of the Amendments issued by MeitY on October 22, 2025 ("Draft Amendments").2 The Amendments were supplemented by a set of Frequently Asked Questions issued by MeitY, which are intended to clarify and elaborate on the nuances of the due diligence measures prescribed by the Amendments ("FAQs").3
While MeitY has in the past attempted to address specific considerations relating to the use of AI by intermediaries through several advisories issued in connection with the IT Rules,4 the Amendments are the ministry's first concerted attempt at substantive governance in this area, with far-reaching effects for intermediaries governed by the IT Rules. In addition to provisions governing content created using AI, MeitY has also modified certain timelines prescribed in the IT Rules. The Amendments, which took effect on February 20, 2026, have created a degree of concern and disruption within the industry, given the magnitude of the changes that intermediaries must incorporate to ensure compliance within a short timeline.
APPLICABILITY OF THE AMENDMENTS
The Amendments primarily deal with synthetically generated information ("SGI"), which in common parlance is understood to mean content that involves the utilisation of AI in some form. While recognising that there are benefits associated with the responsible use of SGI, such as fostering innovation and growth, MeitY's primary intent behind the Amendments, as evidenced by the explanatory note to the Draft Amendments5 and the FAQs, is to bring about robust measures that can be adopted (specifically by intermediaries) to curb any misuse of such SGI, including the spreading of misinformation or the creation and dissemination of deepfakes.
2.1. Nature of Information Within the Ambit of SGI
SGI, as defined in the IT Rules, covers only 'audio, visual, and audio-visual content' that meets the criteria prescribed in the IT Rules.6 By way of example, photographs, videos and sound recordings would squarely fall within 'audio, visual, and audio-visual content'. Under the definition in the Draft Amendments, even AI-generated text alone had the potential to be treated as SGI. The Amendments, however, have narrowed the scope to audio, visual, and audio-visual content only. In fact, the FAQs explicitly clarify that pure text or written outputs by themselves would not be construed as SGI.7 Limiting the scope of SGI to 'audio, visual, and audio-visual content' underscores MeitY's primary intent behind these Amendments, which is to curb and control deepfakes.
That said, the Amendments read with the FAQs clarify that while stand-alone text that spreads misinformation, or text that accompanies an SGI, may not in itself be SGI, it could still be treated as unlawful information based on the law it violates.
2.2. A Catch-All Definition
The definition of SGI is intended to include all artificially or algorithmically created, generated, modified or altered audio, visual or audio-visual content that appears real, authentic or true and is likely to be perceived as indistinguishable from a natural person or a real-world event.8 Certain exceptions to this definition have been prescribed, such as routine editing, enhancing or formatting that does not result in false documents or false electronic records, and use of tools in a manner that does not materially alter or misrepresent the underlying content.9 These exceptions would certainly reduce the burden on intermediaries to some extent. For the rest, intermediaries could put in place adequate mechanisms, by way of appropriate technical tools, to identify and flag content on their platforms that falls outside the purview of SGI. This would aid intermediaries in determining which content on their platforms need not be checked for compliance with the SGI-related provisions of the IT Rules.
While the carving out of exceptions is a welcome addition relative to the Draft Amendments, under which the scope of SGI was broader, the definition remains a catch-all and seems excessive given that the primary intent of the Amendments is to curb deepfakes. As currently framed, the definition of SGI may not be proportionate to MeitY's intent, which is essentially to curb the harms associated with the use of SGI. To meaningfully achieve this intent, rather than a blanket definition under which the same obligations apply irrespective of the nature of the SGI, the ideal approach would have been some form of categorisation within the definition. Possible factors for such categorisation include the intent to cause harm and the probable extent of the harm. While these factors can themselves be subjective, clear metrics could have been assigned within such categorisation, enabling one to determine the class of SGI into which the content in question falls and, accordingly, the corresponding compliance requirements prescribed in law.
2.3. Metrics for Authenticity
Given that the standard for determining whether content is SGI is whether it appears authentic or true, intermediaries may need to deploy appropriate tools capable of making such a determination. That said, the determination can itself be subjective, since no metrics are prescribed for deciding what can and cannot be construed as authentic. One factor that could be considered when determining authenticity is whether the SGI misrepresents the status quo. For instance, if a photograph of a celebrity is merely enhanced slightly to adjust lighting, it could fall within the exception. However, if the same photograph is edited using automated tools in a manner that significantly enhances the celebrity's physical features, such that the celebrity remains recognisable but the enhanced features are extremely misleading, such an image could be perceived to be SGI.
2.4. Applicability to Only Intermediaries
Given that the IT Rules apply to and are relevant for intermediaries,10 the intent behind the Amendments is to prescribe measures that intermediaries involved in enabling or facilitating the creation, modification or dissemination of SGI must take. By way of example, if a social media platform provides tools for the creation or modification of SGI, such as AI tools that can generate an image of an individual standing in front of the Eiffel Tower, or even enables or facilitates the sharing of such an image (which the user may have created elsewhere, using tools independent of the social media platform), the platform would be covered by the SGI-specific requirements under the IT Rules and would have to ensure compliance with them.
It is interesting to note that the scope of intermediary activities covered under the Amendments is broader than under the Draft Amendments, which covered only intermediaries involved in the creation, generation or alteration of SGI, whereas the Amendments also cover intermediaries involved in the sharing or dissemination of SGI. This expansion of scope clearly conveys MeitY's intent to govern and regulate all intermediaries who may, in any way, be associated with any SGI accessible through their platforms. Additionally, the Amendments also appear to be aimed at factoring in the wide dissemination common on social media and other digital platforms, which makes it difficult to trace and hold accountable the originating source intermediary.
IMPACT ON INTERMEDIARIES
3.1. New Due Diligence Obligations for SGI
The Amendments to the IT Rules impose a proactive and continuing technical compliance obligation on any intermediary that offers a computer resource which may enable creation or dissemination of SGI.11 Such intermediaries are required to deploy reasonable and appropriate technical measures, including 'automated tools or other suitable mechanisms' to ensure that users are not permitted to create or transmit SGI that violates any law in force.12 The obligation appears to be an extension and codification of advisories dated December 26, 2023, and March 15, 2024 issued by MeitY to intermediaries ("MeitY Advisories").13
The rule identifies 4 (four) express categories of high-risk unlawful SGI against which intermediaries must specifically act:
- Sexually explicit and obscene material: child sexual exploitative and abuse material ("CSEAM"), non-consensual intimate imagery ("NCII"), or content that is obscene, pornographic, paedophilic, invasive of another's bodily privacy, vulgar, indecent, or sexually explicit.
- False documents and electronic records: false documents or electronic records. Electronic records as defined under the Information Technology Act, 2000 ("IT Act")14 may include any information or data that is digital in nature and hence is a sweeping reference.
- Arms and explosives content: SGI relating to preparation, development, or procurement of explosive material, arms, or ammunition. Notably, abetment of offences committed under the Explosives Act, 188415 and the Explosive Substances Act, 190816 are also punishable under these legislations. Creating or disseminating such SGI could also be deemed abetment in this context.
- False depiction of real persons and events: SGI that falsely portrays the attributes of a natural person or the occurrence of a real-world event in a manner likely to deceive. Most deepfakes with real-world resemblance to persons or events would be covered under this restriction if there is an intent of deception.
Noting that the above categories are not exhaustive, the due diligence obligation also extends to any SGI that is unlawful. For instance, a deepfake of an imaginary economist (bearing no resemblance to an actual person) defrauding an individual to make questionable investments may still be required to be flagged and taken down by an intermediary.
The Amendments also impose a disclosure obligation on the intermediary to inform their users that using the intermediary's platform to create or transmit SGI falling within the above prohibited categories may attract criminal liability under a range of penal statutes.17 Users must further be informed that such violations may result in immediate content removal, suspension or termination of their account, disclosure of their identity to the victim or a person acting on the victim's behalf, and mandatory reporting to the appropriate law enforcement authority where the violation constitutes a mandatorily reportable offence under applicable law.
The obligation to deploy technical measures including automated tools represents a material departure from the pre-Amendments framework, where due diligence obligations were largely oriented around disclosure in terms of use, post-publication notice and takedown. The Amendments shift the compliance obligations upstream to the architecture of the platform itself.
The category of platforms now subject to this due diligence obligation is fundamentally widened too. The most impacted intermediaries may be those whose core product functionality depends on SGI generation and sharing, such as intermediaries offering generative AI image and video tools, voice cloning tools, and AI assistants native to instant messaging platforms. However, various other categories of intermediaries are reined in too, such as social media platforms and discussion forums that facilitate transmission of SGI even with no natively integrated AI tool; cloud storage and hosting service providers; enterprise collaboration tools such as Microsoft Teams and Slack; video conferencing tools with AI-enhanced filters or virtual backgrounds; e-commerce platforms that host AI-generated product imagery, listings, or user reviews; and even gaming platforms and dating applications in relation to AI-generated avatars, artificial game scenarios or in-game voice modulations.
3.2. Safety-by-Design Architecture
Intermediaries are now required to shift their approach to SGI enforcement from passive compliance, comprising restricted content policies, terms of use and notice-and-takedown mechanisms, to implementing a safety-by-design framework prior to and independent of any user complaints or government or court intimations.
Technical measures including automated tools should be implemented throughout the full user journey of an intermediary's product or service as follows.
- Input layer measures: Intermediaries may opt to undertake input and prompt filtering at the time of content creation or upload, screening user instructions against a curated list of prohibited content categories and triggers. This list may be aligned to a risk classification mechanism with varying degrees of risk. The 4 (four) express categories of infringing content mentioned in para 3.1 above may be flagged as the highest risk category and immediately reviewed. The list may be updated periodically to account for evolving real-world considerations and be guided by regulatory enforcement actions by MeitY. Model weights underlying generative tools must be fine-tuned to inherently resist prompt injections and manipulation strategies, since such strategies are typically linked to the rise in creation and transmission of unlawful SGI.
- Output layer measures: Output monitoring, i.e., monitoring of SGI to be published or already published, may be undertaken by intermediaries by scrubbing such content against prohibited categories, especially high-risk categories. Social media platforms that thrive on engagement and virality may also need to relook at their virality algorithms. Since virality is also flagged by MeitY as a core risk,18 social media platforms may specifically discourage virality of SGI that tends to be unlawful. This would also include identifying SGI that is identical or similar to SGI already flagged and taken down by the intermediary.
- Measures beyond automated tools: Additionally, repeated attempts by users to create or transmit content that is on the unlawful SGI spectrum should be logged and escalated internally. Such attempts can also trigger actions such as user account suspension and termination. Given the imperfect accuracy of any automated tools, intermediaries should implement human-in-the-loop review mechanisms particularly for borderline cases and cases escalating toward account suspension or identity disclosure. This is also consistent with globally recognised principles of meaningful human oversight encapsulated in the European Union (EU) Artificial Intelligence Act19 and the Organisation for Economic Co-operation and Development (OECD) AI principles.20 Intermediaries may require users to declare that content does not constitute unlawful SGI for good measure. Regular audits of the technical architecture implementing these safeguards may be conducted too.
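Purely as an illustrative sketch, the tiered input-layer screening described above could be modelled as a classifier that checks a prompt against a curated trigger list, highest-risk tier first. The trigger phrases, tier names and the `screen_prompt` helper below are all hypothetical; a production system would rely on trained classifiers rather than keyword matching.

```python
from dataclasses import dataclass
from enum import Enum

class Risk(Enum):
    HIGH = "high"      # e.g. the four express categories in para 3.1
    MEDIUM = "medium"
    LOW = "low"

# Hypothetical curated trigger list, updated periodically per the
# risk-classification approach sketched in the text.
TRIGGERS = {
    Risk.HIGH: ["non-consensual intimate", "explosive material"],
    Risk.MEDIUM: ["fake document", "impersonate"],
}

@dataclass
class ScreeningResult:
    risk: Risk
    matched: list

def screen_prompt(prompt: str) -> ScreeningResult:
    """Classify a user prompt against the trigger list, checking the
    highest-risk tier first so the strictest handling wins."""
    text = prompt.lower()
    for risk in (Risk.HIGH, Risk.MEDIUM):
        hits = [t for t in TRIGGERS[risk] if t in text]
        if hits:
            return ScreeningResult(risk, hits)
    return ScreeningResult(Risk.LOW, [])
```

A platform might block and escalate `Risk.HIGH` results for immediate human review while letting `Risk.LOW` prompts proceed to output-layer checks.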
However, many of these design-level changes may not always be possible for the intermediary to implement, because the architecture of the SGI generation tools may not be native to, or in the control of, the intermediary's platform. In such cases, intermediaries may ensure adequate contractual obligations on vendors providing SGI tools to implement reasonable and appropriate technical measures outside the intermediary's platform architecture, and obtain sufficient indemnities.
Intermediaries that rely on a bare revision of terms of service, content removal only upon receiving takedown orders, or blocking based on mere keywords may fall short of the 'reasonable and appropriate technical measures' standard imposed by the Amendments and elaborated in the FAQs.
The Amendments also provide that intermediaries deploying reasonable and appropriate technical measures to proactively remove or disable access to unlawful SGI would still preserve their safe harbour under the IT Act.21 That being said, making an intermediary platform the arbiter of whether any content is lawful or unlawful becomes a delicate balancing act between proactive monitoring through technological means and not restricting lawful content or curbing the freedom of speech and expression of individuals. Failure to navigate this balance effectively could result in enabling unlawful activities or chilling free speech.
3.3. Labelling and Provenance of SGI
The Amendments introduce a dual obligation of labelling and provenance on the intermediary with respect to lawful SGI.22 The first limb requires that all lawful SGI be labelled in a prominent manner, in either a visual or auditory medium, enabling immediate identification of the content as SGI. The second limb requires that such SGI be embedded with permanent metadata, or other appropriate technical provenance mechanisms, to the extent technically feasible, including a unique identifier to identify the source tool used to generate the SGI. It appears from a plain reading of the relevant provision that if it is not possible to embed permanent metadata, other possible provenance mechanisms should be explored.
The prominent labelling requirement seems to dictate that the label should be practically discoverable at the moment of consumption of SGI content. In line with clarifications provided in the FAQs23, a visible watermark as a visual part of an image, a badge or icon juxtaposed on visual content, or an unambiguous prefixed audio announcement in the case of audio content identifying it as SGI should satisfy this standard. Any design mechanics that bury such labelling behind a secondary interface layer such as a 'more info' tab, a collapsed menu, or a description field accessible only after user interaction may not satisfy the prominent visibility standard. A similar requirement can also be seen in developing law in the European Union ("EU") wherein the European Commission AI Office's commissioned first draft Code of Practice on Transparency of AI-Generated Content proposed under the EU AI Act24, suggests a common EU-wide icon which is visible 'at the time of the first exposure'.25
The provenance requirement prescribes embedding metadata or any other mechanism that is permanent in nature. Further, the Amendments prohibit an intermediary from enabling tampering with or removal of such metadata.26 While it may be possible for a user to independently employ various methods to strip the metadata of any SGI content, the restriction appears to be on the intermediary not to facilitate or provide any options for the user to do so. Some mechanisms to maintain the permanence of metadata, including cryptographic signatures affixed to SGI content or structurally embedding metadata in the file type itself, may be explored as possible safeguards.
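The idea of tamper-evident provenance metadata can be shown with a simplified sketch: bind a provenance record to the content's hash and sign the record, so that any edit to either invalidates the signature. The `embed_provenance` helper and the shared-key HMAC scheme are assumptions made for illustration only; real provenance standards use certificate-based signatures and embed the record inside the media file itself.

```python
import hashlib
import hmac
import json

# Hypothetical platform signing key; a real deployment would use
# asymmetric keys with a verifiable certificate chain.
SIGNING_KEY = b"platform-provenance-key"

def embed_provenance(content: bytes, tool_id: str) -> dict:
    """Build a signed provenance record bound to the content hash."""
    record = {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator_tool": tool_id,  # unique identifier of the source tool
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_provenance(content: bytes, record: dict) -> bool:
    """Return False if the content or the metadata was altered."""
    claimed = {k: v for k, v in record.items() if k != "signature"}
    if hashlib.sha256(content).hexdigest() != claimed.get("content_sha256"):
        return False
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

The design choice here is that permanence is enforced cryptographically rather than by file-format tricks: stripping or editing the metadata does not go unnoticed, because verification fails.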
Currently, one of the technical standards that enables labelling and embedding of cryptographically signed provenance metadata on SGI is the Coalition for Content Provenance and Authenticity ("C2PA") standard, co-founded by Adobe, Microsoft, Truepic, Arm, Intel and the BBC.27 Displayed as a label known as 'Content Credentials' or 'cr' on SGI, it can include details about the creation of the SGI, modifications to the SGI over time, and the current status of the SGI, and can be independently verified.28 The C2PA standard is currently adopted by OpenAI's ChatGPT for image generation,29 Adobe for content curated by its offerings,30 and Microsoft's Azure OpenAI image generation tool.31 Separately, while it may not be suitable for prominent labelling, Google DeepMind has introduced an invisible digital watermarking tool called SynthID for all modes of SGI, that is, images, video and audio, with limited verification mechanisms.32
However, these technical standards are not foolproof either. As a further safeguard, the Amendments require an intermediary to take 'expeditious and appropriate action' if it becomes aware, either on its own accord or through a user complaint or a court or government order, of any SGI that is not appropriately labelled or does not have embedded provenance information. Such expeditious and appropriate action may include immediate content removal, suspension or termination of the user's account, disclosure of the identity of the infringing user to the victim or a person acting on the victim's behalf in case of harmful content, and mandatory reporting to the appropriate law enforcement authority where the violation constitutes a mandatorily reportable offence under applicable law.33
IMPACT ON SSMIS
In addition to and independent of all the due diligence obligations on intermediaries analysed above, Significant Social Media Intermediaries ("SSMIs")34 are subject to heightened sequential obligations that apply prior to the display, upload or publication of any information on their platforms.35 Under the Amendments, SSMIs must first require users to declare, as a precondition to posting content, whether the information constitutes SGI.36 Second, once such declarations are made, SSMIs are required to deploy appropriate technical measures, including automated tools or other suitable mechanisms, to verify the accuracy of such declarations, having regard to the nature, format, and source of the information. The explanation to this rule states that SSMIs must verify the correctness of user declarations and ensure that no SGI is published without such declaration or label. The third obligation completing the sequence is that where content is confirmed to be SGI, the SSMI must ensure it is clearly and prominently displayed with an appropriate label or notice. As elaborated in the preceding section, the labelling of SGI should be prominently visible and perceivable. Failure by an SSMI to meet these obligations may be deemed a failure to exercise due diligence, and the SSMI may not be able to avail itself of safe harbour protection.
4.1. Challenges to Verify SGI Declarations
Active verification of user declarations that the content being published is SGI remains a significant challenge. SSMIs are pressed to adjudicate, even if by technical means, whether any content constitutes SGI. This exercise may be especially tricky in borderline cases where the determination is neither straightforward nor consistent across automated systems. For example, the use of filters by influencers for entertainment purposes may either constitute SGI or may amount merely to AI-assisted enhancement of photos or videos. Such cases are highly context dependent, requiring the SSMI to make a determination. This obligation raises the same balancing act predicament put forward at para 3.2 above. That being said, the Amendments do soften the burden by requiring only reasonable and proportionate technical measures, acknowledging that no verification mechanism may be perfect.
While it may be difficult to verify and affirm negative SGI declarations by users even through common detection methods (perceptual and contextual cues, unnatural anomalies, incoherent elements in the content, etc.), SSMIs can engage third-party services offering deepfake detection at an enterprise level, such as Intel's FakeCatcher37 or Reality Defender.38
4.2. Declare - Yes. No. Maybe?
Given the inherent difficulty of binary SGI classification and the proliferation of realistic SGI over the internet that the user themselves may not be sure of, SSMIs may also explore allowing their users more than 2 (two) choices during SGI declaration. The user declaration interface may instead provide for a 3 (three) option system i.e., to declare either: (a) "Yes, this is AI-generated content"; or (b) "No, this is not AI-generated content"; or (c) "I am not sure". While SSMIs are obligated to verify any declaration by the user, the degree of scrutiny to verify the declaration may vary based on whether the user declares 'Yes', 'No' or 'Maybe'. For instance, a 'Maybe' declaration may be reviewed with slightly higher scrutiny than a 'Yes' declaration. This third option may avoid the risk of permitting publication first and then conducting post publication review. It may also avoid the risk of not identifying actual SGI which a user may have inadvertently declared as not SGI.
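A minimal sketch of how such a three-option declaration could route content into differing verification tiers follows. The tier names and the `route_declaration` helper are hypothetical; the ordering reflects the reasoning above, namely that a 'No' declaration warrants the heaviest scrutiny because a false negative lets unlabelled SGI through, while an 'Unsure' declaration is reviewed more closely than a 'Yes'.

```python
from enum import Enum

class Declaration(Enum):
    YES = "yes"        # "Yes, this is AI-generated content"
    NO = "no"          # "No, this is not AI-generated content"
    UNSURE = "unsure"  # "I am not sure"

# Hypothetical scrutiny tiers, in increasing order of verification effort.
SCRUTINY = {
    Declaration.YES: "label-and-spot-check",
    Declaration.UNSURE: "automated-detection",
    Declaration.NO: "automated-detection-plus-sampled-human-review",
}

def route_declaration(declared: Declaration) -> str:
    """Map a user's SGI declaration to a verification tier."""
    return SCRUTINY[declared]
```

The design point is that the declaration does not replace verification; it only calibrates how much verification each upload receives before publication.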
4.3. File Type Restrictions
As a structural risk-mitigation measure, SSMIs can also restrict content uploads to file formats that natively support embedded metadata (such as Content Credentials) by default. While this may not be the most practical approach for large SSMIs with petabytes of content being managed daily, it may be appropriate for image-heavy or visual content platforms such as Canva.
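As a rough sketch of such an upload gate, a platform could maintain an allowlist of container formats capable of carrying embedded provenance metadata and reject everything else. The allowlist below is an assumption for illustration; which formats a given platform can actually support with embedded credentials would need independent verification.

```python
from pathlib import Path

# Hypothetical allowlist of formats assumed to support embedded
# provenance metadata (e.g. Content Credentials).
METADATA_CAPABLE = {".jpg", ".jpeg", ".png", ".webp", ".mp4", ".wav"}

def upload_permitted(filename: str) -> bool:
    """Reject uploads whose container cannot carry embedded provenance."""
    return Path(filename).suffix.lower() in METADATA_CAPABLE
```

This is a blunt instrument, which is why the text notes it may suit visual-content platforms better than general-purpose SSMIs handling petabytes of heterogeneous content.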
4.4. From Endeavouring to Ensuring Technical Measures
Prior to the Amendments, the IT Rules required SSMIs to 'endeavour to deploy technology-based measures, including automated tools or other mechanisms' to proactively identify content depicting rape, child sexual abuse, or any content identical to previously taken down content. The Amendments, as further detailed in the FAQs, now require SSMIs to move to 'a clearer, mandatory and proportionate obligation' to deploy such measures, with no recommendatory language.39 This obligation on SSMIs relates to the removal of any repetitive sexually explicit content or any other repetitively occurring content that has previously been taken down. The standard for deploying technology-based measures, including automated tools, may be viewed in a similar light to the approach taken by non-SSMI intermediaries when implementing appropriate technical measures (discussed above at para 3.2). However, unlike the obligation on non-SSMI intermediaries, the obligation on SSMIs to implement appropriate measures is absolute and without any reasonableness qualifier.
CHANGES IN APPLICABLE TIMELINES
In addition to the specific changes aimed at regulating SGI on intermediaries' platforms, MeitY has overhauled several compliance timelines prescribed in the IT Rules. These revisions are generally applicable to all intermediaries, irrespective of whether they are involved with any SGI or not.
At the outset, there is a significant change in the timeline that intermediaries must adhere to in cases of specific content takedown or access disablement directions (such as in cases of defamatory content, the interest of the sovereignty and integrity of India, any unlawful content, etc.). This timeline, within which such directions must be actioned by the relevant intermediaries, has been brought down from 36 (thirty-six) hours to 3 (three) hours from receipt of actual knowledge (as prescribed in the IT Rules) of the violating content.40 In cases of grievances pertaining to a request for removal of content or a communication link, such as for content harmful to children or obscene content, intermediaries are now required to resolve such grievances within 36 (thirty-six) hours of reporting, as opposed to the earlier 72 (seventy-two) hour timeline.41 In relation to complaints pertaining to the removal or disabling of specified categories of content which prima facie depict, inter alia, nudity, sexual content, or artificially morphed or impersonation content, measures to remove such content must be taken within 2 (two) hours from receipt of the complaint, as opposed to the earlier timeline of 24 (twenty-four) hours.42 The general grievance redressal timeline prescribed under the IT Rules has also been reduced, from 15 (fifteen) days to 7 (seven) days from receipt of the grievance.43
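For illustration, the revised timelines could be tracked internally with a simple deadline map. The category keys below are informal shorthand, not statutory terms, and the `action_deadline` helper is hypothetical.

```python
from datetime import datetime, timedelta

# Revised timelines under the Amendments, keyed by informal complaint
# category (illustrative shorthand, not statutory language).
SLA_HOURS = {
    "takedown_direction": 3,           # previously 36 hours
    "content_removal_grievance": 36,   # previously 72 hours
    "morphed_or_intimate_imagery": 2,  # previously 24 hours
    "general_grievance": 7 * 24,       # previously 15 days, now 7 days
}

def action_deadline(category: str, received_at: datetime) -> datetime:
    """The clock runs from receipt of actual knowledge / the complaint."""
    return received_at + timedelta(hours=SLA_HOURS[category])
```

A workflow system could alert compliance teams well before each computed deadline, which matters most for the 2 (two) and 3 (three) hour windows.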
The reduced timelines, for which the clock starts ticking upon receipt of information, will require a significant overhaul of internal processes by intermediaries. While large technology companies may already have the resources and capabilities to revamp their existing response and action tools and mechanisms and make them more robust, smaller players in the industry may find it difficult to action the compliance requirements within the tight enforcement timeline.
It is also relevant to note that the requirement of informing users about the intermediary's policies, earlier on a yearly basis, has now been revised to a periodicity of 3 (three) months.44 Intermediaries can practically implement this by displaying the relevant policies when their platform is accessed, by displaying relevant pop-ups on the platform with the required information, and/or by sending emails with the required information every 3 (three) months. The requirement under the rule is to ensure that users are informed of such policies, not to obtain consent to such policies every 3 (three) months. Accordingly, as long as intermediaries can demonstrate that they have informed users in some manner, their responsibility under this compliance requirement can be construed as discharged.
WHAT'S NEXT?
The Amendments signify not just additional layers of compliance for intermediaries but set the ground for a fundamental revamping of how their processes must operate. The new framework intends to tackle every stage of content generation, modification and dissemination, which could have any intermediary involvement. Intermediaries would now be required to contemplate structural redesigning of all aspects of their operations, including deployment of robust and prompt technical measures, and revisiting their policies and practices to ensure greater accountability for user behaviour. The revisions in the IT Rules and the consequences thereof are indicative of the approach being taken in the Indian regulatory landscape for governance of AI, with the primary focus being on ensuring responsible use of AI, specifically being driven by intermediary platforms.
Footnotes
1. The Amendments are available at https://www.meity.gov.in/static/uploads/2026/02/f55fe52418b03f58b0669f6a8bc03b6d.pdf
2. The Draft Amendments are available at https://www.meity.gov.in/static/uploads/2025/10/9de47fb06522b9e40a61e4731bc7de51.pdf
3. The FAQs are available at https://www.meity.gov.in/static/uploads/2025/10/065b6deb585441b5ccdf8be42502a49c.pdf
4. Advisory No. 2(4)/2023-CyberLaws–2 on 'Due diligence by Intermediaries and Grievance Reporting Mechanism, under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021' issued by MeitY on December 26, 2023 and Advisory No. eNo.2(4)/2023-CyberLaws-3 on 'Due diligence by Intermediaries / Platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021' issued by MeitY on March 15, 2024.
5. The explanatory note to the Draft Amendments is available at https://www.meity.gov.in/static/uploads/2025/10/8e40cdd134cd92dd783a37556428c370.pdf
6. Rule 2(1)(ca) read with Rule 2(1)(wa) of the IT Rules.
7. Question 8 of the FAQs.
8. Rule 2(1)(wa) of the IT Rules.
9. Rule 2(1)(wa) of the IT Rules.
10. Under the Information Technology Act, Section 2(1)(w) defines 'intermediary' with respect to any particular electronic records, to mean 'any person who on behalf of another person receives, stores or transmits that record or provides any service with respect to that record'.
11. Rule 3(3)(a)(i) of the IT Rules.
12. Rule 3(3)(a)(i) of the IT Rules.
13. Advisory No. 2(4)/2023-CyberLaws-2 on 'Due diligence by Intermediaries and Grievance Reporting Mechanism, under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021' issued by MeitY on December 26, 2023 and Advisory No. 2(4)/2023-CyberLaws-3 on 'Due diligence by Intermediaries / Platforms under the Information Technology Act, 2000 and Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021' issued by MeitY on March 15, 2024.
14. Section 2(1)(t) of the IT Act.
15. Section 12 of the Explosives Act, 1884.
16. Section 6 of the Explosive Substances Act, 1908.
17. Rule 3(1)(ca) of the IT Rules.
18. Azdhaan, 'MeitY Scientist Flags 'Virality' as Core Risk in Deepfakes Regulation', dated February 18, 2026, available at https://www.medianama.com/2026/02/223-india-deepfake-regulation-virality-control/
19. Article 14 of the EU AI Act, available at https://artificialintelligenceact.eu/article/14/
20. Principle 1.2(b) of the OECD AI Principles, available at https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
21. Rule 2(1B) of the IT Rules.
22. Rule 3(3)(a)(ii) of the IT Rules.
23. Question 22 of the FAQs.
24. European Commission's 'First Draft Code of Practice on Transparency of AI-Generated Content', available at https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content
25. Measure 1.2 of the European Commission's 'First Draft Code of Practice on Transparency of AI-Generated Content', available at https://digital-strategy.ec.europa.eu/en/library/first-draft-code-practice-transparency-ai-generated-content
26. Rule 3(3)(b) of the IT Rules.
27. Andy Parsons, 'Adobe co-founds the Coalition for Content Provenance and Authenticity (C2PA) standards organization', dated February 22, 2021, available at https://blog.adobe.com/en/publish/2021/02/22/adobe-continues-content-authenticity-commitment-founder-c2pa-standards-org
28. C2PA FAQs, available at https://c2pa.org/wp-content/uploads/sites/33/2025/10/content_credentials_wp_0925.pdf
29. OpenAI, 'C2PA in ChatGPT Images', available at https://help.openai.com/en/articles/8912793-c2pa-in-chatgpt-images
30. Adobe Creative Cloud, 'Content Credentials overview', available at https://helpx.adobe.com/in/creative-cloud/apps/adobe-content-authenticity/content-credentials/overview.html
31. Azure, 'Content Credentials', available at https://learn.microsoft.com/en-us/azure/ai-foundry/openai/concepts/content-credentials?view=foundry-classic
32. Google DeepMind, 'A tool to watermark and identify content generated through AI', available at https://deepmind.google/models/synthid/
33. Rule 3(1)(cb) read with rule 3(1)(ca)(ii) of the IT Rules.
34. SSMIs are those intermediaries who primarily or solely enable online interaction between 2 (two) or more users and allow them to create, upload, share, disseminate, modify or access information using their services, and have 50 (fifty) lakh registered users in India.
35. Rule 4(1A) of the IT Rules.
36. Rule 4(1A) of the IT Rules.
37. Intel, 'Trusted Media: Real-time FakeCatcher for Deepfake Detection', available at https://www.intel.com/content/www/us/en/research/trusted-media-deepfake-detection.html
38. Reality Defender, 'Enterprise-Grade Detection in Real Time', available at https://www.realitydefender.com/platform/technology
39. Rule 4(1A) of the IT Rules read with Question 27 of the FAQs.
40. Rule 3(1)(d) of the IT Rules. As regards this obligation, the requirements for actual knowledge, and what constitutes actual knowledge, have also been modified slightly: where directions are issued by an appropriate government authority, the direction must be in writing, and where directions are issued by the police administration, the specific designations of police officers who can issue such orders have been expanded.
41. Rule 3(2)(a)(i) of the IT Rules.
42. Rule 3(2)(b) of the IT Rules.
43. Rule 3(2)(a)(i) of the IT Rules.
44. Rule 3(1)(c) of the IT Rules.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.