The copyright infringement lawsuit filed by ANI Media against OpenAI in the Delhi High Court last November reignited discussions around the regulatory vacuum surrounding generative AI in India. Although artificial intelligence (AI) as a concept has been under development for more than seven decades, the rapid mainstream adoption of generative tools like ChatGPT has underscored the urgent need for a clear, enforceable regulatory framework. In particular, it has forced long-dormant concerns surrounding the legal status of AI-generated content, the integrity of training data, and intellectual property (IP) compliance into the spotlight.
For Standard Essential Patent (SEP) holders, licensors and licensees, technology developers, and legal practitioners, this litigation is a timely reminder of the need to understand and anticipate India's regulatory stance on AI. While generative AI has accelerated innovation across sectors, it also introduces complex questions around ownership, licensing, infringement, and data governance: areas that still lack a clear legal roadmap in India.
This sense of urgency is not unique to India. Across the globe, governments are scrambling to keep pace with the exponential growth of AI technologies, particularly those that generate content, make decisions, or automate human functions. While there is broad agreement on the need for regulation to address risks such as misinformation, bias, surveillance abuse, and IP violations, the strategies and philosophies underlying regulatory responses differ widely by country.
India's Evolving AI Regulatory Philosophy
At the 2024 Global India AI Summit in New Delhi, the Ministry of Electronics and Information Technology (MeitY) stressed the importance of AI in achieving "Viksit Bharat" by 2047. India aims to be a global hub for AI innovation and development and is working toward a regulatory environment that is both pro-innovation and protective of data security and privacy.
The Government has so far adopted a "light-touch" regulatory approach that emphasizes innovation, indigenous development, and ethical use without stifling progress. This is evident in initiatives like:
- IndiaAI Safety Institute (2025): established to set AI safety standards in collaboration with academic institutions and industry partners.
- IndiaAI (2020): laid the groundwork for an ecosystem that promotes indigenous AI development, access to high-quality datasets, industry-academia collaboration, and ethical AI practices.
- National Artificial Intelligence Resource Portal (NAIRP) (2019): proposed as a central repository for AI/ML resources and research.
- National Strategy for Artificial Intelligence (2018): released by NITI Aayog and followed by the Principles for Responsible AI (2021), which align with international guidelines by prioritising transparency, accountability, and fairness.
In March 2024, MeitY issued an advisory requiring intermediaries and platforms to label under-tested or "unreliable" AI models to warn users of potential inaccuracies and to obtain prior government approval before deployment. The prior-approval requirement was later dropped after pushback from startups and the tech industry, though social media intermediaries were still required to use consent pop-ups to warn users about unreliable AI content, and measures to detect and label deepfakes and misinformation remained in place.
The episode highlighted how difficult it is to regulate AI across its three constituent layers: inputs (such as training data, which may include copyrighted material), outputs (AI-generated decisions or content), and processes (such as model architecture and algorithms). This tripartite structure is especially consequential for sectors involving FRAND licensing, software patents, automated R&D, and algorithmic innovation.
A notable complementary step toward fostering innovation in the AI and software space is the release of the Draft Guidelines for Examination of Computer-Related Inventions (CRI) 2.0 by the Indian Patent Office in 2025. The proposed guidelines signal a shift toward a more balanced and innovation-friendly interpretation of patentability in computer-related and AI-driven inventions. Unlike earlier versions that were criticized for their restrictive stance, Draft CRI 2.0 emphasizes a technology-centric approach rather than a narrow software exclusion. It encourages examiners to assess patent applications based on technical contribution, problem-solution approaches, and real-world utility, which aligns well with international best practices. For AI developers, R&D institutions, and SEP stakeholders, this development underscores the government's intent to create a supportive IP environment that aligns with India's ambitions to be a global AI leader.
Global AI Regulatory Models: A Comparative Overview
i. European Union: The Gold Standard
The EU AI Act (AIA), formally enacted in 2024, is currently the most comprehensive legal framework for AI regulation worldwide. It categorizes AI applications based on risk:
- Prohibited AI: Systems involving social scoring or manipulation.
- High-risk AI: Used in sectors like healthcare, law enforcement, or critical infrastructure. These face strict compliance obligations including transparency, documentation, human oversight, and risk management.
- General-purpose AI: Developers must disclose training data, energy consumption, and testing results.
- Minimal-risk AI: Encouraged with minimal oversight.
The Act places most regulatory responsibilities on AI providers and vendors, ensuring that obligations are enforced upstream in the value chain.
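For teams building compliance tooling around the Act, this tiered structure naturally maps onto a simple classification model. The following Python sketch is purely illustrative: the tier names mirror the Act's categories as summarised above, but the example use cases and their assignments are assumptions made for demonstration, not legal conclusions.

```python
from enum import Enum

class RiskTier(Enum):
    """Simplified risk tiers, loosely modelled on the EU AI Act's categories."""
    PROHIBITED = "prohibited"            # e.g. social scoring, manipulation
    HIGH_RISK = "high-risk"              # e.g. healthcare, law enforcement
    GENERAL_PURPOSE = "general-purpose"  # foundation models with disclosure duties
    MINIMAL_RISK = "minimal-risk"        # everything else, lightly overseen

# Illustrative, non-exhaustive mapping of example use cases to tiers.
# The use-case names and assignments are assumptions for demonstration,
# not a reading of how the Act would classify any real system.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.PROHIBITED,
    "medical_diagnosis": RiskTier.HIGH_RISK,
    "critical_infrastructure_control": RiskTier.HIGH_RISK,
    "foundation_model": RiskTier.GENERAL_PURPOSE,
    "spam_filter": RiskTier.MINIMAL_RISK,
}

def classify(use_case: str) -> RiskTier:
    """Return the assumed tier; unmapped use cases need individual legal review."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"no assumed tier for {use_case!r}; requires legal assessment")

if __name__ == "__main__":
    for case in ("social_scoring", "medical_diagnosis", "spam_filter"):
        print(f"{case}: {classify(case).value}")
```

In practice, classification under the Act turns on detailed legal criteria and context, so a lookup table like this could serve only as a first-pass triage step within a larger compliance workflow.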
ii. United States: A Sectoral and Permissive Approach
Unlike the EU, the United States lacks a federal AI law. Instead, it relies on executive orders, agency guidance, and sector-specific regulations. While President Biden's Executive Order called for secure and trustworthy AI development, the overall U.S. strategy remains market-driven and innovation-focused, particularly in view of President Trump's permissive approach.
Various bills covering AI safety, data governance, and accountability are under consideration in Congress, but none has yet passed into law. Agencies like the Federal Trade Commission (FTC) and the National Institute of Standards and Technology (NIST) play advisory roles, but enforcement remains fragmented.
iii. China: Control Through Comprehensive Oversight
China has been more aggressive in regulating generative AI, viewing it as both an opportunity and a potential threat. In 2023, the Interim Measures for the Management of Generative Artificial Intelligence Services came into force. This multi-agency regulation mandates:
- Licensing of AI providers.
- Censorship and alignment with "core socialist values."
- Mandatory disclosure of AI-generated content.
China's regulation reflects its broader governance philosophy—centralized control and risk containment, particularly for politically sensitive or large-scale technologies.
iv. Australia: Voluntary and Sector-Led Regulation
Australia does not have AI-specific laws but relies on voluntary standards and existing legal frameworks like privacy, competition, and consumer protection laws. The Voluntary AI Safety Standard, developed with stakeholder consultation, guides ethical and responsible use but lacks enforcement teeth.
Sector-specific regulators like the Australian Competition and Consumer Commission (ACCC) and Office of the Australian Information Commissioner (OAIC) are expected to play an increasing role.
The Road Ahead for India
As global regulation tightens, India stands at a crucial crossroads. With its digital economy projected to surpass $4 trillion, the stakes of getting AI regulation right are enormous. India's regulatory strategy must consider:
- Data protection: Aligning with the Digital Personal Data Protection Act (2023).
- Copyright and IP law: Especially relevant in light of the ANI-OpenAI litigation as well as threats posed by deepfakes.
- Employment and labour impact: With rising automation in services and manufacturing.
- Financial services and fintech: Ensuring fairness, transparency, and security.
India's future AI regulation must be modular, agile, and inclusive, incorporating feedback from academia, industry, civil society, and government. Most importantly, it must put ethical principles at its centre while enabling global competitiveness.
Conclusion
AI regulation today mirrors the political, cultural, and economic priorities of each nation. From the EU's rights-based approach to China's security-first stance, from the U.S.'s innovation permissiveness to Australia's sectoral guidance, there is no universal model yet.
As AI adoption surges across sectors like pharmaceuticals, telecom, automotive, and fintech, AI regulation is poised to become the next major domain of legal development in India—on par with data protection, competition law, and intellectual property law reforms.
For licensors, licensees, IP attorneys, and R&D leaders, the imperative now is to engage with policymakers, anticipate legislative trends, and proactively adapt compliance strategies. India may still be defining its regulatory roadmap, but one thing is clear: those who shape it today will dominate its legal and technological future tomorrow.
This article was first published by LexWitness Magazine (June-July 2025 issue).
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.