On October 6, 2025, the Competition Commission of India ("CCI") released a market study report on Artificial Intelligence and Competition ("Report"). The Report represents the Indian antitrust regulator's first structured and comprehensive effort to study how AI interacts with markets, innovation, and competition law. The Report explores the landscape of the AI industry and identifies potential competition law concerns based on primary surveys, stakeholder consultations, and global best practices. It proposes policy, regulatory, and institutional responses to ensure a fair and innovative AI ecosystem.
The Report characterises AI markets as a layered ecosystem, or 'AI stack', comprising upstream inputs, computational infrastructure, AI models, downstream applications, and interfaces with end users. Control over data, compute, and models by upstream players can create structural barriers, while downstream applications translate these advantages into market power through self-preferencing, exclusionary practices, or tacit coordination. Vertically integrated firms operating across multiple layers amplify competition risks. By conceptualising AI in terms of this stack, the Report underscores the need for a holistic, ecosystem-level approach to monitoring and enforcement.
The main aspects of the study are as follows:
- Identification of distinct upstream and downstream layers while also drawing attention to the growing prevalence of vertically integrated players operating across both levels.
- At the upstream level, entities that supply critical inputs necessary for the development and deployment of AI systems are identified. These include firms that control large and diverse datasets, providers of computing infrastructure and cloud services, and developers of proprietary AI models and algorithms. The Report highlights data as a particularly significant competitive asset, noting that access to large volumes of high-quality data may create self-reinforcing advantages and raise barriers to entry. From a competition perspective, the Report flags the risk that control over such inputs may enable upstream players to engage in exclusionary conduct, including preferential access, discriminatory licensing terms, or refusal to deal.
- Downstream markets comprise entities that deploy AI systems to offer goods or services to consumers or businesses. The Report identifies digital platforms such as search engines, e-commerce marketplaces, social media platforms, and online advertising intermediaries as prominent downstream users of AI. In addition, sector-specific application providers in areas such as finance, healthcare, logistics, and advertising increasingly rely on AI to compete on efficiency, accuracy, and personalisation.
A key concern that emerges from the Report is the role of vertically integrated entities which operate simultaneously in both upstream and downstream markets. Such entities may combine control over data, computing resources, and AI models with the provision of consumer-facing services. This raises complex enforcement challenges under the Competition Act, 2002 ('Act'), particularly in relation to attribution of conduct and evidentiary standards.
One of the Report's key focus points is understanding the AI stack. While the layered approach highlights various competition concerns, the Report largely treats these layers independently, when in reality AI markets are substantially vertically integrated, with large entities controlling multiple layers of the stack. Although the Report acknowledges this concentration, it does not fully examine how dominance in one layer could be leveraged to exclude competitors in another. As a result, the AI stack remains a descriptive tool rather than a framework for enforcement.
Regarding merger control, the Report does not question whether traditional merger analysis is adequate for AI markets. This is notable given India's recent move towards deal-value thresholds. The absence of a detailed discussion on network effects or data-driven conglomerate power, or serial acquisitions of AI startups, reflects an incomplete approach to enforcement.
Though the Report does not advocate regulatory intervention at this stage for AI-driven markets, it adopts a calibrated approach that combines existing competition law enforcement with soft regulatory tools. Traditional enforcement may be limited in its effectiveness to combat fast-developing, opaque and concentrated AI markets.
Against this backdrop, the Report views self-regulation as a useful but insufficient mechanism. Voluntary measures such as internal governance frameworks, ethical AI principles, transparency commitments, and industry best practices may help mitigate risks in the early stages of AI adoption. Self-regulation rests on the assumption that the incentives of dominant market players are aligned with competitive outcomes. In AI-driven markets marked by strong network effects, data concentration, and economies of scale, incumbents often have commercial incentives to entrench their market position rather than preserve contestability. As a result, voluntary commitments relating to data sharing, interoperability, or non-discriminatory access may be selectively implemented, strategically diluted, or framed in ways that minimise their competitive impact.
Moreover, self-regulatory frameworks suffer from inherent asymmetries of information and enforcement. AI systems are complex, opaque, and often protected as proprietary assets, limiting the ability of external stakeholders, let alone competitors or consumers, to verify compliance with self-imposed standards.
- In the absence of binding disclosure obligations or independent audits, self-regulation risks becoming a signalling mechanism rather than a substantive constraint on anti-competitive conduct.
- Addressing these structural risks requires sector-wide visibility, regulatory oversight, and coordinated monitoring capabilities that voluntary frameworks are inherently unable to deliver.
- Self-regulation alone may be inadequate where strong economic incentives exist to foreclose rivals, entrench market power, or exploit informational asymmetries, particularly in markets characterised by data concentration and vertical integration.
- Self-regulation in AI-driven markets may, in practice, compel entities to disclose proprietary algorithms and datasets to regulatory authorities. This may raise concerns for the entities about protection of trade secrets and confidential information.
Effects-based analysis under Sections 3 and 4 is, in principle, capable of addressing most AI-related competition concerns, including exclusionary conduct, leveraging, and coordinated outcomes. The CCI can tackle both horizontal and vertical anti-competitive conduct under the Act by applying established legal principles to AI markets. Section 4, which prohibits abuse of dominance, is particularly relevant in cases where firms control critical inputs such as datasets, proprietary AI models, or high-performance computing infrastructure. Vertical integration in AI ecosystems may enable dominant players to engage in self-preferencing, foreclose rivals, or leverage upstream advantages into downstream markets, all conduct that falls squarely within the scope of Section 4.
Similarly, Sections 3(3) and 3(4) may have to be modified to address anti-competitive agreements involving AI, especially algorithmic coordination, where, for instance, coordination on pricing may occur without any human intervention, knowledge, or communication. Such instances may not even fall within the purview of Section 3 as it stands today and may appear never to have breached antitrust law at all.
Through market analyses, stakeholder consultations, and policy guidance, the Commission can proactively flag emerging risks, encourage transparency, and nudge firms toward responsible AI practices long before actual harm materialises. Such forward-looking measures complement traditional enforcement and allow for nuanced interventions that balance competition with innovation. Sectoral collaboration is another key tool. By working alongside regulators such as the RBI, TRAI, IRDAI, and DPB, the CCI can monitor AI deployment across sensitive sectors, ensuring that competition concerns are addressed alongside data protection, consumer welfare, and cybersecurity considerations.
In conclusion, India's existing framework provides a flexible, effects-based approach to regulating AI-related competition concerns. By combining enforcement under Sections 3 and 4 of the Competition Act, 2002 with market studies, advocacy, and sectoral coordination, the CCI can address risks such as exclusionary leveraging, algorithmic collusion, and vertical integration abuses without stifling innovation. Additionally, regulatory efforts should focus in parallel on developing detection tools that would alert regulators to instances of coordination in AI-driven markets. This would require regulatory authorities to adopt and deploy algorithms that automatically monitor market players and analyse, among other indicators, price changes in the market.
Another major concern in attempting to regulate such markets is that the regulating authorities' powers would be indirectly extended when investigating entities in AI-driven markets. Authorities would have access to documentation, training and testing datasets, and the algorithms used in developing AI systems in order to investigate whether an entity has violated the provisions of the Competition Act, which would simultaneously raise privacy concerns. In view of this, the biggest challenge that lies ahead is tackling the overlap of multiple regulations in the near future.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.