ARTICLE
28 January 2026

Ontario's IPC And OHRC Issue Principles For Responsible AI Use

Torys LLP


As Canadian regulators work to align institutional AI use with key Canadian values, the Information and Privacy Commissioner of Ontario (IPC) and the Ontario Human Rights Commission (OHRC) are the latest to communicate their expectations for organizations in their jointly developed Principles for the Responsible Use of Artificial Intelligence (the Principles)1. While the Principles do not necessarily place additional legal obligations on organizations, the IPC and OHRC say they will ground the regulators' assessment of whether organizations' adoption of AI systems is consistent with privacy and human rights obligations. Further, institutions are "strongly encouraged" to adopt the Principles to ensure compliance with Ontario's human rights and privacy laws.

What you need to know

  • Adherence to the Principles requires organizations to: (a) ensure that AI systems are valid and reliable, safe, privacy protective, human rights-affirming, and transparent; and (b) implement human accountability and oversight.
  • Notable expectations include:
    • Independent testing standards and evidence that AI systems meet intended use requirements.
    • Detailed, plain-language documentation of AI systems, including any assessments, data sources, intended purposes, uses and potential impacts of their outputs.
    • AI governance structures that assign clear accountability and ensure a human is in the loop.
  • These expectations are closely aligned with existing domestic and international guidance on responsible AI, and re-emphasize the need for organizations to implement and update AI governance and use policies.

Six guiding principles for responsible AI use

The IPC and OHRC indicate that the six following principles are of equal importance, and should be considered throughout the lifecycle of an AI system, from design and modelling to deployment and decommissioning.

  1. Validity and reliability. AI systems must produce valid, reliable and accurate outputs. “Validity” requires that AI systems meet independent testing standards, and that there is objective evidence that they are meeting all requirements for their intended use. “Reliability” requires that AI systems consistently perform as intended, both in the environments in which they are intended to be used and in unexpected environments. AI systems should be tested for validity and reliability before being deployed and at regular intervals throughout their lifecycles.
  2. Safety. AI must be developed, used and governed in a manner that protects human rights, including the rights to privacy and non-discrimination. AI systems must incorporate robust cybersecurity safeguards, and should be regularly monitored to ensure they are not susceptible to misuse. A comprehensive safety assessment should be conducted each time an AI technology is used for a new purpose. Unsafe AI systems should be turned off or decommissioned, and any negative impacts to groups or individuals should be reviewed.
  3. Privacy protection. AI systems should be designed with privacy in mind. At the outset, developers, providers and users of AI must implement measures to protect and minimize the use of personal information. Anyone collecting, processing, retaining or using data to develop, train or operate an AI system must have clear, lawful authority to do so. Individuals should be informed when their information is used and, where appropriate, be able to access personal information used or generated by AI.
  4. Affirmation of human rights. Protections for human rights must be incorporated into the design of AI systems and procedures. Institutions using AI must mitigate discrimination and ensure compliance with the Canadian Charter of Rights and Freedoms.
  5. Transparency. Transparency requires organizations to explain how AI systems are being used and how they work. This requires plain-language documentation of AI systems throughout their lifecycles, including (a) any privacy or algorithmic impact assessments; (b) the sources of any personal data used to train or operate systems; (c) their intended purposes; (d) how they are used; and (e) how their outputs could impact individuals or communities. It also requires organizations to notify individuals when they are interacting with an AI system, or when information provided to them is AI-generated. Organizations must be able to collect key information about the system, including information about the model and its intended use, training and validation data, and monitoring measures.
  6. Accountability. Organizations should implement an internal governance structure that clearly defines roles, responsibilities, and oversight procedures, and ensure that there is a human in the loop. Institutions should also document their decisions about design and application choices and be prepared to explain how an AI system works to an independent oversight body upon request. Whistleblowing protections should be in place to ensure the safe and timely reporting of instances of non-compliant AI systems.

Implications for organizations

Organizations should consider any need to implement or update AI governance policies and practices to ensure alignment with the Principles. While the Principles apply primarily to the broader public sector, businesses that service or partner with that sector, particularly those leveraging AI in doing so, should be aware of these expectations and their potential to be passed on to them contractually.

Footnote

1. Information and Privacy Commissioner of Ontario and Ontario Human Rights Commission, “Principles for the Responsible Use of Artificial Intelligence”, January 2026.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
