Artificial intelligence (AI) is rapidly transforming the insurance industry, prompting Canadian insurance brokers to evaluate the legal challenges associated with its adoption and ongoing use.
AI vendors: The need for thorough due diligence
Many insurance brokerages depend on third-party AI platforms rather than developing their own systems, making due diligence essential before entering into vendor agreements. Brokerages should verify that the vendor's AI employs unbiased data, adheres to ethical standards, and manages client information securely. Evaluations should also cover the vendor's data protection practices, its history of cyber incidents, and the transparency of the AI's decision-making process.
AI use policy: The importance of maintaining human oversight
Brokerages should also establish an internal governance framework, including revising existing policies to address AI-specific risks. An effective AI use policy should clarify roles and responsibilities, detail implementation and oversight procedures, enforce consequences for non-compliance, and, most importantly, require that professionals review all AI outputs before sharing them with clients, thereby ensuring a "human-in-the-loop" approach.
Brokers or vendors: Who is responsible for biased outcomes?
Responsibility for biased or erroneous AI-driven outcomes remains unsettled in Canadian law, requiring a case-by-case analysis. Current guidance suggests, however, that brokers remain ultimately accountable for reviewing and overseeing AI outputs, while vendors may bear liability if they fail to meet their contractual obligations to brokerages.
Regardless of how AI is employed, it ultimately cannot replace a broker's professional judgment or conduct, and regulators will continue to hold licensees to the same standards. For example, guidance on the use of AI from the Registered Insurance Brokers of Ontario (RIBO) reminds broker licensees that, when using AI, they must uphold the standards outlined in the Fair Treatment of Customers guidance and the Code of Conduct Handbook.
Such principles include requirements to:
- Be competent;
- Act with integrity and in the client's best interests;
- Disclose any conflicts of interest;
- Protect privacy and consumer data; and
- Maintain client confidentiality.
Transparency and accountability: Practical strategies for reducing liability
To minimize liability when deploying AI tools, brokers should clearly inform clients about how AI is involved in the decision-making process and give them the option to interact with a human advisor if they prefer. Brokers must also accept responsibility for AI-generated outcomes, as legal accountability cannot simply be shifted onto the technology itself.
Education and training: An ongoing commitment
Brokers should invest in ongoing education and training so they can recognize how AI influences outcomes and the risks it introduces. Understanding privacy implications, particularly when using generative tools such as ChatGPT, is vital to avoid unintended breaches and departures from the standard of care.
Privacy law: Staying compliant in an evolving landscape
With no single Canadian AI law in force, brokers must comply with federal and provincial privacy statutes. Under the federal Personal Information Protection and Electronic Documents Act (PIPEDA), brokerages remain responsible for client data even when it is processed by third parties, and must uphold key privacy principles such as accountability, consent, and data protection.
Leading the way: Brokers shaping the future of AI
Like other professionals, brokers stand to benefit from proactively incorporating AI into their practices, positioning themselves as industry leaders while ensuring their expertise remains central to client interactions and decision-making. AI governance remains a key aspect of BLG's Artificial Intelligence practice.
The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.