On January 27, FINRA released a discussion of agentic AI, describing how member firms are beginning to experiment with autonomous AI systems and identifying supervisory considerations associated with those early deployments.
FINRA noted that unlike traditional automation tools, AI agents may operate across multiple systems and data sources with varying levels of independence, raising questions about how existing supervisory and governance frameworks apply to tools that can act without continuous human input.
Based on its risk monitoring and engagement with member firms, FINRA identified several risk areas associated with the use of agentic AI. Key risks highlighted by FINRA include:
- Autonomy, scope, and authority risks. AI agents may initiate actions without meaningful human validation or act beyond their intended scope or user authority if boundaries and approval mechanisms are not clearly defined and enforced.
- Auditability and explainability challenges. Multi-step reasoning and decision-making processes can make agent behavior difficult to trace, explain, or reconstruct, complicating supervision, testing, and post-incident reviews.
- Data governance and confidentiality risks. Agents operating across systems and datasets may inadvertently access, store, expose, or misuse sensitive or proprietary information.
- Model design and domain-knowledge limitations. General-purpose agents may lack the specialized expertise needed for complex financial services tasks, and poorly designed objectives or reinforcement mechanisms may lead to outcomes misaligned with investor or market interests.
- Persistent generative AI risks. Bias, hallucinations, and privacy concerns remain present and may be amplified when AI systems operate with increased autonomy.
Putting It Into Practice: FINRA's observations reiterate that financial institutions remain responsible for supervising AI-driven activities under existing rules, even when tools operate with significant autonomy; FINRA has not introduced new, technology-specific requirements. Institutions considering agentic AI should evaluate whether their existing supervision, escalation, documentation, and data governance controls are sufficient for systems that can independently plan and act.