ARTICLE
11 February 2026

FINRA Highlights Supervisory Risks And Use Cases For Agentic AI In Financial Services

Sheppard, Mullin, Richter & Hampton LLP


On January 27, FINRA released a discussion of agentic AI, describing how member firms are beginning to experiment with autonomous AI systems and identifying supervisory considerations associated with those early deployments.

FINRA noted that unlike traditional automation tools, AI agents may operate across multiple systems and data sources with varying levels of independence, raising questions about how existing supervisory and governance frameworks apply to tools that can act without continuous human input.

Based on its risk monitoring and engagement with member firms, FINRA identified several risk areas associated with the use of agentic AI. Key risks highlighted by FINRA include:

  • Autonomy, scope, and authority risks. AI agents may initiate actions without meaningful human validation or act beyond their intended scope or user authority if boundaries and approval mechanisms are not clearly defined and enforced.
  • Auditability and explainability challenges. Multi-step reasoning and decision-making processes can make agent behavior difficult to trace, explain, or reconstruct, complicating supervision, testing, and post-incident reviews.
  • Data governance and confidentiality risks. Agents operating across systems and datasets may inadvertently store, explore, disclose, or misuse sensitive or proprietary information.
  • Model design and domain-knowledge limitations. General-purpose agents may lack the specialized expertise needed for complex financial services tasks, and poorly designed objectives or reinforcement mechanisms may lead to outcomes misaligned with investor or market interests.
  • Persistent generative AI risks. Bias, hallucinations, and privacy concerns remain present and may be amplified when AI systems operate with increased autonomy.

Putting It Into Practice: FINRA's observations reiterate that financial institutions remain responsible under existing rules for supervising AI-driven activities, even where tools operate with significant autonomy; FINRA has not introduced new, technology-specific requirements. Institutions considering agentic AI should evaluate whether their existing supervision, escalation, documentation, and data governance controls are sufficient for systems that can independently plan and act.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.