Artificial intelligence is deeply embedded in the daily workings of financial institutions, whether analyzing credit risk, automating underwriting, flagging fraud, or generating investment insights. But as AI models become more sophisticated, they also become harder to understand.
In the United States, explainability of AI in financial institutions is becoming a regulatory imperative. In 2023, the Federal Reserve, FDIC, and OCC jointly issued guidance reminding banks that their use of AI and machine learning must adhere to long-standing principles of model risk management. The Consumer Financial Protection Bureau has warned lenders that they must provide “specific and accurate reasons” for adverse credit decisions, even when complex AI systems are involved. Meanwhile, the Securities and Exchange Commission has flagged potential conflicts of interest when broker-dealers use predictive analytics in retail investing.
These developments underscore that explainability is not optional in US financial markets. Yet AI decision-making processes often operate as “black boxes,” raising a vital question: if finance professionals and regulators can’t explain how an AI system reached a decision, can they trust it?
The risks of inscrutable AI aren’t theoretical. In 2024, CFA Institute found that lack of explainability was the second-most cited barrier to AI adoption among investment professionals across regions and functions. Research from EY points to a root cause: only 36% of senior leaders say they are investing fully and at scale in data infrastructure – the quality, accessibility, and governance of data – meaning models often lack the data needed to produce transparent, accurate results. Data infrastructure is becoming a clear bottleneck, with two-thirds (67%) of leaders admitting that a lack of infrastructure is actively holding back adoption, according to EY. These gaps hinder auditability and traceability, which are the foundations of explainability that regulators and risk teams require.
The concern about lack of explainability is well placed. Credit decisions made by AI models using complex or alternative data – such as transaction history or behavioral patterns – require transparency to ensure fair treatment and regulatory compliance. Deep learning models can correlate such data with creditworthiness in ways that inadvertently discriminate, even when protected attributes are not explicitly part of the data.
The investment industry faces similar challenges. Generative AI and machine learning are increasingly used in the fast-growing private credit sector to help dealmakers vet opportunities, but concerns are mounting around how biases in training data could skew investment strategies or lead to unintended, opaque outcomes.
It’s important to understand that not all stakeholders need the same explanation. Regulators want transparency and audit trails. Portfolio managers need to understand how models respond to shifting market conditions. Risk teams need insights into a model’s robustness during stress events. Meanwhile, customers want to know why their loan was denied or what influenced a pricing decision.
Meeting those diverse needs requires a flexible, human-centric approach to AI transparency. What is needed is a framework that maps explainable AI techniques to specific stakeholder groups, reinforcing that effective AI governance must begin with the end user in mind.
Two categories of explainability can bridge the gap between AI and human understanding. First are ante-hoc, interpretable-by-design models – such as decision trees or rule-based systems – that may sacrifice some predictive power but offer clear insights into how decisions are made. These are often preferred in highly regulated contexts where explainability can outweigh accuracy.
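To make the idea concrete, here is a minimal sketch of an interpretable-by-design credit model using a shallow decision tree in scikit-learn. The feature names, data, and approval labels are hypothetical, used only to show how every decision path can be read and audited as plain rules.

```python
# Minimal sketch: an interpretable-by-design credit model using a shallow
# decision tree (scikit-learn). Features, data, and labels are illustrative only.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical applicant features: [income_k, debt_to_income, years_employed]
X = np.array([
    [45, 0.55, 1],
    [85, 0.20, 6],
    [60, 0.40, 3],
    [30, 0.65, 0],
    [95, 0.15, 10],
    [52, 0.35, 4],
])
y = np.array([0, 1, 1, 0, 1, 1])  # 1 = approve, 0 = deny

# A shallow tree keeps the entire decision logic readable end to end.
model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Every split can be printed and audited as plain if/then rules.
print(export_text(model, feature_names=["income_k", "debt_to_income", "years_employed"]))
```

The trade-off described above is visible here: capping the tree at two levels limits predictive power, but the printed rules are something a compliance reviewer can read without any interpretation layer.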
Second are post-hoc tools that interpret already-trained “black box” models. Among the most prominent are SHAP (Shapley Additive Explanations) and LIME (Local Interpretable Model-agnostic Explanations). SHAP, based on game theory, quantifies each input’s contribution to a prediction. LIME creates a simplified local model around a specific data point – useful for explaining individual decisions such as a loan approval or denial.
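As an illustration, the sketch below applies both techniques to a toy “black box” credit classifier. It assumes the shap and lime Python packages are installed; the synthetic data, feature names, and model are hypothetical stand-ins for whatever production model a firm would actually need to explain.

```python
# Minimal sketch of post-hoc explanation with SHAP and LIME on a "black box"
# classifier. Requires the shap and lime packages; all data and feature names
# are synthetic and illustrative, not from any real lending system.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["income_k", "debt_to_income", "years_employed", "num_delinquencies"]
X = rng.normal(size=(500, 4))
y = (X[:, 0] - X[:, 1] + 0.5 * X[:, 2] > 0).astype(int)  # synthetic approve/deny label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# SHAP: game-theoretic attribution of each input's contribution to a prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])  # attributions for one applicant
print("SHAP attributions:", shap_values)

# LIME: fits a simple local surrogate model around a single decision.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names,
                                      class_names=["deny", "approve"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X[0], model.predict_proba, num_features=4)
print(lime_exp.as_list())  # e.g. [("income_k > ...", 0.21), ...]
```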
These tools are increasingly vital in areas like high-frequency trading, where decisions happen in milliseconds. Visual tools like heatmaps, partial dependence plots, and counterfactual explanations (e.g., “If income were $5,000 higher, the loan would be approved”) also make AI decisions more interpretable for both internal teams and regulators.
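A counterfactual explanation of the kind quoted above can be produced by searching for the smallest change that flips a model’s decision. The sketch below uses a hypothetical toy credit model and a brute-force, single-feature search; dedicated libraries such as DiCE handle multi-feature, constrained searches.

```python
# Minimal sketch of a counterfactual explanation: nudge a single feature until
# the model's decision flips. The model and thresholds are hypothetical; real
# counterfactual tools (e.g. DiCE) search more carefully over many features.
import numpy as np

def toy_credit_model(income_k: float, debt_to_income: float) -> int:
    """Stand-in for a trained model: 1 = approve, 0 = deny."""
    return int(income_k * (1 - debt_to_income) > 40)

applicant = {"income_k": 55.0, "debt_to_income": 0.35}
decision = toy_credit_model(**applicant)

if decision == 0:
    # Search for the smallest income increase (in $1k steps) that flips the decision.
    for bump in np.arange(1, 51):
        if toy_credit_model(applicant["income_k"] + bump, applicant["debt_to_income"]):
            print(f"Counterfactual: if income were ${bump * 1000:,.0f} higher, "
                  f"the loan would be approved.")
            break
else:
    print("Loan approved; no counterfactual needed.")
```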
Despite their promise, explainability tools still carry risks. Professionals must be mindful of “algorithmic appreciation,” where users trust AI explanations too readily without critical scrutiny. Blind faith can result in poor decisions, regulatory exposure, and ethical oversights. In addition, explainability methods themselves can be inconsistent – different tools may produce different interpretations of the same decision. This ambiguity complicates efforts to build reliable standards across firms and jurisdictions.
Compounding this issue is a lack of universal benchmarks to evaluate the quality of AI explanations, making it difficult to assess whether an explanation is useful, complete, or fair.
To counter these challenges, the industry should focus on four strategies: First, regulators and industry bodies should work toward standardized benchmarks for measuring explanation quality. Second, AI explanations should be tailored by audience, delivered through accessible interfaces like dashboards, visuals, and plain-language summaries. Third, firms should invest in real-time explainability, particularly for systems making fast, high-impact decisions. Finally, AI must be viewed not as a replacement for human judgment, but as a collaborator. This means ensuring the “human-in-the-loop” principle remains embedded in financial AI systems.
Explainability isn’t just a regulatory check box or technical concern. It’s central to maintaining institutional trust, ethical accountability, and responsible risk governance in an increasingly automated industry.
If we can’t explain how these systems work – or worse, if we misunderstand them – we risk creating a crisis of confidence in the very technologies meant to improve financial decision-making. That’s a warning – and an opportunity – that no one in finance can afford to ignore.
Cheryll-Ann Wilson, CFA, PhD, is the author of Explainable AI in Finance: Addressing the Needs of Diverse Stakeholders, a report published by CFA Institute. To read the report, click here.