Artificial intelligence poses a curious paradox for financial institutions. To leverage AI’s full potential, bankers must first build their own trust in the technology. Many struggle with the mental leap required to trust a machine to improve human interactions.
I often tell skeptical executives that, ironically, bankers must create trust with AI to deepen trust with account holders. If you can trust it — and operate within the right regulatory framework — you enhance your ability to be smarter and faster, ultimately creating better customer relationships.
IDC predicts that banking and financial services will account for more than 20% of all artificial intelligence spending between now and 2028. As AI integrates into banking operations, financial institutions (FIs) face a growing need to balance innovation with risk management. Most analysts place AI productivity plateaus at two to five years away, but institutions that embrace a measured, step-by-step approach today will be better positioned for success now and well into the future.
Why Bankers Hesitate to Trust AI
Banking’s cautious approach to AI isn’t without justification. AI systems, particularly generative AI, can produce “hallucinations” — plausible but fictional information presented as fact. When dealing with customer financial data, such inaccuracies are as dangerous as they are unacceptable.
In conversations with a multinational tech company about acceptable accuracy rates, we discussed what threshold financial services companies need before fully deploying AI solutions.
While current AI systems achieve approximately 80% accuracy, FIs typically require 96% or higher before removing human oversight.
This accuracy gap explains why banking’s use of AI remains focused on augmenting human capabilities rather than autonomous decision-making. Until we bridge this gap, keeping humans involved in AI processes isn’t just preferred — it’s essential.
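To make the human-in-the-loop point concrete, here is a minimal sketch, assuming a hypothetical, institution-set confidence threshold, of routing low-confidence AI output to a person rather than acting on it automatically. The threshold value, field names and routing labels are illustrative assumptions, not a prescribed design.

```python
# Illustrative sketch only: gate AI output behind human review when the model's
# confidence falls below an institution-defined bar. The 0.96 figure simply
# mirrors the accuracy expectation cited above and is an assumption.
from dataclasses import dataclass

HUMAN_REVIEW_THRESHOLD = 0.96  # assumption: set per use case and regulator guidance

@dataclass
class AiSuggestion:
    text: str
    confidence: float  # model's self-reported confidence, 0.0 to 1.0

def route(suggestion: AiSuggestion) -> str:
    """Send low-confidence outputs to a human instead of acting automatically."""
    if suggestion.confidence >= HUMAN_REVIEW_THRESHOLD:
        return "present to representative as a pre-approved draft"
    return "flag for full human review before any customer-facing use"

print(route(AiSuggestion("Waive the duplicate fee.", confidence=0.82)))
```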
Building a Trust Framework
Bankers should assess their ability to trust AI across a multidimensional framework. Without such structure, they risk deploying technologies that damage customer relationships rather than enhance them. With that consideration in mind, here is a five-point framework through which bank decision-makers can evaluate AI’s potential.
- 1. Transparency
If I can’t see what’s going on, how can I trust it? Until AI can consistently justify its decisions, human oversight remains critical. For example, when using AI to assist call center representatives, the system should clearly indicate its reasoning so representatives can determine whether it correctly interpreted customer needs.
- 2. Fairness
Is the AI making recommendations or decisions that are equitable across customer segments? FIs must ensure AI systems don’t inadvertently discriminate against certain populations. They need robust testing to verify that AI lending recommendations, for instance, don’t perpetuate historical biases in credit access (a simplified sketch of such a check follows this list).
- 3. Security
How well is the AI system protected against manipulation? How securely does it handle sensitive financial data? As FIs deploy AI across more touchpoints, securing these systems becomes increasingly complex. Yet it’s another area in which AI is showing promise, monitoring for unusual system access patterns and reducing the risk of compromised access credentials.
- 4. Privacy
Does the AI system respect and protect customer data according to regulatory requirements and customer expectations? Banks must ensure that AI implementations uphold the stringent privacy standards that the industry demands.
- 5. Accountability
Does the technology provide mechanisms for verification, auditing and exception handling? Without clear accountability structures, banks can’t demonstrate compliance or identify improvement opportunities. When properly established, accountability systems allow banks to stand up to regulatory scrutiny by demonstrating that “AI is making decisions as a human would make.”
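As a simplified illustration of the fairness testing described above, the sketch below compares approval rates for AI lending recommendations across customer segments and flags any segment that falls well below the best-performing one. The segment labels, sample data and 0.8 ratio are assumptions made for illustration; real fairness testing is considerably broader.

```python
# Illustrative sketch: compare approval rates of AI lending recommendations
# across customer segments. Segment names and the 0.8 ratio are assumptions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (segment, approved) pairs."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for segment, approved in decisions:
        totals[segment] += 1
        approvals[segment] += int(approved)
    return {s: approvals[s] / totals[s] for s in totals}

def parity_check(rates, min_ratio=0.8):
    """Flag segments whose approval rate falls below min_ratio of the best rate."""
    best = max(rates.values())
    return {s: r / best >= min_ratio for s, r in rates.items()}

sample = [("segment_a", True), ("segment_a", True), ("segment_a", False),
          ("segment_b", True), ("segment_b", False), ("segment_b", False)]
print(parity_check(approval_rates(sample)))  # segments failing the check show False
```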
Navigating the Evolving Regulatory Landscape
While specific AI banking regulations are still developing, FIs can leverage existing frameworks. The Fed’s 2011 Supervisory Guidance on Model Risk Management (SR 11-7) addresses “the possible adverse consequences of decisions based on models that are incorrect or misused” through effective validation and governance.
Regulators increasingly rely on the NIST AI Risk Management Framework with its four interconnected functions — Govern, Map, Measure and Manage — which aligns with the trust framework. At the state level, Colorado has enacted legislation requiring transparency for high-stakes AI applications, including lending.
As regulations evolve, the prudent approach is implementing AI with robust governance that builds compliance capability ahead of specific mandates.
From Framework to Implementation
Banks’ AI adoption should follow a phased approach that balances innovation with caution.
- Phase 1: Internal deployment with humans in the loop
Begin by using AI to streamline existing processes rather than replacing them. For example, I’ve observed institutions using AI to analyze training documentation and create tailored content for specific positions — allowing new employees to access relevant information without wading through thousands of pages. This internal focus provides a safe environment to build organizational trust and expertise.
- Phase 2: Enhance customer experience with human supervision
Once internal applications have proven successful, gradually infuse AI into customer-facing products and experiences, but maintain robust human oversight. An effective implementation involves AI listening to call center conversations, transcribing them in real time and suggesting potential solutions from the knowledge base. The customer service representative retains control, accepting helpful suggestions while overriding inaccurate ones (a simplified sketch of this pattern follows the list).
- Phase 3: Increased automation with proper governance
As trust and capabilities grow, FIs can explore more autonomous implementations. This progression should be guided by clear business objectives, robust governance policies and continuous exploration and assessment.
- Test against the framework dimensions
Test explicitly against each dimension of the trust framework. Does the solution provide sufficient transparency? Is it demonstrably fair? Have security and privacy been adequately addressed? Is there clear accountability?
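To ground the Phase 2 pattern, here is a minimal sketch of an assistant that proposes an answer from a knowledge base while the representative keeps the final say. The knowledge-base entries, keyword matching and function names are hypothetical placeholders standing in for real transcription and retrieval systems.

```python
# Illustrative sketch of the Phase 2 pattern: the AI proposes an answer from a
# knowledge base, and the representative explicitly accepts or overrides it.
KNOWLEDGE_BASE = {
    "card declined": "Verify recent travel flags and retry the authorization.",
    "wire delay": "Check cutoff times and give the expected settlement date.",
}

def suggest(transcript_snippet: str) -> str | None:
    """Naive keyword match standing in for a real retrieval step."""
    for topic, answer in KNOWLEDGE_BASE.items():
        if topic in transcript_snippet.lower():
            return answer
    return None

def handle_call(transcript_snippet: str, rep_accepts: bool, rep_answer: str = "") -> str:
    suggestion = suggest(transcript_snippet)
    # The representative retains control: accept the suggestion or override it.
    if suggestion and rep_accepts:
        return suggestion
    return rep_answer or "Escalate to a specialist."

print(handle_call("Customer says their card declined abroad", rep_accepts=True))
```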
The Trust Dividend
When banks successfully implement trusted AI systems, they create what I call the “trust dividend” — enhanced customer relationships that would be impossible without technology.
Consider how AI augments interactions when a customer visits a branch. By analyzing account activity patterns, AI can identify potential attrition — like gradually decreasing balances — and suggest personalized retention strategies. The banker doesn’t just see a transaction; they see context that enables more meaningful conversation and deeper engagement.
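For illustration only, an attrition signal like the one described above might start as something as simple as a declining trend in month-end balances. The window, threshold and sample figures below are assumptions; a production model would weigh many behavioral signals, not a single trend line.

```python
# Illustrative sketch: flag a gradual decline in month-end balances as a
# possible attrition signal worth surfacing to the banker before a visit.
def declining_trend(balances, window=6, drop_ratio=0.25):
    """Flag accounts whose balance fell by drop_ratio over the last `window` months."""
    recent = balances[-window:]
    if len(recent) < window or recent[0] == 0:
        return False
    decline = (recent[0] - recent[-1]) / recent[0]
    strictly_downward = all(b2 <= b1 for b1, b2 in zip(recent, recent[1:]))
    return strictly_downward and decline >= drop_ratio

month_end_balances = [9200, 8800, 8100, 7600, 7000, 6400]  # hypothetical data
if declining_trend(month_end_balances):
    print("Surface a retention talking point for the banker before the visit.")
```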
Similarly, AI can analyze community demographics and transactions to identify unmet financial needs, allowing community banks to develop highly targeted products that demonstrate a deep understanding of their communities. The result is product-market fit backed as much by data as by the bank’s historical knowledge of the region it serves.
Consider the many possibilities for bank decision-makers who make the mental leap and successfully navigate the trust paradox: product optimization, fraud detection, help search capabilities, code testing, security monitoring and customer support. Each represents an opportunity to build trust with the technology itself and, in turn, deepen your account holders’ trust in you.
Author: Jeff Brown, VP of technology strategy and architecture at CSI