
Legacy Banks Slow AI Rollouts To Meet Rising Regulatory And Explainability Demands

The Security Digest - News Team
Published April 13, 2026

Armel Roméo Kouassi, Senior Vice President at Northern Trust Corporation, on the limitations of the "move fast and break things" mindset toward AI in ultra-high-net-worth global banking.


Key Points

  • Legacy banks have struggled to adopt artificial intelligence with the "move fast and break things" mentality of startups, due to massive regulatory and architectural hurdles.

  • Armel Roméo Kouassi, Senior Vice President and Global Head of Asset Liability Management at Northern Trust Corporation, works in a highly regulated environment where algorithmic errors or data leaks could have severe political and financial consequences.

  • Because of emerging threats related to synthetic data and a digital herd mentality, the banking sector maintains human oversight as a mandatory structural control mechanism to challenge models.

In banking, explainability is critical. You have to justify why a model made a decision, why a loan was denied, why a trade was made.

Armel Roméo Kouassi

SVP & Global Head of Asset Liability Management, Northern Trust Corporation

Startups have always had greater agility when it comes to new tech, and it's not surprising that they take a "move fast and break things" approach to artificial intelligence. Massive legacy banks, however, cannot afford to. In a legacy banking environment, deploying a new model ultimately means defending its decisions to federal supervisors: explaining why a loan was denied, why a trade was executed, or why a pricing model behaved as it did. The technology is moving fast, but its use remains tightly bound by governance, compliance, and risk appetite.

We spoke with Armel Roméo Kouassi, an executive operating right in the middle of that tension. As Senior Vice President and Global Head of Asset Liability Management at Northern Trust Corporation, he holds fiduciary responsibility for a nearly $3 billion ERISA-compliant employee benefit plan portfolio. In his world, deploying these tools at scale means being able to justify algorithmically driven decisions to federal regulators if and when they are scrutinized, proving both that a model works and that it can be owned, explained, and controlled.

"In banking, explainability is critical," says Kouassi. "You have to justify why a model made a decision, why a loan was denied, why a trade was made." Accordingly, integrating a new model into a fragmented architecture requires aligning platforms, data, and controls across multiple regions and business lines. Major initiatives typically need to be designed with regulatory expectations in mind.

  • Making a federal case: Kouassi stresses the importance of explainability, especially when regulators demand a clear trail showing why a model made a specific decision; a minimal sketch of such a decision trail follows this list. "The regulatory and compliance side is a bottleneck because we are the most-regulated bank. For anything we do, we almost have to make the business case for the Federal Reserve."

  • The boss and the black box: For some, that scrutiny is also reshaping executive roles, where business leaders responsible for performance often find themselves accountable for the behavior of these models. "If processes are redesigned so AI does everything, as the global head of ALM, I might become not only an owner in charge of people, but also a sort of technologist, the owner of the technology and that black box."
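What such a decision trail can look like in practice is easy to illustrate. The sketch below is illustrative only, not Northern Trust's tooling: a toy linear credit-scoring model whose per-feature contributions are recorded alongside the decision, so a specific denial can be traced back to the inputs that drove it. The weights, feature names, and threshold are all hypothetical.

```python
# Minimal explainability sketch (hypothetical weights and features, illustrative only):
# a linear credit model where each feature's contribution to the score is logged,
# so a specific denial can be explained and audited after the fact.

COEFFICIENTS = {
    "debt_to_income": -2.5,
    "years_employed": 0.4,
    "prior_defaults": -1.8,
}
INTERCEPT = 1.0
APPROVAL_THRESHOLD = 0.0  # score >= threshold -> approve

def score_with_explanation(applicant: dict) -> dict:
    """Score an applicant and return the per-feature decision trail."""
    contributions = {
        name: weight * applicant[name] for name, weight in COEFFICIENTS.items()
    }
    score = INTERCEPT + sum(contributions.values())
    return {
        "score": score,
        "decision": "approve" if score >= APPROVAL_THRESHOLD else "deny",
        # Sorted so the most negative contributions, the reasons for a denial, come first.
        "contributions": dict(sorted(contributions.items(), key=lambda kv: kv[1])),
    }

applicant = {"debt_to_income": 0.9, "years_employed": 2, "prior_defaults": 1}
print(score_with_explanation(applicant))  # denied; debt_to_income is the largest negative driver
```

For a real model the attribution step would typically come from a method such as SHAP rather than raw linear weights, but the record-keeping principle is the same: every decision ships with its own explanation.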

Because data serves as core operational fuel, many major banks tend to move cautiously with open or general-purpose platforms. Before an enterprise-wide system is rolled out, it typically undergoes a lengthy governance and compliance process to define what information the system can access, how it learns, and how third-party risk will be managed. Deploying a new asset management platform can involve months of security and privacy assessments before production use. So far, much of the most mature activity in banking has focused on back-office fraud detection and internal cybersecurity defense.

  • Silence is golden: In wealth management for ultra-high-net-worth and institutional clients, the stakes around data and advice are particularly high. A compromised system is a political scandal as much as it is a financial loss. That sensitivity helps explain why many institutions in this segment limit customer-facing automation, investing heavily instead in cloud compliance and data masking. "If we lose the information of one of the big guys and that goes to the public, it may have political consequences. Even internally, sometimes when we speak to some clients, we don't mention the name in the conversation."

Fraud detection, transaction pattern analysis, and cybersecurity defense have become the leading edge of mature AI deployment in banking—use cases where the models operate on internal data, the outputs are verifiable, and the human-in-the-loop oversight structure is well established. And Kouassi sees agentic AI eventually automating vast swaths of treasury functions, including portfolio construction, yield optimization, and capital allocation under constraints. The mathematical building blocks already exist. Decades of algorithmic trading and quantitative optimization have shown that machines can handle difficult calculations.

  • Premium unleaded data: Kouassi frames data as the essential input that powers every profitable decision a bank makes, from understanding clients to segmenting them to tailoring products, and insists that without data integrity, no model can deliver sustainable outcomes. "For us to make a profit, it's all about data," Kouassi said. "It's how we make rational, profitable, sustainable decision-making."

  • Drafting the playbook: Industry bodies and internal risk functions are actively working on principles for responsible use. The broader supervisory conversation is also moving, and banks and financial bodies are developing frameworks to embed AI governance within existing risk management standards. "We're navigating through a gray area for now, but industry groups like the American Bankers Association are thinking through AI and providing advice to the Fed in terms of frameworks. When the Federal Reserve itself comes up with a clear framework of guidance, I think we are smart enough within the banking industry to take it to the next step."

Traditional defenses may not directly address an emerging risk: models generating synthetic data that is later reused as if it were real. The problem is best treated as one of data provenance and integrity: if organizations cannot reliably distinguish original from machine-generated data, behavioral models could drift in ways that are hard to detect. The tactical mitigation is strict data labeling, sketched after the list below.

  • Poisoning the well: If multiple large institutions rely on similar underlying models for liquidity management, pricing, or hedging, their behavior could become tightly correlated. To mitigate that kind of risk, risk committees and regulators are actively focusing on model diversity and correlated-behavior stress testing. "AI can also create data and information synthetically. That's another issue because synthetic data will spread out and contaminate real data. We might live in a world where all the data we see is contaminated."

  • Humans break the loop: As these tools become more embedded, Kouassi worries less about single-model failures and more about how models interact across institutions. On a trading floor, human portfolio managers can see market information, gauge sentiment, and still exercise independent judgment. "When I go to the market, plug in my Bloomberg screen, I have a feeling of what the desk is doing, but I have my judgment. If everything is programmed in AI in the same platform, that might lead us to take a sort of herd mentality that will create systemic risk or market instability."
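The data labeling Kouassi points to can be as simple as attaching provenance metadata to every record and refusing to train on anything whose origin cannot be verified. A minimal sketch of that idea follows; the schema, field names, and source systems are hypothetical, not any bank's actual pipeline.

```python
# Minimal provenance-labeling sketch (hypothetical schema, illustrative only):
# every record carries an origin tag, and training sets are filtered so that
# machine-generated data cannot silently contaminate observed data.

from dataclasses import dataclass
from enum import Enum

class Origin(Enum):
    OBSERVED = "observed"    # captured from a real transaction or market feed
    SYNTHETIC = "synthetic"  # produced by a generative model
    UNKNOWN = "unknown"      # provenance could not be verified

@dataclass(frozen=True)
class Record:
    payload: dict
    origin: Origin
    source_system: str       # e.g. an upstream feed or model identifier

def training_set(records: list[Record], allow_synthetic: bool = False) -> list[Record]:
    """Keep only records whose provenance is acceptable for model training."""
    allowed = {Origin.OBSERVED} | ({Origin.SYNTHETIC} if allow_synthetic else set())
    return [r for r in records if r.origin in allowed]

records = [
    Record({"spread_bps": 42}, Origin.OBSERVED, "trade_capture"),
    Record({"spread_bps": 40}, Origin.SYNTHETIC, "scenario_generator"),
    Record({"spread_bps": 39}, Origin.UNKNOWN, "vendor_feed"),
]
print(len(training_set(records)))  # 1: synthetic and unverified records are excluded
```

The point is less the filter itself than the discipline of carrying the origin tag everywhere the data travels, so contaminated data can at least be identified later.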

Having a human in the loop isn't a heroic stand against machines; it is simply a mandatory, structural control mechanism in a highly regulated industry. It serves as a standard operational override designed to prevent automation complacency. The promise of automation is real, but in legacy finance, it will always be balanced by the operational requirement to explain, document, and occasionally overrule the machine.
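In code, that kind of operational override can be as plain as a limit check that routes anything outside pre-approved bounds to a person. The sketch below is a minimal illustration under that assumption; the limit, names, and review callback are hypothetical, not an actual ALM workflow.

```python
# Minimal human-in-the-loop sketch (hypothetical limit and names, illustrative only):
# a model may propose a trade, but anything above the pre-approved limit is routed
# to a human reviewer instead of being executed automatically.

from dataclasses import dataclass

AUTO_EXECUTE_LIMIT_USD = 5_000_000  # hypothetical per-trade limit for unattended execution

@dataclass
class ProposedTrade:
    instrument: str
    notional_usd: float
    rationale: str  # model-produced explanation, retained for the audit trail

def route(trade: ProposedTrade, human_approves) -> str:
    """Execute small trades automatically; escalate large ones to a human."""
    if trade.notional_usd <= AUTO_EXECUTE_LIMIT_USD:
        return "executed"
    # Structural override: above the limit, the model cannot act alone.
    return "executed-after-review" if human_approves(trade) else "rejected"

# The reviewer callback stands in for a desk head's sign-off.
big = ProposedTrade("UST 10Y", 250_000_000, "duration hedge proposed by ALM model")
print(route(big, human_approves=lambda t: False))  # "rejected" until a human signs off
```

The design choice is that the override is structural, built into the routing, rather than a policy that a busy team can quietly stop following.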

Kouassi identifies the greatest near-term danger as overtrust: teams relying on AI outputs without challenge, letting automation complacency take root in functions where errors carry enormous consequences and the crucial human check has been removed. "Managing your ALM or your liquidity, your balance sheets, relying on AI without any challenge, without a human in the loop? The mitigation for that is having a human in the loop."