Reliability and responsibility must be engineered into AI for financial services to deliver resilient systems and predictable outcomes.

Financial systems may not seem like “critical infrastructure.” But the tools and systems that manage and govern the flow of money in and out of financial services providers have more in common with the mission-critical systems of transportation, energy, and manufacturing than one might think.
Consider the possible impact of hallucinations or model drift in a national credit card system, a life insurance program, or a mortgage brokerage. Incorrect or completely fabricated data sent through everything from e-commerce programs to trading floors could do irreparable harm to consumers, as well as to the financial services firms themselves.
Hitachi has been keenly aware of the criticality of getting AI right in financial services. With a heritage in operational technologies and decades of development and deployment of data and AI solutions, the company applies an industrial AI approach to financial services – an approach that manifests itself well at GlobalLogic, a Hitachi Group Company.
Steeped in digital engineering and AI, GlobalLogic manages a robust financial services and consumer business that helps global financial services firms bring their AI aspirations to life, reliably and responsibly.
“Hallucinations and errors caused by AI can have severe consequences in financial services,” says Scott Poby, chief technology officer for this GlobalLogic division. “Perhaps it’s not as bad as machines breaking down and causing personal injury, but from a financial perspective, it can have a severe impact on customers. Whether it’s on the trading side or the money management side, errors can scale quickly and force things to shut down. So, it is a severe impact when you’re talking about thousands of customers or more and however many millions of transactions are happening across the platform.”
To be effective, Poby says, the financial services industry must take the same approach to engineering responsible, reliable AI as industrial firms do. That begins with an early ROI-focused assessment, a pilot-to-production mentality, and building an environment of trust through governance.
An eye on ROI
Any integration of AI within financial services must begin with an overall assessment of the current state of the organization’s technology and AI. Once that baseline is established, Poby says, you begin working backward with a clear understanding of your return on investment (ROI) goals.
“We know that a lot of our partners, and a lot of our clients, have already made investments in the AI space, so we want to go in and make sure that those investments were, a) the proper ones and, b) that they tie back to their business goals,” says Poby. “We say, okay, now you have this ecosystem of AI in your enterprise, how can we give you the most value and make sure that we identify the right use cases to leverage those tools effectively? Do we need to develop any training models for the end users of the AI?”
From there, GlobalLogic benchmarks processes and begins transitioning the company to a more productive strategy for overall consumption of its AI tools and solutions. The firm will bring in experts to train the client’s engineers, or even deploy its own teams to help the client better utilize what it has already invested in, providing consistent feedback along the way. From assessment to training, GlobalLogic can then show the client efficiency gains against its stated ROI goals.
From ‘pilot purgatory’ to production
Getting to that stage, however, requires overcoming a common challenge across the industry. Many organizations can stand up new proofs of concept quickly with minimum viable products (MVPs), but very few of these projects make it into production. This can lead to large sums of money being spent to build new tools, with very little return.
“We’ve seen that maybe 80% of projects never really go beyond the pilot phase, or never scale,” Poby says. “Then, you have these investments that are starting to fall behind, and you can never get out of the cycle. I think that’s probably the biggest investment risk that I’m seeing with AI.”
Instead, to prevent so-called “MVP graveyards,” firms must identify use cases that work, and then invest in scaling those up rather than spreading their efforts too thin. In industrial settings, these use cases might include prescriptive maintenance, fleet orchestration, and grid stability.
“We need to be able to show that our target provides a clear reduction in developer time-to-market, or a better experience for customers, or doing the same work by a smaller team to save on operational costs,” Poby says.
That may be easier said than done, however. According to a recent report commissioned by GlobalLogic, Financial Times Research: Code, Capital, and Change – The Engineering Behind Financial Transformation, although 96% of respondents agreed that investing in modern platforms would unify their strategies, fewer than half said they were planning to increase their tech budgets for 2025-2026.
Trust through governance
According to the same report, leading financial services firms are twice as likely to embed AI ethics and governance early in the process, alongside safety certifications, compliance automation, change management, and more. That early governance also includes human-in-the-loop checkpoints and end-to-end audit trails, so that every action taken by an AI agent is explainable, reversible, and compliant.
Poby notes that upfront governance efforts help reduce risk and accelerate trust in AI-driven operations. “AI workflows need human intervention as a checkpoint and validation point,” he says. “When you’re building out a catalog of different agentic workflows, you need to define: When can we automate? And when do you need to bring in a human layer for governance? That helps make sure, if there’s any risk involved, that there’s a human eye on any decision that the AI agent makes.”
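The automate-versus-escalate distinction Poby describes can be sketched as a simple policy gate: low-risk agent actions proceed automatically, high-risk ones wait on a human reviewer, and every decision lands in an audit trail. This is a minimal illustrative sketch, not a GlobalLogic implementation; the risk threshold, risk scores, and function names are all assumptions.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical threshold: actions scoring at or above this require a human.
APPROVAL_THRESHOLD = 0.5

@dataclass
class AuditEntry:
    """One explainable, timestamped record per agent decision."""
    action: str
    risk_score: float
    decision: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_trail: list[AuditEntry] = []

def execute_agent_action(action: str, risk_score: float,
                         human_approver=None) -> str:
    """Route low-risk actions straight through; escalate high-risk ones
    to a human checkpoint. Every outcome is appended to the audit trail."""
    if risk_score < APPROVAL_THRESHOLD:
        decision = "auto-approved"
    elif human_approver is not None and human_approver(action, risk_score):
        decision = "human-approved"
    else:
        decision = "blocked"  # no reviewer available, or reviewer declined
    audit_trail.append(AuditEntry(action, risk_score, decision))
    return decision

# A routine balance query is automated; a large transfer is escalated
# to a reviewer, who declines it here.
print(execute_agent_action("query_balance", risk_score=0.1))
print(execute_agent_action("transfer_large_sum", risk_score=0.9,
                           human_approver=lambda a, r: False))
```

The design choice worth noting is that the gate fails closed: a high-risk action with no human available is blocked rather than auto-approved, which matches the report’s requirement that agent actions remain reversible and compliant.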
Bringing it all together
The modern financial services platform is built on a foundation of trust, governance, and risk management. In other words, just as with industrial AI, reliability and responsibility must be engineered into financial services AI at the outset to enable successful, scalable outcomes and resilient systems.
“Maybe two years back, organizations were trying to use AI for creating applications or doing legacy transformation, and the tools weren’t ready,” Poby says. “There needed to be a lot of manual intervention. Today, there have been rigorous testing cycles, so we’re more confident bringing tools into production.”
Once organizations have ensured that AI tools are reliable, they can reduce risk. “We’ve been able to look at the output of these programs and compare them to when we did things the old way, without AI support,” he says. “Today, with AI, these processes are faster, with even fewer errors.”
- Read more about the Digital Transformation in Financial Services | AI & Innovation Insights
- Read more about GlobalLogic here: www.globallogic.com.
Scott Poby is Chief Technology Officer at the Financial Services & Consumer Business at GlobalLogic, a Hitachi Group Company. GlobalLogic is a trusted partner in design, data, and digital engineering for the world’s largest and most innovative companies. Since its inception in 2000, it has been at the forefront of the digital revolution, helping to create some of the most widely used digital products and experiences.