Artificial intelligence is helping CFOs and their finance teams drive up efficiency and productivity. The AI opportunities for their broader organizations are even larger.
“There are billions of dollars to be unlocked, at a macro level, in terms of opportunity,” said Bryan McGowan, global and US trusted AI leader at KPMG US. “How that macro-opportunity will translate into individual opportunities for organizations will vary, however, based on the strategies and approaches those organizations use to capitalize on it.”
To unlock the most value, AI strategies need to be grounded in trust and accuracy.
In finance and other critical business environments, AI-powered tools and processes that usually protect data and usually produce accurate outputs aren’t good enough. To be truly useful, AI systems must consistently provide accurate answers, manage data responsibly, and comply with privacy and data protection regulations. Human oversight is essential to ensure reliability and adherence to legal, ethical, risk and regulatory standards.
Having a trusted AI strategy, and supporting it with AI readiness and assurance measures, is key to minimizing risks and driving long-term impact with AI.
Building from a trusted framework
Applying AI to finance and other business environments introduces risks, from the hallucinations, bias and inaccuracies of individual generative AI solutions to data breaches, security failures and long-term reputational damage.
These risks are poised to grow as AI adoption proliferates across both existing and new use cases.
“Overall, finance is still in the early phases of agentic AI workflows and agent-led systems,” said Brian Fields, audit transformation leader at KPMG US. “A big concern among organizations is that as agents become pervasive, it could become very hard to manage all of those agents, and make sure that they’re behaving in the right ways and that the security around them is appropriate.”
Having a trusted AI foundation addresses that concern. In the context of AI, trust encompasses considerations such as reliability, security, safety, privacy, sustainability, explainability, data integrity, transparency, fairness and accountability — which are the 10 pillars of the Trusted AI framework KPMG adheres to both internally and in its work with clients.
Validating AI claims and controls
One of the key ways that companies can gain confidence in their trusted AI foundation, across those 10 considerations, is through AI assurance.
“AI assurance is about a third-party provider coming in and providing independent assurance that an organization is conducting itself, both at the governance level and at that application or systems level, in a way consistent with trusted AI,” Fields said.
AI assurance is also about validating that the controls an organization has around its AI systems are working the way they’re supposed to, and that the claims it makes about its governance programs, or about the safety, security and reliability of its AI systems, are true.
Assurance also gives organizations the opportunity to pursue third-party certifications, such as ISO/IEC 42001, the international standard for AI management systems.
Confirming through assurance that finance’s AI systems have the proper controls helps CFOs know that the efficiency and productivity gains AI delivers to their function don’t come at the expense of data security or quality, or pose other heightened risks for their organizations.
“Most organizations are investing significant amounts of money into their AI transformation journeys,” McGowan said. “Their CFOs are looking for some validation that, one, they’re transforming with AI in a responsible way and not opening up the organization to risk, and two, they’re getting some value or ROI out of their AI investments and it’s actually worth continuing down this path.”
Take the first step toward trusted AI to drive impact
As organizations navigate their AI journeys, whether just starting out or advancing their existing AI initiatives, the path to success lies in prioritizing trusted AI. For early adopters, this means beginning with controlled experimentation, establishing governance structures and implementing human-in-the-loop controls. For those further along, it’s about adopting more sophisticated governance and monitoring systems to maintain effective control and oversight.
The future of AI in business is clear: a shift toward more autonomous AI systems and the integration of AI into workflows. To be part of this future, companies must act now. They can start by conducting thorough testing and validation of their AI systems, developing governance strategies that guide AI development responsibly, and considering initial investments in AI assurance to strengthen confidence in their AI systems.
The potential value of AI is vast, from improving productivity and reducing costs to unlocking new revenue streams through data insights. Using AI to cultivate insights from data, and then using those insights to identify better ways to appeal to customers, is just one example. It can only be done safely and securely, however, with trusted AI at the core of the effort.

As McGowan noted, “When you look at AI from a value lens, it makes governance — and by extension, trust and assurance — even more important. It’s not just an expense item or cost lever; it’s an opportunity for an organization to improve its AI systems and better understand and measure the value that AI is delivering back to the business.”

By embracing trusted AI, organizations can not only minimize risks but also drive long-term impact and realize the full potential of their AI investments.
To learn more about KPMG’s strategic approach and framework for designing, building, deploying and using AI strategies and solutions in a responsible and ethical manner, click here.