4 CFO tips for demystifying AI hype
Recent breakthroughs in artificial intelligence have dazzled C-suite executives seeking to personalize marketing, juice sales, anticipate customer needs and identify unseen risks.
The explosive spread of Google’s Bard, Microsoft’s Bing chat and OpenAI’s ChatGPT has also triggered a backlash. Critics pan AI chatbots as threats to jobs, tools for “deep fake” manipulation and sources of insights that, although seemingly trustworthy, are prone to inaccuracy and bias.
The chatbots pose “profound risks to humanity,” Tesla CEO Elon Musk, Apple co-founder Steve Wozniak, and more than 26,000 other signatories said in an open letter released last month.
AI developers are “locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict or reliably control,” the open letter says. The signatories call for a six-month pause in developing systems more powerful than GPT-4 and for creation of AI governance structures.
CFOs weighing adoption of AI tools need to look past both the hype and the anxiety. They face the challenge of weaving the technology into company operations and reaping its rewards while not shirking their obligation to limit risk and ensure a worthwhile return on investment, according to AI experts and finance executives.
“The challenge for CFOs right now is that the frenzy around generative AI puts pressure on them to roll this out at scale,” according to Tad Roselund, a managing director at Boston Consulting Group. “You have this commercialization pressure to explore all the benefits of generative AI.”
CFOs need to demystify AI tools — identifying low-risk, high-reward uses — when even the tools’ creators cannot always explain the reasoning behind insights into core strategic topics such as emerging risks, capital allocation and new market opportunities, the AI experts and CFOs said.
To avoid failure when adopting the newest AI, CFOs should try to find a “balance of being urgent but being deliberate,” Roselund said in an interview.
AI for the rest of us
Generative AI and conversational AI such as ChatGPT — both forms of machine learning — widely expand access to advanced computing.
The software can write college essays, computer code, market research reports, jokes, translations, blogs and legal briefs, and can help with medical diagnosis and drug discovery. These tools sweep away technical and developmental obstacles and “democratize” AI, offering front-line employees computing power once available to only a few, Roselund said.
Broader access has spurred a record adoption rate for conversational AI. ChatGPT hit 1 million users in less than a week, compared with 10 weeks for Instagram and 20 weeks for Spotify, according to DiploFoundation and KPMG.
The market for conversational AI will surge to as much as $20 billion by 2025, or 20% of total AI spending, according to UBS.
The potential market for generative AI may total $150 billion, compared with $685 billion for the software industry, Goldman Sachs said. Over a 10-year period, generative AI may lift global gross domestic product by 7% and push up productivity growth by 1.5 percentage points.
Use of AI tools has boomed as OpenAI, Microsoft, Google and other AI innovators fight for market share. By providing free or low-cost access, they can vacuum up oceans of user feedback to improve their algorithms and gain an edge.
New AI tools help CFOs analyze data, make financial projections, prepare financial statements, manage risk and more easily supervise treasury and accounting tasks. Employees freed from routine work can turn to more creative, fulfilling projects.
Adobe, Shopify, Instacart and Zoom have adopted generative AI tools. Salesforce in March said it planned to integrate ChatGPT into Slack and meld generative AI into its customer relationship software.
Walmart uses chatbots to answer customers’ simple questions, such as those about order status and returns. With verbal queries through the “Ask Sam” app, employees can locate products for sale, look up prices and check their messages or work schedules.
CFOs, at the risk of damaging employee morale, can use AI tools to trim headcount. AI automation may eventually disrupt, to some degree, roughly two-thirds of U.S. occupations, Goldman Sachs said.
A list of at-risk jobs underscores the power of the technology to disrupt the workplace.
So-called large language models such as ChatGPT imperil jobs for workers in dozens of professions, including accountants, auditors, financial quantitative analysts, blockchain engineers, interpreters, mathematicians and journalists, according to a study by researchers at the University of Pennsylvania and OpenAI.
The technology over time will streamline at least 10% of the tasks performed by 80% of workers, and half of the tasks done by 19% of workers, the researchers said.
CFOs should not adopt generative or conversational AI without first sizing up its many risks, the financial executives and AI experts said. Some AI tools may end up on the list of much-maligned innovations that, at high cost to early adopters, over-promised and underperformed.
“AI has the potential to be yet another technology in that rogues’ gallery, especially with so much speculation that these AI systems could end up replacing workers,” according to the article describing survey findings by MIT Sloan Management Review and BCG.
AI on the sly
AI tools pose several risks. First, the spread of generative and conversational AI may undermine a company’s ability to manage or catalog AI use.
Employees who retain their jobs after adoption of the new technology will potentially gain access to vast computing power. They may engage in “shadow AI,” or computing without company oversight, and either mistakenly or deliberately leak proprietary or customer data.
Second, companies unable to fully explain how AI tools generate insights may face scrutiny from regulators, lawmakers, shareholders and other stakeholders. The “black box” hazard is especially problematic for asset managers and other AI users with fiduciary obligations.
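The “black box” problem has spawned an entire field of explainability techniques. As a minimal, purely illustrative sketch (the toy model and data below are stand-ins, not any vendor’s actual system), permutation importance probes a black-box model by shuffling one input at a time and measuring how much the output degrades:

```python
# Minimal illustration of one explainability technique (permutation
# importance). The "model" is a toy black box; a real audit would apply
# the same idea to a trained model and production data.
import random

random.seed(0)  # reproducible illustration

def model(features):
    # Toy black box: secretly depends mostly on feature 0.
    return 3.0 * features[0] + 0.2 * features[1]

data = [[random.random(), random.random()] for _ in range(1000)]
baseline = [model(row) for row in data]

def importance(feature_idx):
    """Average error introduced by shuffling one feature: bigger = more important."""
    shuffled_vals = [row[feature_idx] for row in data]
    random.shuffle(shuffled_vals)
    error = 0.0
    for row, val, base in zip(data, shuffled_vals, baseline):
        perturbed = list(row)
        perturbed[feature_idx] = val
        error += abs(model(perturbed) - base)
    return error / len(data)

print("importance of feature 0:", importance(0))  # larger
print("importance of feature 1:", importance(1))  # smaller
```

Techniques like this do not fully open the box, but they give fiduciaries something concrete to show regulators about which inputs drive a model’s conclusions.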
Third, AI tools can expose a company to litigation or reputational damage through “hallucination.” They may draw on flawed or biased data, reach erroneous conclusions and perpetuate inaccuracies, discrimination or stereotyping.
OpenAI acknowledged last month that the accuracy of GPT-4 in business applications is less than 80%. Google says in a disclaimer that “Bard is experimental, and some of its responses may be inaccurate, so double-check information in Bard’s responses.”
Fourth, wrongdoers both within and outside a company could use AI tools to improve deep fakes and other misinformation, or for schemes at phishing, impersonation and intellectual property theft.
Regulators in the U.S., Europe, China and other countries have proposed setting guardrails around AI tools. UNESCO’s 193 member states have unanimously endorsed a framework for averting AI abuses.
“The world needs stronger ethical rules for artificial intelligence,” UNESCO Director-General Audrey Azoulay said in a statement last month. “This is the challenge of our time.”
The Commerce Department this month requested public comment on ways to limit risks of AI. “Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” Alan Davidson, Assistant Secretary of Commerce for Communications and Information, said in a statement.
CFOs can limit risks and tap the benefits of AI tools, financial executives and AI experts say, by taking four steps:
1. Ensure “responsible AI”
The newest AI tools highlight the imperative of constructing safeguards against abuse, the CFOs and AI experts said.
“You need responsible AI on steroids” and injected at all levels of a company, according to Roselund. “The tone from the top is incredibly important,” he said, underscoring a need to make a senior executive fully accountable for the outcome.
Responsible AI, as described by BCG, KPMG and other consultants, also ensures that the technology serves a broad range of stakeholders; mines high-integrity data; protects against attacks and rogue use; shields user data; avoids harming people, property or the environment; and is explainable, transparent and reliable.
A company adopting AI should create a cross-functional oversight team including data scientists, attorneys and the leaders of the company’s various departments.
The team should uphold standards for testing and quality control and regularly gauge risks, aware that wider use of AI within the company will require agility and shorter response times.
Employees throughout a company should be free to innovate with AI tools and find new uses for the technology but within clear boundaries, Roselund said. They need to “think outside the box but inside the circle.”
2. Widen the conventional concept of ROI
By posing unusually high benefits and risks, AI tools raise the stakes of ROI measurement, the CFOs and AI experts said.
“AI — for all its hype — still has to prove itself,” SymphonyAI CFO Wayne Kimber said in an interview. “It’s got to go from a science project with a data science engineer to something tangible for business.”
So far the payoff from AI is far from universal. In a survey of 1,741 respondents across 100 countries and 20 industries by MIT Sloan Management Review and BCG, 37% of executives said their companies derive value from AI, while 30% said their businesses do not.
“Aligning the achievement of individual and organizational value from AI remains a work in progress,” the researchers said, cautioning against burdening employees with the task of serving the machine.
When gauging ROI, CFOs need to recognize that valuable uses for AI tools will unexpectedly crop up as the software analyzes mountains of data, Aible CEO Arijit Sengupta said in an interview.
“People need to flip their thinking, their traditional approach to setting up ROI goals and expectations for projects that were based on a deterministic world,” he said. “Until you have actually trained the AI on the data, you have no way of knowing how to paint a nice, beautiful picture beforehand.”
CFOs derive the biggest payoff from AI tools by focusing first on increasing revenue, which is comparatively easy to measure, Sengupta said. Next, they should aim to cut costs and limit risks.
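Sengupta’s ordering can be made concrete with a back-of-the-envelope calculation. All figures below are hypothetical, chosen only to show how each layer of benefit changes the picture; the point is that revenue lift is counted first because it is easiest to measure, with harder-to-quantify items layered on conservatively:

```python
# Illustrative only: hypothetical first-year figures for an AI pilot.

def roi(gain: float, cost: float) -> float:
    """Simple ROI: net gain divided by investment."""
    return (gain - cost) / cost

investment = 250_000      # pilot cost: licenses, integration, training (assumed)
revenue_lift = 400_000    # easiest to measure, so counted first
cost_savings = 120_000    # automated routine work
risk_avoidance = 50_000   # hardest to quantify; estimate kept conservative

print(f"Revenue only:        {roi(revenue_lift, investment):.0%}")
print(f"Plus cost savings:   {roi(revenue_lift + cost_savings, investment):.0%}")
print(f"Plus risk avoidance: {roi(revenue_lift + cost_savings + risk_avoidance, investment):.0%}")
```

A CFO following this approach would defend the project on the revenue line alone, treating the other layers as upside rather than as load-bearing assumptions.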
3. Adjust to AI’s limitations
OpenAI is candid about the weaknesses of GPT-4, the newest version of its model.
“Despite its capabilities, GPT-4 has similar limitations as earlier GPT models,” OpenAI said in a report last month. “Most importantly, it still is not fully reliable (it ‘hallucinates’ facts and makes reasoning errors),” OpenAI said. It suggested “avoiding high-stakes uses” that lack additional context or review by human supervisors.
The creators of AI tools will gradually eliminate hallucination as they work with their clients on focused applications, Sengupta said, adding that Aible offers software that aims to spotlight errors by double-checking answers from the AI tools.
“The hallucination problem — use-case by use-case — will get solved,” he said. “But it has to be solved before enterprises can use it.”
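One generic double-checking pattern (a sketch only, not Aible’s actual product) is to ask for the same fact twice and flag disagreement for human review. Here `query_model` is a hypothetical stand-in for any chat-completion call; the canned answers simulate a model that hallucinates inconsistently:

```python
# Sketch of a generic double-checking pattern: re-ask the question and
# flag any answer that is not reproduced. `query_model` is a hypothetical
# placeholder, with canned responses simulating an inconsistent model.

def query_model(prompt: str, seed: int) -> str:
    # Placeholder: a real implementation would call an LLM API here.
    canned = {0: "Q2 revenue was $4.2M", 1: "Q2 revenue was $3.8M"}
    return canned[seed % 2]

def cross_check(prompt: str) -> tuple[str, bool]:
    """Return the first answer and whether a second pass agreed with it."""
    first = query_model(prompt, seed=0)
    second = query_model(prompt, seed=1)
    return first, first == second

answer, agreed = cross_check("What was Q2 revenue?")
if not agreed:
    print(f"Flag for human review: '{answer}' was not reproduced on re-ask")
```

Consistency checks like this catch only unstable hallucinations, not confidently repeated ones, which is why human review of flagged answers remains part of the loop.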
4. Start small
CFOs should avoid delay in sizing up AI tools given the opportunity to gain a competitive edge — and the risk of losing market share, the CFOs and AI experts said.
“Organizations need to move quickly to have a clear vision and a transformation program-led approach,” according to Mukund Kalmanker, global head of AI solutions at Wipro.
At the same time, CFOs can maximize returns and avert waste by initially focusing on proven AI tools on a limited scale, according to Kimber. They should “line up a company with sector knowledge,” he said, “not some stealth skunkworks.”
“We’re not saying to customers, ‘Buy our platform and look for the needle in a haystack,’” Kimber said. “We’re saying, ‘Hey, retail customer or fintech customer, we’ve already identified use cases that work.’”
CFOs who take an uninformed approach may push the technology into the same “hype cycle” that has disrupted the adoption of innovations for decades, the financial executives and AI experts said.
“The hype cycle in technology is well known,” even within just the AI subsector, Sengupta said, noting the C-suite mood swings over expert systems and automated machine learning.
“People get excited, then they over-invest, then they lose a lot of money, then they get unhappy, then they under-invest,” he said. “Now we’re seeing it with generative AI.”