The AI Investment Litmus Test: 4 Questions to Ask Before Spending a Dollar

Updated: Oct 14

A senior executive recently confessed their biggest fear. It wasn't a market downturn or a new competitor. It was their upcoming board meeting, where they’d inevitably be asked, "So, what is our AI strategy?" Their company had allocated millions for "AI transformation," but the fund sat largely untouched. Why? Because every proposal that crossed their desk felt like a solution in search of a problem—expensive, complex, and disconnected from the P&L.


This scenario is playing out in boardrooms everywhere. The pressure to "do something with AI" is immense, leading to what some have termed "AI washing," where companies relabel old projects with a trendy acronym. As studies from firms like McKinsey have shown, a significant percentage of AI projects fail to deliver on their promised ROI, not because the technology is flawed, but because the strategy is absent.


To cut through the hype and avoid costly missteps, leaders don't need to become data scientists. They need a simple, non-technical framework for evaluation. Before you approve any AI initiative, subject it to this four-part litmus test.


Question 1: "Are we solving a speed, scale, or scarcity problem?"

The most common mistake is to start with the technology. Instead, start by defining the business case in one of these three categories. This forces clarity on why you are pursuing the project in the first place.


Speed

These projects aim to dramatically accelerate existing processes. The goal isn't to do something new, but to do something necessary, faster. For example, a financial services firm might use an AI model to reduce its loan approval process from three weeks to three minutes. The outcome is the same (a decision), but the speed creates a massive competitive advantage.


Scale

These projects are designed to break through human limitations on volume. They handle tasks that are too massive for any team to manage effectively. For example, a global retailer could deploy an AI-powered chatbot to handle 2 million customer service inquiries a month, a scale impossible to achieve with human agents alone, while freeing those agents up for the most complex cases.


Scarcity

These projects address a talent or resource bottleneck. They use AI to perform a specialized skill that is rare, expensive, or difficult to hire for. For example, a pharmaceutical company could use an AI platform to analyze molecular structures in drug discovery, augmenting the work of a small team of highly sought-after PhDs and exploring more possibilities than they ever could alone.


If a project can't be clearly defined as solving for speed, scale, or scarcity, it’s likely a vanity project, not a strategic investment.


Question 2: "Where does the human add value?"

The narrative of "AI replacing jobs" is far less relevant inside an organization than the reality of "AI changing jobs." A successful AI initiative doesn't just plug in technology; it strategically redesigns the workflow around a human-machine partnership. Before signing off, demand a clear answer to where human oversight, judgment, and expertise will be applied.

This is the principle of "human-in-the-loop" design. The goal isn't full automation; it's elite augmentation.

  • Vague Plan: "AI will generate the quarterly market analysis report."

  • Strategic Plan: "AI will analyze raw sales data and competitor announcements to generate a first draft of the quarterly market analysis. Our senior strategist will then spend her time on the final 20%, interpreting the data, adding strategic insights, and crafting the executive narrative."


The second plan recognizes that the human’s value isn't in computation, but in interpretation and judgment. Insisting on this clarity prevents the deployment of brittle, black-box systems and ensures you are elevating your talent, not attempting to replace it.


Question 3: "How will we measure success?"

Peter Drucker’s adage, "What gets measured gets managed," should govern any AI investment. Too many projects are greenlit on vague promises of "improving efficiency." A CFO-friendly project has crystal-clear, quantifiable KPIs. Force your team to articulate the "before" and "after" in a single sentence.

  • Vague Goal: "We will use AI to improve our marketing efforts."

  • Measurable Goal: "This project will reduce our average customer acquisition cost by 15% within two quarters by using AI to optimize ad spend in real-time."


This exercise does two things. First, it ensures that baseline data is captured before the project begins—a step that is surprisingly often skipped. Without a "before," you can never prove the "after." Second, it moves the conversation past vanity metrics to durable gains like productivity and cost savings, giving the board an unambiguous benchmark for tracking ROI.


Question 4: "What is our ethical failsafe?"

An AI model is only as good as the data it's trained on. Without an explicit check for fairness and bias, even well-intentioned projects can create significant reputational and legal risks. This question ensures that ethical guardrails are part of the initial design, not an afterthought.


Ask your team: "Where is human oversight required to ensure fairness?" For example, an AI tool might be used to screen job applications, but the final shortlist must be reviewed by a human hiring manager to mitigate the risk of algorithmic bias against certain demographics. Mandating this check ensures that AI is used as a tool to assist, not replace, human judgment in sensitive areas.


The Main Takeaway

Don't buy AI; buy a business outcome. By asking these four questions—focusing on the Problem (Speed, Scale, Scarcity), the Process (Human Value), the Payoff (Measurement), and the Principle (Ethics)—leaders can transform the vague, anxiety-inducing pressure to "invest in AI" into a disciplined, strategic process focused on creating tangible value.


Copyright © 2025 by Arete Coach LLC. All rights reserved.
