Your AI Business Case Excludes the Real Costs (Because You’re Counting Purchase Orders, Not Activities)
- Admin

- Feb 11
- 2 min read

Most AI business cases still feel “clean” because they’re built from what’s easy to count: licenses, a project team, maybe some IT time.
That’s not AI economics.
AI economics don’t live in purchase orders. They live in activities—the recurring work required to keep the tool producing usable, decision-grade outputs at scale.
The costs you don’t see (until you can’t ignore them)
If you want total cost of ownership (TCO), stop asking “what did we buy?” and start asking “what work did we create?”
Here’s what standard ROI models routinely miss:
- Data profiling, cleaning, labeling, stewardship: ongoing work to keep inputs stable
- Integration + process redesign: often larger than the model itself
- Change management + adoption friction: training, parallel runs, rework
- Human-in-the-loop + exception handling: backstops, escalation paths, QA sampling
- Model operations + monitoring: drift detection, retraining cycles, performance management
- Vendor economics surprises: usage-based billing that doesn't scale linearly
- Organizational drag: top talent pulled into firefighting instead of improvements
The meta-problem: traditional ROI counts what’s easiest to count. AI economics are activity-based.
The PACE test: if you can’t model the work, you can’t model the economics
A business case models the purchase. Reality demands you fund the operation.
If you can’t explain what the organization must do repeatedly to keep the AI producing trustworthy outputs, you’re not measuring ROI—you’re hoping the environment stays friendly.
So here’s the discipline I want CFOs (and boards) to adopt in 2026:
Don’t ask: What did the tool cost? Ask: What activities does this create, shift, or sustain?
That’s where the money is.
A practical template: an AI Driver-Based Cost Map
Require a one-page activity map in every proposal:
1) Activities created: monitoring, QA sampling, exception handling, escalation paths, governance rituals
2) Activities shifted: work moved from frontline teams to support functions (data stewardship, controls, analytics engineering)
3) Activities sustained: data quality routines, drift monitoring, retraining cadence, auditability, security hardening
Then attach drivers:
- exceptions per 100 cases
- minutes per exception
- drift events per month
- retrains per quarter
- audit sample failure rates
That converts ROI from a story into a model.
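As a minimal sketch of what "a model" means here: the drivers above can be multiplied out into a monthly operating cost. Every name, volume, and rate below is an illustrative assumption, not a benchmark; plug in your own drivers.

```python
# Hypothetical driver-based cost model for one AI use case.
# All volumes and rates are illustrative assumptions, not benchmarks.

CASES_PER_MONTH = 10_000
LOADED_COST_PER_MINUTE = 1.20  # fully loaded labor cost, USD/minute (assumed)

drivers = {
    "exceptions_per_100_cases": 4.0,
    "minutes_per_exception": 12.0,
    "drift_events_per_month": 2.0,
    "minutes_per_drift_event": 480.0,   # triage, root-cause, retrain prep
    "retrains_per_quarter": 1.0,
    "minutes_per_retrain": 2400.0,      # one retraining cycle end to end
    "audit_samples_per_month": 200.0,
    "minutes_per_audit_sample": 3.0,
}

def monthly_activity_minutes(d: dict) -> float:
    """Sum the recurring activity minutes implied by the drivers."""
    exceptions = CASES_PER_MONTH / 100 * d["exceptions_per_100_cases"]
    return (
        exceptions * d["minutes_per_exception"]
        + d["drift_events_per_month"] * d["minutes_per_drift_event"]
        + d["retrains_per_quarter"] / 3 * d["minutes_per_retrain"]  # per month
        + d["audit_samples_per_month"] * d["minutes_per_audit_sample"]
    )

minutes = monthly_activity_minutes(drivers)
print(f"{minutes:,.0f} min/month ≈ ${minutes * LOADED_COST_PER_MINUTE:,.0f}/month")
```

The point of the exercise is less the dollar figure than the sensitivity: rerun it with exceptions per 100 cases doubled and you see immediately which activities dominate cost as the tool scales.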
What boards should ask before approving the next AI budget
Three questions that reveal whether the economics are real:
1) What work stops, specifically, and who will enforce that it stops?
2) What activities increase with scale (especially exception handling and oversight)?
3) What early indicators tell us costs are drifting upward before we expand?