Boards Approve AI on Unstated Assumptions (And Accountability Turns Into Politics)
When AI investments underperform, the post-mortem usually devolves into debate:
“The business didn’t adopt it.”
“The data wasn’t ready.”
“IT didn’t integrate it.”
“The vendor overpromised.”
All of that may be true.
But the deeper problem is governance: the investment was approved on assumptions that were never made explicit—so nobody can own them, monitor them, or trigger decisions when they break.
When assumptions remain implicit, accountability becomes politics. Everyone can claim success. No one can explain failure.
The eight assumptions hiding inside most board decks
Here are the recurring assumptions embedded in approvals everywhere:
The use case reflects true unit economics (not “hours saved” with no monetization mechanism)
Data is fit for purpose—and will stay that way (drift won’t erase value quietly)
Adoption will happen (trust, workflow redesign, and no permanent parallel runs)
Operating costs won’t quietly explode (monitoring, retraining, exception handling, security hardening)
Risk is bounded and governable (guardrails, oversight, traceability, clear accountability)
Vendors won’t become the strategy (pricing + roadmap risk won’t trap you)
Value can be measured credibly (outputs vs outcomes, attribution, avoiding optics)
Benefits are stable and linear (they won’t decay without sustained funding)
If even two of these fail, the business case changes materially.
So governance isn’t “did we deploy?” Governance is “did the assumptions hold, and did we act when they didn’t?”
The fix: require an Assumption Register (with triggers)
If boards want real accountability, require an Assumption Register with:
10–15 assumptions the investment depends on
leading indicators for each
explicit triggers for: scale, pause, redesign, or kill
This is how you turn AI from hope-based budgets into disciplined capital allocation.
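As a rough illustration, here is a minimal sketch of what one register entry might look like as structured data. The field names, owners, indicators, thresholds, and actions are placeholders for the sake of the example, not a prescribed schema.

```python
from dataclasses import dataclass, field
from enum import Enum


class Action(Enum):
    SCALE = "scale"
    PAUSE = "pause"
    REDESIGN = "redesign"
    KILL = "kill"


@dataclass
class Trigger:
    """A threshold on a leading indicator that forces a pre-agreed decision."""
    indicator: str    # e.g. "override_rate"
    threshold: float  # breach level agreed at approval time
    action: Action    # what the board has pre-committed to doing


@dataclass
class Assumption:
    """One explicit assumption the investment depends on."""
    statement: str                 # what was assumed at approval
    owner: str                     # an accountable person, not a committee
    leading_indicators: list[str]  # what gets monitored between reviews
    triggers: list[Trigger] = field(default_factory=list)


# Illustrative entry; the statement, owner, and thresholds are placeholders.
adoption = Assumption(
    statement="Adoption will happen without permanent parallel runs",
    owner="COO",
    leading_indicators=["parallel_run_rate", "override_rate"],
    triggers=[
        Trigger("parallel_run_rate", threshold=0.30, action=Action.REDESIGN),
        Trigger("override_rate", threshold=0.50, action=Action.PAUSE),
    ],
)

register: list[Assumption] = [adoption]  # a real register holds 10-15 entries
```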
What leading indicators look like in practice
Adoption assumption → parallel-run rate, override rate, process compliance
Data stability assumption → drift signals, missingness spikes, validity rates
Cost containment assumption → exceptions per 100 cases, minutes per exception, QA failure rate
Risk boundedness assumption → traceability coverage, escalation SLA, control attestations
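Building on the register sketch above, the review step can be made mechanical rather than political: compare the latest indicator readings against each trigger and surface the pre-agreed action. The readings and numbers below are illustrative only.

```python
def evaluate(register: list[Assumption],
             readings: dict[str, float]) -> list[tuple[str, Action]]:
    """Return the pre-agreed action for every breached trigger."""
    breached = []
    for assumption in register:
        for trigger in assumption.triggers:
            value = readings.get(trigger.indicator)
            if value is not None and value >= trigger.threshold:
                breached.append((assumption.statement, trigger.action))
    return breached


# Illustrative monthly readings; values are placeholders.
readings = {"parallel_run_rate": 0.45, "override_rate": 0.20}
for statement, action in evaluate(register, readings):
    print(f"Trigger breached for '{statement}': recommended action = {action.value}")
```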