AUTOMOAT — FILED ON THE RECORD
FRAMEWORK

The AI Readiness Audit.

8 dimensions, scored 1–5. The instrument that produces the readiness scorecard.

The same framework powers the self-assessment calculator and the formal Sweep deliverable, surfacing deal-stoppers before an engagement starts.

THE DIAGRAM
#    DIMENSION             WHAT IT MEASURES
01   Business stage        Pre-revenue vs. established vs. institutional — fit varies sharply.
02   AI spend              Existing license budget signals appetite and what's already been tried.
03   Measured lift         What you can prove the existing stack has produced.
04   Operational pain      Whether there's a specific bottleneck or a vague growth desire.
05   Executive sponsor     CEO/COO sponsorship vs. delegated to IT — kills or makes the install.
06   Instrumentation       Can you measure the workflow today? If not, we can't commit to it.
07   Investment appetite   Capacity to put real capital into the operating layer this year.
08   Timeline              Near-term vs. someday. The math doesn't favor someday.
HOW TO READ IT

Each dimension is scored 1–5 during the Sweep based on interviews, document review, and operational observation. The aggregate score predicts engagement fit; the per-dimension scores predict which specific risks the Install plan needs to design around. A high aggregate with low scores in two dimensions is almost always a more useful diagnostic than a flat-average mid-score.

HOW WE USE IT IN ENGAGEMENTS

The Sweep produces a one-page scorecard per dimension with the score, the specific failure mode driving it, and the recommended Install design adjustment. The same framework, in compressed form, powers the public self-assessment calculator. The dimensions are deliberately mechanical — they surface the deal-stoppers (no exec sponsor, no instrumentation, no investment appetite) that kill engagements that otherwise look good on paper.

WORKED EXAMPLE

Reading a 3.4-average scorecard.

A consulting firm we audited scored a 3.4 average across the 8 dimensions. On the surface that's a probable fit. But the per-dimension breakdown told a different story: 4–5 on business stage, AI spend, operational pain, and investment appetite — but 1 on instrumentation (they could not measure the workflows we'd target) and 2 on executive sponsor (delegated to IT, not the managing partner).

The diagnosis: averaging produces a probable-fit signal that the per-dimension breakdown contradicts. We declined the engagement until the managing partner sponsored it directly and the firm could produce baseline KPIs on the workflows in scope. Three months later they came back with both. We started the Sweep then.
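The averaging failure in this example can be sketched in a few lines. This is a minimal illustration, not the actual audit tooling: the dimension names follow the framework, but the specific scores, the `read_scorecard` helper, and the deal-stopper threshold of 3 are hypothetical values chosen to reproduce the roughly 3.4 average described above.

```python
# Hypothetical per-dimension scores (1-5) for the worked example.
# Exact values are illustrative; only the shape matters.
SCORES = {
    "business_stage": 5,
    "ai_spend": 4,
    "measured_lift": 3,
    "operational_pain": 5,
    "executive_sponsor": 2,   # delegated to IT, not the managing partner
    "instrumentation": 1,     # no baseline KPIs on the target workflows
    "investment_appetite": 4,
    "timeline": 3,
}

DEAL_STOPPER_THRESHOLD = 3    # assumed cutoff: scores below this block a fit

def read_scorecard(scores, threshold=DEAL_STOPPER_THRESHOLD):
    """Return (aggregate, deal_stoppers) -- the average hides what the flags show."""
    aggregate = sum(scores.values()) / len(scores)
    deal_stoppers = sorted(d for d, s in scores.items() if s < threshold)
    return aggregate, deal_stoppers

aggregate, stoppers = read_scorecard(SCORES)
print(f"aggregate: {aggregate:.1f}")   # prints "aggregate: 3.4" -- looks like a fit
print(f"deal-stoppers: {stoppers}")    # but two dimensions block the engagement
```

Reading the aggregate alone produces a "probable fit"; reading the flags surfaces the two dimensions that made us decline.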

MORAL: AGGREGATE SCORES LIE. PER-DIMENSION TELLS THE TRUTH.

Book an Ops Call.

30 minutes. Operator-to-operator. No deck. No follow-up nurture sequence designed to wear you down.

Book an Ops Call →