Geaux Digital Media
AI Readiness Assessment

Score readiness before you build

A five-domain assessment built for Louisiana small businesses. We score your readiness across business value, workflow, knowledge/data, technical/security, and governance/monitoring. The output is a category, the weakest constraint, and a recommended path — not a generic AI plan.

Why a readiness gate

Most AI implementations fail before the model is even chosen

The pattern is consistent across analyst research and the implementations we have seen on the ground. AI is added to workflows that were never defined, validated against examples that do not exist, and deployed without an owner who can sign off when things drift.

The readiness gate is the front-end diagnostic that prevents this. Before we recommend a single tool, we measure whether the business has the foundation to safely run AI in production. Five domains. Each scored 0–5. Decision rules — not vibes — produce the category and the recommended path.

This is the same discipline that comes from safety-critical systems engineering. Define the entry criteria. Score against them. Do not let a workflow into the next phase if the criteria are not met.

The five domains

Each domain has signals you can verify in a 15-minute review

The domain names are intentionally generic and SMB-friendly. They are not borrowed from any third-party readiness framework. The signals are what we look at when we score.

01

Business Value Readiness

Is there a defined desired outcome, a real bottleneck, and a budget/timeline signal that makes a real engagement plausible?

Signals we look at
  • Outcome stated as a measurable sentence (time, dollars, or rate)
  • Bottleneck described in concrete operational terms
  • Budget and timeline are realistic, not aspirational

02

Workflow Readiness

Can the workflow be drawn on a whiteboard in 15 minutes? Are SOPs in place? Is there a named owner?

Signals we look at
  • Inputs, outputs, owners, and exception paths are documented
  • SOPs exist and are current
  • Two staff members describe the workflow the same way

03

Knowledge / Data Readiness

Single source of truth, data quality, where institutional knowledge lives, and whether known-good examples exist for validation.

Signals we look at
  • One clean source of truth for the data this workflow uses
  • 20+ known-good examples available to validate AI output against
  • Knowledge lives in a documented source, not one person's head

04

Technical / Security Readiness

Are the systems wired together (or wirable) and is data sensitivity manageable with appropriate controls?

Signals we look at
  • Required systems have integrations or accessible APIs
  • Data sensitivity is classified and handling rules are documented
  • Permissions and access controls are explicit

05

Governance / Monitoring Readiness

Is there a named approval owner? Is there any error tracking today? Is the org disciplined enough to keep validation gates in place after launch?

Signals we look at
  • Named human approval owner with a weekly cadence
  • Errors and exceptions are tracked, not just discovered when they hurt
  • Validation gates and acceptance criteria are written down

The four readiness categories

Decision rules, not vibes

The category is determined by deterministic rules from the domain scores. You can read the rules below and apply them yourself. There is no judgment call hidden in the scoring.

Not ready

Rule: any one domain scores ≤ 1
Recommended path: do not automate yet

At least one foundation domain is critically weak. AI will produce fluent, plausible, and quietly wrong output that erodes customer trust. Document the foundation work first.

Cleanup first

Rule: weakest domain ≤ 2
Recommended path: cleanup sprint first

Real interest, real pain, but the foundation is not yet stable enough to safely automate. A 1–2 week cleanup focused on the weakest domain comes before any AI work.

Pilot ready

Rule: total ≥ 65 AND weakest domain ≥ 3
Recommended path: single-workflow pilot

Foundation is good enough to run one bounded workflow through a constrained AI prototype. Validate against 20 known-good cases. Decide scale-or-stop after 30 days.

Build ready

Rule: total ≥ 80 AND weakest domain ≥ 4
Recommended path: multi-workflow build

All five domains are strong. Multiple workflows can be sequenced through the same validation discipline. The bottleneck is implementation capacity, not readiness.

Order of evaluation: the blocking rules come first. Any domain at ≤ 1 forces "not ready," and a weakest domain of 2 forces "cleanup first." Past those gates, the highest category whose thresholds are met applies, so "build ready" (total ≥ 80, weakest ≥ 4) is checked before "pilot ready" (total ≥ 65, weakest ≥ 3). If no rule matches, the default is "cleanup first": we do not advance a workflow into a pilot unless the criteria are explicitly met.
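Because the rules are deterministic, they can be sketched in a few lines of code. This is an illustration, not our actual scoring tool: the page describes 0–5 domain scores alongside a total out of 100, so the sketch assumes the total is the five scores summed and scaled by 4, and the function name and score keys are ours, not part of the assessment.

```python
def categorize(scores: dict[str, int]) -> str:
    """Map five 0-5 domain scores to a readiness category.

    Assumption (not stated on the page): the total out of 100
    is the sum of the five 0-5 scores scaled by 4.
    """
    weakest = min(scores.values())
    total = sum(scores.values()) * 4

    if weakest <= 1:
        return "not ready"       # a critically weak domain blocks automation
    if weakest <= 2:
        return "cleanup first"   # foundation work before any AI build
    if total >= 80 and weakest >= 4:
        return "build ready"     # checked before pilot: stricter thresholds
    if total >= 65 and weakest >= 3:
        return "pilot ready"
    return "cleanup first"       # default when no rule matches


scores = {
    "business_value": 4,
    "workflow": 3,
    "knowledge_data": 4,
    "technical_security": 3,
    "governance_monitoring": 3,
}
print(categorize(scores))  # total 68, weakest 3 -> pilot ready
```

Note that "build ready" must be tested before "pilot ready": any score set that clears the build thresholds also clears the pilot thresholds, so testing in the listed order would make "build ready" unreachable.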

What you receive after submitting

A diagnostic, not a sales pitch

  • Readiness score across the five domains, plus the total out of 100.
  • Weakest domain identified — the constraint that determines what to fix first.
  • Recommended path: do-not-automate-yet, cleanup sprint, single-workflow pilot, or multi-workflow build.
  • Readiness roadmap — concrete foundation work to do before (or alongside) any AI implementation.
  • Do-not-automate-yet list — what should stay human-approved at the current readiness level.
  • Validation gates — the acceptance criteria, approval owner, and measurement plan that come before scale.

Read the full explanation
The AI Readiness Gate: Why We Score Before We Build →

The 12-minute deep-dive on why readiness comes before tool selection, what each domain measures, and what happens after scoring.

Get started

Ready to score readiness for your business?

The intake takes 10–14 minutes. Brent reviews each submission personally before any recommendation goes out.