Score readiness before you build
A five-domain assessment built for Louisiana small businesses. We score your readiness across business value, workflow, knowledge/data, technical/security, and governance/monitoring. The output is a category, the weakest constraint, and a recommended path, not a generic AI plan.
Most AI implementations fail before the model is even chosen
The pattern is consistent across analyst research and the implementations we have seen on the ground. AI is added to workflows that were never defined, validated against examples that do not exist, and deployed without an owner who can sign off when things drift.
The readiness gate is the front-end diagnostic that prevents this. Before we recommend a single tool, we measure whether the business has the foundation to safely run AI in production. Five domains. Each scored 0–5. Decision rules — not vibes — produce the category and the recommended path.
This is the same discipline that comes from safety-critical systems engineering. Define the entry criteria. Score against them. Do not let a workflow into the next phase if the criteria are not met.
Each domain has signals you can verify in a 15-minute review
The domain names are intentionally generic and SMB-friendly. They are not borrowed from any third-party readiness framework. The signals are what we look at when we score.
Business Value Readiness
Is there a clearly defined outcome, a real bottleneck, and a budget and timeline signal that make a genuine engagement plausible?
- ✓ Outcome stated as a measurable sentence (time, dollars, or rate)
- ✓ Bottleneck described in concrete operational terms
- ✓ Budget and timeline are realistic, not aspirational
Workflow Readiness
Can the workflow be drawn on a whiteboard in 15 minutes? Are SOPs in place? Is there a named owner?
- ✓ Inputs, outputs, owners, and exception paths are documented
- ✓ SOPs exist and are current
- ✓ Two staff members describe the workflow the same way
Knowledge / Data Readiness
Is there a single source of truth? Is the data clean? Does institutional knowledge live somewhere documented, and do known-good examples exist for validation?
- ✓ One clean source of truth for the data this workflow uses
- ✓ 20+ known-good examples available to validate AI output against
- ✓ Knowledge lives in a documented source, not one person's head
Technical / Security Readiness
Are the systems wired together (or wirable) and is data sensitivity manageable with appropriate controls?
- ✓ Required systems have integrations or accessible APIs
- ✓ Data sensitivity is classified and handling rules are documented
- ✓ Permissions and access controls are explicit
Governance / Monitoring Readiness
Is there a named approval owner? Is there any error tracking today? Is the org disciplined enough to keep validation gates in place after launch?
- ✓ Named human approval owner with a weekly cadence
- ✓ Errors and exceptions are tracked, not just discovered when they hurt
- ✓ Validation gates and acceptance criteria are written down
Decision rules, not vibes
The category is determined by deterministic rules from the domain scores. You can read the rules below and apply them yourself. There is no judgment call hidden in the scoring.
Not ready
At least one foundation domain is critically weak. AI will produce fluent, plausible, and quietly wrong output that erodes customer trust. Document the foundation work first.
Cleanup first
Real interest, real pain, but the foundation is not yet stable enough to safely automate. A 1–2 week cleanup focused on the weakest domain comes before any AI work.
Pilot ready
Foundation is good enough to run one bounded workflow through a constrained AI prototype. Validate against 20 known-good cases. Decide scale-or-stop after 30 days.
Build ready
All five domains are strong. Multiple workflows can be sequenced through the same validation discipline. The bottleneck is implementation capacity, not readiness.
Order of evaluation: rules are evaluated top to bottom, and the first match wins. If no rule matches, even when readiness looks reasonable, the default is “cleanup first”: we do not advance a workflow into a pilot unless the criteria are explicitly met.
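For the mechanically minded, here is a minimal sketch of what first-match-wins evaluation could look like in code. The cutoff values (below 2 is critically weak, 4 and up is strong) and the times-four scaling from a 25-point raw score to the 100-point total are illustrative assumptions, not our published thresholds; the structure of the rules is the point.

```python
# Illustrative sketch of first-match-wins category rules.
# ASSUMPTIONS: the cutoffs (<2 critically weak, >=4 strong) and the
# x4 scaling from a 25-point raw score to a 100-point total are
# placeholders for illustration, not the exact published thresholds.

DOMAINS = [
    "business_value",
    "workflow",
    "knowledge_data",
    "technical_security",
    "governance_monitoring",
]

def categorize(scores: dict) -> tuple:
    """Map five 0-5 domain scores to (category, weakest_domain, total_of_100)."""
    weakest = min(DOMAINS, key=lambda d: scores[d])
    total = sum(scores[d] for d in DOMAINS) * 4  # 5 domains x 5 pts, scaled to 100

    # Rules are evaluated top to bottom; the first match wins.
    if any(scores[d] < 2 for d in DOMAINS):
        return "not ready", weakest, total       # a foundation domain is critically weak
    if any(scores[d] < 3 for d in DOMAINS):
        return "cleanup first", weakest, total   # real interest, shaky foundation
    if any(scores[d] < 4 for d in DOMAINS):
        return "pilot ready", weakest, total     # good enough for one bounded workflow
    if all(scores[d] >= 4 for d in DOMAINS):
        return "build ready", weakest, total     # all five domains are strong
    return "cleanup first", weakest, total       # stated default when no rule matches

# Example: one weak knowledge/data domain holds the category at "cleanup first".
print(categorize({
    "business_value": 4, "workflow": 3, "knowledge_data": 2,
    "technical_security": 4, "governance_monitoring": 3,
}))  # -> ('cleanup first', 'knowledge_data', 64)
```

Note how the ordering does the work: a single weak domain caps the category no matter how strong the other four are, which is exactly why the weakest constraint, not the average, drives the recommended path.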
A diagnostic, not a sales pitch
- → Readiness score across the five domains, plus the total out of 100.
- → Weakest domain identified — the constraint that determines what to fix first.
- → Recommended path: do-not-automate-yet, cleanup sprint, single-workflow pilot, or multi-workflow build.
- → Readiness roadmap — concrete foundation work to do before (or alongside) any AI implementation.
- → Do-not-automate-yet list — what should stay human-approved at the current readiness level.
- → Validation gates — the acceptance criteria, approval owner, and measurement plan that come before scale.
The 12-minute deep-dive on why readiness comes before tool selection, what each domain measures, and what happens after scoring.
Ready to score readiness for your business?
The intake takes 10–14 minutes. Brent reviews each submission personally before any recommendation goes out.