The wrong starting point
The most common first question we hear from a Louisiana small business owner is "which AI tool should we use?" Sometimes it is "should we get a chatbot?" Sometimes it is "should we add an AI agent?" Sometimes it is a screenshot of a vendor demo a friend forwarded.
That is the wrong starting point. The tool is not the first constraint. The foundation is the first constraint.
The AI projects we have seen succeed in small businesses share a profile. The workflow was already documented before the AI conversation began. There was a named human who owned approval of the output. There was a way to compare AI output against twenty known-good cases. The data the workflow needed was in one place, not five. And the team had a written rule for when to pause the AI and review.
The projects that quietly failed shared the opposite profile. The AI was the project. The workflow was an afterthought. Ownership was ambiguous because the tool was supposed to be the owner. There was nothing to validate AI output against, and nobody whose job it was to validate it. The data the AI needed was scattered across an inbox, a CRM, two spreadsheets, and one senior employee's head.
The pattern is so consistent that we built a diagnostic to catch it before the engagement starts. We call it the AI Readiness Gate.
What the AI Readiness Gate is
The AI Readiness Gate is a five-domain diagnostic we run before any implementation work. It is not a generic "are you ready for AI" survey. It is not a vendor or model comparison. It is a decision gate with four possible outcomes that determine what happens next:
- Do not automate yet
- Cleanup first
- Pilot ready
- Build ready
The gate is deterministic. The same answers always produce the same outcome. There is no judgment call hidden in the scoring. We can show you the rules and you can apply them yourself.
It is also fast. The intake takes 10–14 minutes. We score five domains, identify the weakest one, and recommend a path. If the answer is "do not automate yet," we tell you that. If the answer is "you are ready to pilot one workflow," we tell you that and define the workflow boundary and the validation rule before any tool is selected.
The gate exists for one reason: applying AI to an undefined workflow with no validation set and no approval owner produces fluent, plausible, and quietly wrong output that erodes customer trust over six months. Validation is not optional. It is the only thing that prevents the failure mode.
The five domains
The domains are intentionally generic and SMB-friendly. They are not borrowed from any third-party readiness framework — they reflect what we have learned actually predicts success in Louisiana small businesses with fewer than fifty employees.
1. Business Value Readiness
Is there a defined desired outcome? A real bottleneck, stated in operational terms? A budget and a timeline that are realistic, not aspirational?
The signal is simple. If the desired outcome is "transform our business with AI," that is not a measurable outcome. If it is "reply to inbound leads within thirty minutes during business hours," that is. If the bottleneck is "things are slow," there is nothing to automate. If it is "quote turnaround averages four days because the office manager has to gather data from three systems," there is.
Business Value Readiness is low when the project is being driven by curiosity or vendor pressure rather than a workflow that is genuinely costing the business time or revenue today.
2. Workflow Readiness
Can the workflow be drawn on a whiteboard in fifteen minutes? Are the inputs, outputs, owners, and exception paths documented? If two staff members described the workflow independently, would they agree on who hands off to whom?
A workflow that is not yet defined cannot be automated. AI applied to undefined work just makes the undefined parts run faster, in a more polished tone, with a higher monthly subscription cost. The customer experience does not improve.
Workflow Readiness measures whether the process exists on paper, not just in someone's head.
3. Knowledge / Data Readiness
Is there a single source of truth for the data this workflow uses? What is the data quality? Where does the institutional knowledge live? And critically: are there twenty known-good examples that we can use to validate AI output against before it goes live?
Validation discipline only works when there is something to compare against. "It looks right" is not a validation set. Twenty real, hand-vetted examples of acceptable output is. If those examples do not exist today, capturing them becomes the first task of the engagement.
This is the domain where most small businesses underestimate themselves. They have the knowledge — it just lives in inboxes, chat logs, and the head of the senior staff member who has been there longest. Knowledge / Data Readiness measures whether that knowledge is reachable in a form an AI system can actually use.
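Validation discipline is concrete enough to sketch in code. The check below gates candidate AI output against the hand-vetted set. It is illustrative only: the exact-match comparison and the 95% threshold are assumptions here, and real acceptance criteria are defined per workflow.

```python
def passes_validation(ai_outputs: list[str],
                      known_good: list[str],
                      threshold: float = 0.95) -> bool:
    """Gate AI output against the hand-vetted known-good cases.

    Exact string match is a stand-in; a real engagement defines
    acceptance per workflow (required fields, numbers, tone).
    """
    if len(known_good) < 20:
        raise ValueError("capture twenty known-good cases first")
    matches = sum(out.strip().lower() == ref.strip().lower()
                  for out, ref in zip(ai_outputs, known_good))
    return matches / len(known_good) >= threshold
```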
4. Technical / Security Readiness
Are the systems wired together, or wirable? Is data sensitivity classified and is there a documented handling rule for each tier? Are permissions explicit?
This is not about whether you have impressive software. It is about whether the AI can reach the data the workflow needs without manual export-and-paste, and whether sensitive data is contained behind appropriate controls. A regulated business with PHI or PCI data has a higher bar here than a residential service business with public-style data — and that is fine. The bar matches the data.
Technical / Security Readiness is the domain that determines whether automation is *operationally* possible at the current state of your stack, not just *theoretically* possible.
5. Governance / Monitoring Readiness
Is there a named human who owns approval? Is there a weekly review cadence? Is there *any* error tracking today, even if it is just a shared spreadsheet?
This is the domain that separates implementations that survive the first quarter from implementations that quietly drift. AI output drifts over time. Customer-facing tone shifts. New edge cases emerge. The validation gates only stay in place if someone is responsible for keeping them there.
Governance / Monitoring Readiness measures whether the discipline exists today to keep AI honest after launch. Without it, the rollout produces value for thirty days and creates new failure modes for the next twelve months.
The weakest-domain rule
The five domain scores are not averaged. The weakest domain determines the recommended path.
This is the most important rule in the gate, and the one most owners push back on initially. The intuition is that high scores in some domains should compensate for low scores in others. They cannot.
- A strong use case cannot compensate for bad data. AI output trained on or grounded in messy data produces messy results that look polished. The polish is what makes them dangerous.
- Good technical tools cannot compensate for poor ownership. If nobody is responsible for reviewing output, the validation gate evaporates within a quarter regardless of how nice the tooling is.
- Strong demand cannot compensate for no validation examples. Without a known-good set to compare against, "good output" becomes "output that looks right to whoever is reviewing it that day."
- A documented workflow cannot compensate for regulated data with no handling rules. The compliance failure mode is binary, not gradient.
So the gate looks at the lowest score across the five domains. If any single domain is critically low, the whole engagement pauses for foundation work. No amount of strength elsewhere overrides it.
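In code terms, the rule is a minimum, not a mean. A two-line sketch (the domain names are the gate's; the numeric scale is an assumption for illustration):

```python
scores = {"business_value": 5, "workflow": 4, "knowledge_data": 1,
          "technical_security": 4, "governance_monitoring": 4}

# The gate keys off the minimum score, never the average. The average
# here is a healthy 3.6, but knowledge_data at 1 halts the engagement.
weakest = min(scores, key=scores.get)   # -> "knowledge_data"
```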
The four readiness outcomes
The combination of total score and weakest domain produces one of four outcomes.
| Outcome | Rule | What happens next |
|---|---|---|
| Not Ready | Any one domain scores ≤ 1 | Do not automate yet. Document the foundation work needed before any AI is added. |
| Cleanup First | Weakest domain ≤ 2 | One- to two-week cleanup sprint focused on the weakest domain. Get it to 3+ before AI work begins. |
| Pilot Ready | Total ≥ 65 AND weakest ≥ 3 | Two-week pilot on a single bounded workflow, validated against twenty known-good cases, with a 30-day measurement plan. |
| Build Ready | Total ≥ 80 AND weakest ≥ 4 | Multiple workflows can be sequenced through the same validation discipline. The bottleneck is implementation capacity, not readiness. |
The two exclusion rules at the top are checked first. Of the remaining two, the stricter "build ready" bar is checked before "pilot ready," so a score that clears both lands on "build ready." If no rule matches (for example, every domain is at 3 but the total falls short of 65), the default is "cleanup first": we do not advance a workflow into a pilot unless the criteria are explicitly met.
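Because the gate is deterministic, the whole decision fits in a few lines. A minimal sketch in Python, using the thresholds from the table; the five-key dictionary and the 0–5 per-domain scale are assumptions for illustration, and the total out of 100 comes from the intake scoring:

```python
def gate_outcome(domain_scores: dict[str, int], total: int) -> str:
    """Deterministic gate: the same answers always produce the same
    outcome. `domain_scores` holds the five domain ratings; `total`
    is the overall readiness score out of 100."""
    weakest = min(domain_scores.values())   # weakest-domain rule

    if weakest <= 1:                        # exclusion rule: Not Ready
        return "do not automate yet"
    if weakest <= 2:                        # exclusion rule: Cleanup First
        return "cleanup first"
    if total >= 80 and weakest >= 4:        # stricter bar checked first
        return "build ready"
    if total >= 65 and weakest >= 3:
        return "pilot ready"
    return "cleanup first"                  # default when nothing matches
```

Run it twice with the same scores and you get the same path. That is the point of a deterministic gate.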
This is the same discipline that comes from safety-critical systems engineering. Define the entry criteria. Score against them. Do not let a workflow into the next phase if the criteria are not met. The discipline is unusual in the AI consulting market. It is the reason our implementations produce measurable results instead of impressive demos that get abandoned in a quarter.
What you receive after scoring
The output of the gate is not a generic AI plan. It is a diagnostic document that tells you, specifically:
- Readiness score across the five domains, plus the total out of 100.
- Weakest domain identified — the constraint that determines what to fix first.
- Recommended path: do-not-automate-yet, cleanup sprint, single-workflow pilot, or multi-workflow build.
- Readiness roadmap — concrete foundation work to do before (or alongside) any AI implementation, ordered by impact.
- Do-not-automate-yet list — what must stay human-approved at the current readiness level. Pricing, refunds, legal/medical/financial claims, and anything customer-facing during the validation window are always on this list regardless of score.
- Validation gates — written acceptance criteria for the workflow you are about to automate. The bar is twenty known-good cases at a defined accuracy threshold.
- Human approval rules — who signs off on what, and at what cadence. After thirty days of clean output, specific bounded message types can move from "human approves" to "auto-send." Not before. (A sketch of this promotion rule follows this list.)
- 30-day metric — one quantitative success measure (median response time, conversion rate, error rate, hours saved) chosen before launch. No metric, no scale.
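The promotion rule in the approval bullet is simple enough to write down. A minimal sketch, assuming the only state tracked is the date the current clean streak started; the thirty-day window is the rule above, and everything else is illustrative:

```python
from datetime import date, timedelta

def may_auto_send(clean_since: date | None, today: date) -> bool:
    """A bounded message type moves from 'human approves' to
    'auto-send' only after thirty days of clean output. `clean_since`
    is the launch date, reset to today whenever an error is logged."""
    if clean_since is None:
        return False          # not yet reviewed: human approves everything
    return today - clean_since >= timedelta(days=30)
```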
That document is the input to any honest implementation conversation. Without it, no tool selection is going to matter.
Why this matters for Louisiana small businesses
The readiness gate is built around the operational reality of small businesses with lean teams and owner-dependent knowledge — which is most of the businesses we work with across Greater New Orleans, the Northshore, Baton Rouge, Lafayette, and the rest of the state.
In a typical Louisiana SMB:
- Teams are lean. The office manager wears six hats. The owner is in every meaningful conversation. There is no operations-engineering function whose job it is to absorb the cost of a failed AI rollout.
- Knowledge is owner-dependent. Pricing rules, customer history, escalation logic, and seasonal exceptions live in the head of the senior employee who has been there for a decade. When that person is on a service call, the workflow stalls.
- Tools are messy. A typical stack has a CRM that is half-populated, an inbox-as-database, QuickBooks, a scheduling tool, and a vertical-specific app from a trade conference. They do not talk to each other. Data is reconciled manually each Monday.
- Handoffs are undocumented. Steps that are obvious to long-tenured staff are invisible to anyone new. The SOPs that do exist sit in documents nobody opens.
- Customer communication failures are expensive. Bad replies become refunds. Refunds become reviews. A bad review for a service business in a Louisiana market with strong word-of-mouth can outweigh a quarter of marketing spend.
- Practical automation matters more than AI theater. A demo that wows a conference room does not survive Monday morning when the schedule is full and the office manager is on the phone.
The readiness gate is built for that environment. Most businesses we score come back as cleanup first or pilot ready — and the cleanup work is not glamorous. It is consolidating to one source of truth, capturing twenty known-good examples, naming an approval owner, and writing down the SOP that has been tribal knowledge for years. That is the unglamorous work that makes AI implementation actually pay off when it lands.
When the foundation is in place, AI becomes a multiplier. When it is not, AI becomes a faster way to drop balls.
What to do next
If you are evaluating AI for your business, do not start with tool selection. Score readiness first. Then design the workflow. Then choose the constrained tool. Then validate. Then measure. Then scale only after the metric improves.
That sequence is the difference between an AI rollout that survives past the first quarter and one that quietly gets abandoned. We have seen it on both sides enough times that the readiness gate is now the front-end diagnostic of every engagement.
Where Geaux Digital Media fits
We run the gate as part of every engagement. It takes 10–14 minutes of intake plus a 20-minute follow-up call to walk through the workflow you nominated. The output is a written diagnostic with the score, the weakest domain, the recommended path, the readiness roadmap, the do-not-automate-yet list, and the validation rules — for one workflow, in your language, scoped to your business.
If readiness is high, we run the implementation process — define, constrain, validate, measure, decide. If readiness is not high, we tell you that and walk through the cleanup work first. Either answer saves you the cost of a tool you would have abandoned in a quarter.
Browse practical use cases to see what bounded AI workflows look like in operation, or read more insights on workflow-first AI implementation.
The most useful thing we can do in a first conversation is honestly tell you whether your workflow is ready to automate. Often the answer is "not yet, and here is what to fix first." That is the answer the gate is built to produce.
Ready to score readiness for your business?
Two ways to start:
- [Request an AI Readiness + Workflow Review](/ai-workflow-review) — the four-step intake plus the 20-minute follow-up call. The output is your readiness diagnostic for one workflow.
- [See the AI Readiness Assessment](/ai-readiness-assessment) — the page that documents the five domains, the decision rules, and the four outcomes in detail.
Brent reviews each submission personally before any recommendation goes out. There is no auto-generated AI plan, no software pitch, and no "transform your business" deck.
About the author
Brent Dorsey is the founder of Geaux Digital Media and a Senior Systems & Software Engineer with 20+ years across Marine Corps technical systems and DO-178C avionics software for Boeing, GE Aviation, BAE Systems, and RTX. Geaux Digital Media helps Louisiana small businesses implement AI workflows that are defined, validated, and measured before they scale. [Request an AI Workflow Review](/ai-workflow-review) →