The pattern that fails over and over
Most small businesses I meet have already bought one or more AI subscriptions before we talk. ChatGPT Team. Microsoft Copilot. A niche vertical tool a vendor sold them at a conference. They can name the products. They cannot name the specific workflow each one was supposed to improve.
That is the failure pattern, and it has nothing to do with the model. The cause is order of operations: tools were chosen before workflows were defined, so the tool has nothing concrete to attach to.
The result is predictable. License costs, no measurable change in operations, and quiet abandonment within a quarter. The business ends up with a story about "trying AI" that was never actually a fair test, because the test was never set up.
The research has been saying this for years
This is not a hot take. The data has been consistent for several years across multiple analyst houses and academic studies.
Gartner has reported that a substantial majority of enterprise AI projects either fail to reach production or fail to deliver the expected business value. McKinsey's annual State of AI surveys have repeatedly shown that while adoption has climbed steadily, only a small minority of adopters report meaningful EBIT impact from AI at scale. BCG's research on AI value capture distinguishes a small group of "top performers" from a much larger group of "laggards" almost entirely on whether AI was integrated into operational workflows or treated as a standalone capability.
The pattern across this body of research is consistent: the differentiator is not the model, the vendor, or the budget. It is whether the AI was placed inside a defined workflow with defined success criteria.
What "workflow-first" actually means
Workflow-first does not mean spending six months on consulting deliverables before anyone touches a tool. It means three concrete things you can do in a week.
1. The workflow is defined on paper
Before AI is added, the workflow itself has to be something you can write down. Inputs, outputs, owners, exceptions, and approval points are documented. If you cannot draw the workflow on a whiteboard in fifteen minutes, the workflow is not yet defined. Adding AI to an undefined process just makes the undefined parts run faster.
A useful test: ask two people who own different pieces of the workflow to describe it independently. If their descriptions disagree about who hands off to whom, the process is not yet ready for automation.
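If it helps to make the documentation concrete, here is a minimal sketch of a written workflow captured as structured data. Everything in it is hypothetical: the step names, owners, and handoffs are placeholders for whatever your whiteboard actually says.

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    """One step in a documented workflow."""
    name: str
    owner: str                      # the person accountable for this step
    inputs: list[str]
    outputs: list[str]
    approval_gate: bool = False     # does a human sign off before handoff?
    exceptions: list[str] = field(default_factory=list)

# Hypothetical example: an inbound-lead workflow. The steps, owners,
# and handoffs here are placeholders, not a prescription.
lead_workflow = [
    Step("Intake", owner="Front desk",
         inputs=["web form", "phone notes"], outputs=["lead record"]),
    Step("Qualify", owner="Sales lead",
         inputs=["lead record"], outputs=["qualified lead or polite decline"],
         exceptions=["missing contact info", "out-of-area request"]),
    Step("Draft reply", owner="Sales lead",
         inputs=["qualified lead"], outputs=["reply draft"],
         approval_gate=True),       # a human approves before anything reaches the customer
]

for step in lead_workflow:
    gate = " [human sign-off]" if step.approval_gate else ""
    print(f"{step.name} ({step.owner}): {step.inputs} -> {step.outputs}{gate}")
```

The format matters far less than the act of writing it down. A shared document or a spreadsheet does the same job; the point is that every step has a named owner and an explicit handoff.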
2. The bottleneck is identified
Where does delay or rework actually accumulate today? Not where it feels slow. Where the data shows it is slow.
In a typical SMB, the bottleneck is rarely where leadership thinks it is. The lead-handling complaint is often a drafting bottleneck, not an intake bottleneck. The quoting complaint is often a pricing-rule bottleneck, not a writing bottleneck. AI applied to the wrong step produces no measurable change because the wrong step was not actually slowing things down.
Spend a week measuring before you choose. Timestamps on inbound leads. Cycle times on quotes. Reply latency on customer email. If you are guessing the bottleneck, you are also guessing the value of fixing it.
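The measurement itself needs nothing fancier than the timestamps you already have. As a rough illustration, assuming you can export received and first-reply times from your inbox or CRM, a few lines are enough to see where the time actually goes. The data below is invented.

```python
from datetime import datetime
from statistics import median

# Hypothetical timestamp log: (lead_id, received_at, first_reply_at).
# In practice these come from an inbox export, a CRM report, or a quoting system.
events = [
    ("L-101", "2025-03-03 09:12", "2025-03-03 16:40"),
    ("L-102", "2025-03-03 11:05", "2025-03-05 08:30"),
    ("L-103", "2025-03-04 08:50", "2025-03-04 09:45"),
    ("L-104", "2025-03-04 14:20", "2025-03-06 10:05"),
]

fmt = "%Y-%m-%d %H:%M"
latencies_hours = [
    (datetime.strptime(reply, fmt) - datetime.strptime(received, fmt)).total_seconds() / 3600
    for _, received, reply in events
]

print(f"median reply latency: {median(latencies_hours):.1f} hours")
print(f"worst reply latency:  {max(latencies_hours):.1f} hours")
```

Run the same arithmetic on each candidate step and the bottleneck usually stops being a matter of opinion.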
3. The validation rule exists before the prototype runs
Before any AI model touches the workflow in production, there is a written rule for what acceptable output looks like and where a human signs off. This is the part most teams skip. It is the part that determines whether the rollout produces value or new failure modes.
Validation rules sound bureaucratic. They are not. They are the thing that prevents AI from producing fluent, plausible, and quietly wrong outputs that erode customer trust over six months.
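A validation rule can be one page of prose, but expressing it as code removes the ambiguity. The sketch below is one hypothetical shape such a rule might take for a quoting workflow; the field names, the price band, and the checks are assumptions for illustration, not a standard.

```python
# A minimal sketch of a written validation rule, expressed as code so it is
# unambiguous. The fields and thresholds are hypothetical; the point is that
# "acceptable" is defined before any model output reaches a customer.

def validate_quote_draft(draft: dict) -> tuple[bool, list[str]]:
    """Return (needs_human_review, reasons). Every draft still passes a
    human approval gate; this check only catches the obviously wrong."""
    reasons = []
    for required in ("customer_name", "line_items", "total"):
        if not draft.get(required):
            reasons.append(f"missing {required}")
    total = draft.get("total", 0)
    if not (50 <= total <= 25_000):   # plausible price band for this business
        reasons.append(f"total {total} outside expected range")
    if total != sum(item["price"] for item in draft.get("line_items", [])):
        reasons.append("line items do not add up to total")
    return (bool(reasons), reasons)

flag, why = validate_quote_draft(
    {"customer_name": "Acme LLC", "line_items": [{"price": 400}], "total": 900}
)
print(flag, why)  # -> True ['line items do not add up to total']
```

The specifics will differ for every workflow. The non-negotiable part is that the rule is written before the prototype runs, not after the first bad output reaches a customer.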
A 5-step exercise you can run this week
You do not need a consultant for this part. You need a meeting room, your team, and ninety minutes.
1. Pick one workflow that ends with a customer outcome: a reply, a quote, a confirmation, a weekly report.
2. Map it on a whiteboard with your team: inputs, outputs, owners, decision points, exceptions.
3. Mark the bottleneck: the single step where delay or rework most often shows up. If you cannot identify it from data, measure for a week first.
4. Define acceptable output for that step. What does "good" look like? What does "obviously wrong" look like? Who decides?
5. Identify the human approval point. Where is the gate before something goes to a customer or to a number that drives a decision?
Those five answers are the input to any honest AI implementation conversation. Without them, no tool selection is going to matter.
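If you want the ninety minutes to leave a durable artifact, capture the five answers in one place. A hypothetical example, with made-up answers:

```python
# The five answers from the exercise, captured in one place. Everything here
# is an invented example; the value is having the answers written down at all.
workflow_brief = {
    "workflow": "inbound lead to first reply",
    "bottleneck_step": "drafting the reply",   # from a week of timestamps
    "acceptable_output": "accurate pricing, correct service area, our tone",
    "obviously_wrong": "invented discounts, wrong service area, promised dates",
    "approval_gate": "sales lead reviews every draft before send",
}

missing = [k for k, v in workflow_brief.items() if not v]
print("ready for a tool conversation" if not missing else f"still undefined: {missing}")
```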
What separates the implementations that work
The implementations we have seen succeed in Louisiana SMBs share a profile.
- The workflow was already documented before the AI conversation began.
- The metric was defined before the prototype was built.
- There was a real human responsible for the workflow before the AI. The same human remained responsible after.
- The first deployment touched one bounded task, not the whole workflow.
The implementations that quietly failed shared the opposite profile: the AI was the project, the workflow was an afterthought, and ownership was ambiguous because the tool was supposed to be the owner.
The differentiator is rarely the model. It is whether someone on the team can describe in writing what the AI is supposed to do, what acceptable output looks like, and who is on the hook when it does not.
What to do after the workflow is defined
Once the workflow is defined, the bottleneck is identified, and the validation rule is written, then tool selection becomes a real conversation. You can evaluate whether a model is appropriate for the bounded task you have defined. You can compare outputs against your acceptance criteria. You can measure whether the tool actually moves the metric you said you wanted to move.
That sequence is unglamorous. It is also the only sequence that produces results that survive past quarter one.
Where Geaux Digital Media fits
This is the work we do. The AI Workflow Review is a structured review of one workflow you nominate. We map the workflow, identify the bottleneck, score readiness against a defined rubric, and come back with either a practical first step or a clear statement that the workflow is not yet ready to automate. Either answer is useful, and either saves you the cost of a tool you would have abandoned within a quarter.
If you have already bought tools and are not sure they are doing anything, that is a fair starting point too. The review does not assume you started from zero. It assumes you have a real workflow with real friction, and you want to know whether AI is the right intervention here.
The discipline behind this comes directly from our process: a nine-step implementation method built from safety-critical engineering practice. Every step has explicit entry and exit criteria. Nothing scales before it works.
Brent Dorsey is the founder of Geaux Digital Media and a Senior Systems & Software Engineer with 20+ years across Marine Corps technical systems and DO-178C avionics software for Boeing, GE Aviation, BAE Systems, and RTX. Geaux Digital Media helps Louisiana small businesses implement AI workflows that are defined, validated, and measured before they scale. Request an AI Workflow Review →