The Operational Logic Problem: Why AI Projects Take 4x Longer Than They Should
Your AI doesn't know what you forgot to tell it.
We deployed a sales order agent for a wholesale client. It worked. Orders flowed through. Everyone was happy.
Then we started getting error reports. Orders created with wrong delivery dates. Missing packaging specifications. Line items assigned to the wrong warehouses.
The client’s response: “Oh, we forgot to mention that.”
Forgot to mention that delivery dates have rules. That customers can’t always get what they request. That packaging methods vary by product type and destination. That warehouse assignment depends on inventory levels and geography.
Weeks of rework. Not because the AI failed. Because nobody told it how the business actually operates.
This is the most expensive problem in AI deployment. And almost every company walks into it.
The Knowledge That Doesn’t Exist
When we start a project, I ask for documentation. SOPs. Process guides. Decision trees.
Sometimes they exist. Usually they’re outdated, incomplete, or describe how things should work rather than how they actually work.
More often, there’s nothing. The knowledge lives entirely in people’s heads.
This isn’t a criticism. Traditional companies, especially in logistics, wholesale, and manufacturing, built their operations over decades. Processes evolved organically. Experienced employees learned through apprenticeship, not documentation. Why write it down when Maria has been handling exceptions for fifteen years and just knows?
The problem: Maria’s knowledge is exactly what the AI needs. And Maria can’t articulate half of what she knows, because to her it’s obvious. It’s not obvious. It’s expertise compressed into instinct.
The Champion Problem
Here’s what typically happens.
A champion inside the company sees the potential for AI automation. They’re usually a manager or director. They understand the pain points at a high level. They can describe what the team does in broad strokes.
But they don’t do the work themselves.
They care about results. If the results are there, the process doesn’t matter. That’s reasonable management. It’s also terrible input for building an AI agent.
The champion says: “We process sales orders and enter them into our ERP.”
What we need to know: What fields get populated? Where does the data come from? What happens when the customer requests a delivery date you can’t meet? How do you decide which warehouse fulfills the order? What are the seven different packaging methods and when does each apply?
The champion doesn’t know these details. Not because they’re incompetent, but because it’s not their job to know. Their job is to ensure the team delivers.
The operator knows. The person processing those orders every day, making dozens of micro-decisions per hour, handling exceptions without even thinking about it. That’s where the knowledge lives.
The PhD Intern Framework
When we guide clients through extracting operational logic, we use a simple mental model.
Imagine you just hired an intern. This intern has a PhD-level education across almost every discipline. They can reason through complex problems. They learn incredibly fast. They never forget anything you tell them.
But they’ve never worked a single day in their life. Zero domain knowledge. Zero context about your company, your customers, your industry conventions, your unwritten rules.
How would you onboard this person?
You wouldn’t say “process the sales orders.” You’d walk them through everything. You’d explain why certain customers get priority. You’d show them how to handle the weird edge cases. You’d tell them about the supplier who always sends invoices in a non-standard format. You’d warn them about the product codes that look similar but mean completely different things.
That’s what the AI needs. Not your documentation. Your mentorship.
Why This Is Counterintuitive
Clients struggle with this framing. It feels wrong.
You don’t hire PhD candidates for data entry work. The people doing manual, repetitive tasks aren’t typically positioned as knowledge experts. There’s an unconscious assumption that if the work is “simple,” explaining it should be simple too.
It’s not. The simplicity is an illusion created by expertise.
Watch someone who’s processed ten thousand sales orders. They move fast. They make decisions without hesitation. They handle exceptions fluidly. It looks easy.
Ask them to explain what they just did, step by step, with every decision point articulated. They can’t. Not because they don’t know, but because the knowledge has become invisible to them. It’s muscle memory. It’s pattern recognition they can’t consciously access.
Extracting this knowledge is hard work. It requires patience, the right questions, and often multiple passes as forgotten details surface.
The 4x Multiplier
The difference between a well-scoped AI project and a disaster often comes down to operational logic extraction.
With good extraction upfront: four weeks to deployment, smooth iteration, quick wins.
With poor extraction: sixteen weeks of discovery through failure, constant rework, frustrated stakeholders, eroded trust.
The timeline difference isn’t about building. Building is fast. The difference is waiting for clarification. Discovering rules that should have been documented. Rebuilding features that were based on incomplete understanding.
Every week you spend in that feedback loop is a week you’re not delivering value.
What Actually Works
We’ve developed a structured approach to extraction. It’s still evolving, but here’s what we’ve learned.
Talk to operators, not just champions. The person doing the work daily has the knowledge you need. Champions can provide context and priorities, but operators provide the logic.
Find good mentors. Not every operator can articulate their process. Some have internalized it so deeply they can’t teach it. Others are natural explainers. Identify who can actually transfer knowledge, not just who has it.
Walk through the process end to end. Ask them to guide you from input to output. Cover the happy path first. Then explore branches. What happens when this field is missing? What if the customer requests something impossible? Where do exceptions go?
Describe the main outcome. What does success look like? Not in business terms, but in concrete terms. What fields are populated? What systems are updated? What notifications go out?
Identify time sinks. Where do they lose the most time? Why? These are usually the highest-value automation targets, and they often reveal the most complex logic.
Use structured frameworks. We ask clients to think in terms of rules and conditions. When does this apply? What triggers that decision? What needs to be true before you take action? Describe the happy path, then expected edge cases. This structure forces precision.
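To make that structure concrete, here is a minimal sketch of what one captured rule might look like once it is written down. Everything in it is hypothetical and invented for illustration, including the field names (applies_when, requires, exceptions) and the warehouse example; the point is the shape: every rule states when it applies, what must be true first, what the operator does, and the edge cases they already know about.

```python
from dataclasses import dataclass, field

@dataclass
class OperationalRule:
    """One unit of extracted operational logic (illustrative sketch).

    Captures a rule in the operator's own words: trigger, preconditions,
    action, and known exceptions.
    """
    name: str
    applies_when: str                                     # When does this apply?
    requires: list[str]                                   # What must be true before acting?
    action: str                                           # What the operator actually does
    exceptions: list[str] = field(default_factory=list)   # Known edge cases

# The kind of rule that never appears in any document until something breaks:
warehouse_rule = OperationalRule(
    name="warehouse_assignment",
    applies_when="A confirmed order line is ready for fulfillment",
    requires=[
        "Requested delivery date is feasible for the destination region",
        "Inventory check has run for the product code",
    ],
    action="Assign the closest warehouse that can cover the full quantity",
    exceptions=[
        "Frozen goods always ship from the cold-chain site, regardless of distance",
        "If no single warehouse covers the quantity, split the line and flag for review",
    ],
)
```

Whether this ends up as a prompt section, a config file, or actual code matters less than the discipline: if the operator can't fill in those fields, the rule hasn't been extracted yet.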
The Demo Provocation
Here’s something counterintuitive we’ve learned: sometimes you need to build something wrong to discover what’s right.
A quick, rough demo does something documentation requests can’t. It makes the gaps visible.
When a client sees an agent processing orders incorrectly, they suddenly remember all the things they forgot to mention. “Oh, it shouldn’t do that because...” followed by a rule that never appeared in any document or conversation.
The demo provokes memory. It surfaces tacit knowledge that questions alone can’t reach.
This isn’t ideal. It’s expensive in time and effort. But for complex processes with deeply embedded knowledge, it’s sometimes the only way to get complete information.
The Mindset Shift
AI should be managed differently from traditional software.
Software is installed. You configure it, integrate it, and it runs. The logic is predetermined. Your job is to set parameters.
AI needs guidance, context, and training. It needs to understand not just what to do, but why. It needs examples, not just rules. It needs feedback, not just error codes.
Customers who understand this from day one see the best outcomes. They approach AI deployment like onboarding a new team member. They invest time upfront in knowledge transfer. They expect iteration and refinement.
Customers who treat AI like software, expecting to flip a switch and have it work, consistently struggle. They underinvest in extraction. They’re surprised when the agent doesn’t know things they never explained. They blame the technology for gaps in their own documentation.
The Unsexy Work
Don’t start building until you understand the operational logic. Not completely, maybe 90%. But well enough that you’re not discovering fundamental rules through production failures.
The time invested in extraction pays back multiple times in avoided rework, faster iteration, and better outcomes.
Your experts know more than they can articulate. Budget time and effort to help them surface that knowledge. It won’t happen in a single call.
The “obvious” stuff isn’t obvious. What’s clear to someone who’s done this work for years is completely opaque to an outsider, whether human or AI.
This is the unsexy work of AI deployment. Right now, there’s no shortcut. Someone has to do the hard work of knowledge transfer.
But I suspect this won’t last. The same AI that needs operational logic to function could eventually help extract it. Imagine an agent that watches screen recordings, asks clarifying questions, and builds its own understanding of a process. We’re not there yet, but the gap between “needs to be told everything” and “can figure some things out” is closing.
Until then, the only question is whether you do the extraction upfront, in weeks, or discover it through failure, over months.



