Deal Closed. Now What?
The discovery phase that makes or breaks everything that comes after.
At Lleverage, we aim to go from deal closed to a first working version in two weeks. A version that processes real data and generates real feedback, even if it’s rough around the edges. Sometimes it takes longer. Sometimes less. But that target shapes everything about how we approach delivery, because we’ve learned the hard way that the longer you wait to get something in front of real users, the harder the project becomes.
These first two weeks are mostly not about AI. They’re about understanding a business well enough to know where AI actually helps.
The first meeting is 80% listening
When we kick off a new engagement, the instinct is to demo what we’ve built, show off the technology, get people excited about what’s possible. We’ve learned to resist that.
The first meeting is almost entirely about listening. We’re not there to talk about AI. We’re there to understand how the business actually runs. What does the process look like end to end? What are the inputs? What are the outputs? Where do things break? What happens when they break?
For every hour we spend listening in this phase, we save days of rework later.
The most important dynamic to recognise early is the gap between how leadership describes a process and how the team actually executes it. This gap is always bigger than you expect. Leadership will describe the idealised version, the clean workflow they designed or approved. The operators on the ground have a different reality entirely.
In one project, we discovered that leadership had no idea about half the functionality baked into their own ERP system. The operators had figured out features and workarounds that nobody in management knew existed. They had personal spreadsheets tracking things the system was supposed to handle. They had mental rules for edge cases that had never been documented anywhere.
This is why you talk to the people who actually do the work, not just the people who approved the budget.
Mapping the real workflow
Once we’ve listened, we start mapping. And by mapping, I mean walking through every single step of the workflow we’re planning to automate. Every input, every output, every exception, every edge case we can find.
The key question we always ask is: “If this works perfectly, what changes for your team day to day?” The answer tells you what success actually looks like in their world, not in yours.
This is also where you discover where time is genuinely being wasted versus where people think it’s being wasted. These are often different things. A leader might point to invoice processing as the bottleneck because it feels slow, but the real time sink might be the manual data reconciliation that happens afterwards. If you automate the wrong step, you’ll deliver something technically impressive that doesn’t move the needle.
And this is where scope management starts mattering. The “can you also...” requests begin almost immediately. Our response is straightforward: we can do a lot, but you need to pick the things that matter most to you right now. Not because we’re being difficult, but because we need to get a first version live as quickly as possible.
This is often the hardest thing to explain to clients. We’re not trying to ship fast because we’re cutting corners. We’re trying to ship fast because a live solution generating real outputs at volume teaches us more in a week than months of planning ever could. If we keep discovering edge cases one at a time during workshops, we’ll be in the discovery phase forever. But if the system is processing real work, those edge cases surface naturally and at scale.
The IT partner problem
Almost every mid-market European company we work with has an external IT partner managing their ERP, their CRM, or their core systems. This is a reality of the market that a lot of AI content ignores entirely.
These IT partners control the integrations you depend on. If they’re responsive and engaged, the project moves. If they’re not, the project stalls. There’s almost no middle ground.
We learned this lesson the hard way. On one project, the sales process had assumed the client’s ERP was cloud-based with API access. When we got into delivery, we discovered it was on-premises with no API at all. And it wasn’t a standard system. It was a one-of-one, custom-built ERP maintained by a single person.
We got lucky. That person was responsive, keen to collaborate, and we had API access within days. But on other projects, we’ve been stuck waiting on external partners who had no incentive to prioritise our integration work. They had their own timelines, their own clients, and our requests sat in their queue.
The lesson: reach out to the IT partner immediately. Not in week two. Not after you’ve finished discovery. Day one if you can. Be specific about what you need: API access, data formats, test environments. Vague requests get vague timelines. And if the partner is a blocker, you need to know that as early as possible so you can find workarounds or reset expectations with the client.
We’ve also learned to do better scoping during the sales process itself. If the deal depends on a specific integration, verify it before signing. Don’t assume.
Getting to live
When we talk about getting a first version live, we don’t mean shipping a polished product. We mean getting the AI system running against real data, either alongside the existing process or in place of it, with a human in the loop who can catch and correct mistakes.
The first outputs will be rough. We communicate that upfront, clearly and repeatedly. The goal isn’t perfection. The goal is feedback at volume.
We always set up evaluation criteria before the system starts running, not after. What does a good output look like? What does an acceptable error rate look like? How will we measure whether this is working?
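Those criteria work best when they’re concrete enough to score a batch against. A minimal sketch of what that might look like in code, assuming a hypothetical invoice-extraction workflow; the field names, thresholds, and `EvalCriteria` structure are illustrative, not our actual tooling:

```python
# Hypothetical sketch: pin down pass/fail criteria before the first run, not after.
# All names and thresholds here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class EvalCriteria:
    max_error_rate: float        # e.g. "at most 5% of outputs need correction"
    required_fields: list[str]   # a "good output" must contain all of these


def evaluate(outputs: list[dict], corrections: int, criteria: EvalCriteria) -> dict:
    """Score a batch of real cases against criteria agreed with the client."""
    complete = sum(
        all(field in o and o[field] for field in criteria.required_fields)
        for o in outputs
    )
    error_rate = corrections / len(outputs) if outputs else 1.0
    return {
        "completeness": complete / len(outputs),
        "error_rate": error_rate,
        "passing": error_rate <= criteria.max_error_rate,
    }


criteria = EvalCriteria(max_error_rate=0.05, required_fields=["invoice_id", "amount"])
batch = [
    {"invoice_id": "A1", "amount": 120.0},
    {"invoice_id": "A2", "amount": 80.0},
]
report = evaluate(batch, corrections=0, criteria=criteria)
```

The point isn’t the code itself; it’s that “good output” and “acceptable error rate” become numbers the client has agreed to before the first rough results arrive.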
Then we run real cases through the system. The edge cases you didn’t anticipate will surface immediately. This is a feature, not a bug. Every edge case caught now is one that won’t blindside you in production later.
Client feedback in this phase is gold, but you need to structure it. “Does it look good?” isn’t a useful question. “Is this output correct? If not, what specifically is wrong?” gets you somewhere. We set up feedback loops that are specific, low-friction, and regular.
Then we adjust. Tighten guardrails. Improve prompts. Add handling for the edge cases that surfaced. Run it again. This cycle happens daily, sometimes multiple times a day.
Why speed matters
There are two failure modes we’ve seen play out repeatedly.
The first is obvious: you skip the discovery phase, jump straight to building, and ship something fast that nobody uses. The solution doesn’t match the real workflow because you never mapped the real workflow. The operators reject it because they weren’t consulted. The champion who bought the project loses credibility internally.
The second is less obvious but just as dangerous: you spend too long in discovery and planning, and the project loses momentum. The client starts wondering what they’re paying for. The internal champion has to keep justifying a project that hasn’t shown results. The scope keeps expanding because there’s no live system to anchor decisions around. By the time you finally deliver something, expectations have inflated beyond what any first version could meet.
The two-week target forces discipline. You can’t boil the ocean in two weeks. You have to make choices about what matters most. You have to get comfortable with imperfection. And you have to bring the client along with you on that, which means building trust through transparency rather than polished demos.
What this phase really is
The first two weeks of an AI implementation are not a technical challenge. They’re an organisational one. You’re learning how a business works, building relationships with the people who will use your system, navigating dependencies you don’t control, and establishing the feedback loops that will determine whether the project succeeds or quietly dies.
The companies that get the best results are the ones who engage fully in this phase. They explain their processes honestly, challenge our assumptions, share the messy spreadsheets and undocumented workarounds. They treat us as a partner, not a vendor.
Every shortcut you take in these two weeks costs you weeks later. Every conversation you skip comes back as a feature gap. Every dependency you don’t surface becomes a blocker at the worst possible time.
The AI part, frankly, is the easy bit.