What FDEs Actually Do All Week
Ten clients, fifty hours, and a lot of context-switching.
Since I started writing this newsletter, I’ve had dozens of people reach out saying they love what I’m sharing about the FDE role. The most common request? Tell us more about the day-to-day. The actual experience. What does a week actually look like?
Fair enough. Here’s last week. Ten clients, fifty hours, and enough context-switching to make your head spin.
A quick note: at larger companies like Palantir or OpenAI, FDEs typically go deep on one or two clients at a time. At a startup like Lleverage, I’m running parallel deployments across many clients simultaneously. It’s a different version of the role: less depth, more pattern recognition across contexts. Both are valid. This is the startup FDE experience.
Monday: Planning the Chaos
The week starts with a planning session. Me, the other FDEs, and our head of customer delivery sit down for an hour to review every active client and set priorities for the week.
This sounds corporate. It isn’t. It’s survival.
When you’re working with ten clients in parallel, you need to know which fires are smoldering before they become infernos. Which PoC is about to hit a deadline. Which client has been quiet for too long; silence usually means they’re stuck and haven’t told us yet. Which prospect needs a scoping call before they lose momentum.
We use Linear to track everything. Each client has their own project, their own set of tasks, their own status. Without this, I’d be lost within a day.
After planning, the rest of Monday is a blur of calls. Status updates with existing clients. A pre-sales call with a prospect trying to figure out if what they want is even possible. By the time I look up, it’s 6pm and I haven’t written a single line of code.
This is more common than I’d like.
The Two Active Builds
Right now I have two clients in active build mode.
The first is a document processing pipeline for an insurance company. We’re about 80% done, which sounds close to finished but isn’t. The core workflow works. I can take their documents, extract the data they need, and output it in their required format.
But that 80% covers the happy path. Now comes the hard part.
Edge cases. Documents that don’t follow the expected format. Fields that are sometimes there and sometimes not. Handwritten annotations that the OCR struggles with. A customer who sends scanned PDFs at a resolution that makes extraction unreliable.
This week I spent hours on prompt tuning. Adding additional context to help the model understand what it’s looking at. Expanding the test sample size to catch failures we hadn’t seen yet. It’s not glamorous work, but it’s the difference between a demo that impresses people and a system that actually runs in production.
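The test loop behind that work is nothing exotic. Here’s a minimal sketch of the kind of harness I mean, with hypothetical field names and a toy extractor standing in for the real model and OCR pipeline:

```python
# Minimal eval harness: run the extractor over a labeled sample set
# and report which fields fail most often. The extractor here is a
# stand-in; in practice it would wrap the actual model/OCR calls.

def evaluate(extract_fn, samples):
    """samples: list of (document_text, expected_fields) pairs."""
    failures = {}  # field name -> number of mismatches
    for doc, expected in samples:
        got = extract_fn(doc)
        for field, want in expected.items():
            if got.get(field) != want:
                failures[field] = failures.get(field, 0) + 1
    total = len(samples)
    return {field: count / total for field, count in failures.items()}

# Toy extractor: parses "key: value" lines. The real one is an LLM call.
def toy_extract(doc):
    fields = {}
    for line in doc.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip().lower()] = value.strip()
    return fields

samples = [
    ("policy: A-123\namount: 950", {"policy": "A-123", "amount": "950"}),
    ("Policy Number A-456\namount: 100", {"policy": "A-456", "amount": "100"}),
]
print(evaluate(toy_extract, samples))  # -> {'policy': 0.5}
```

The second sample fails on the policy field because the document doesn’t follow the expected format, which is exactly the kind of failure that only shows up once the sample set gets big enough.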
The second build is earlier stage. We proved the concept works on a handful of input-output pairs. Now we need to scale to 100+ pairs, which means the architecture that worked for the prototype might not hold.
This is the transition nobody prepares you for. A PoC is a magic trick. Production is plumbing. Moving between them often requires rethinking decisions you made when you were just trying to prove the thing was possible at all.
The Communication Sprawl
Here’s something nobody warned me about: I’m logged into dozens of Slack workspaces and Microsoft Teams instances.
Every client uses their own tools. We meet them where they are. So I have Slack for some, Teams for others. Each with their own channels, their own notification sounds, their own threading conventions.
Add to that our internal tools. Linear for project tracking. Notion for documentation. Google Docs for shared specs and proposals. Excel for data analysis, and often as a data source in its own right. Postman for API testing. Claude for everything from writing code to drafting client communications to thinking through problems.
Context-switching isn’t just about moving between different projects. It’s about remembering which client uses which tool, which channel is for urgent issues versus general updates, and where I left the conversation three days ago when I was last working on their problem.
When Things Break
Last week, Cloudflare went down.
Our systems depend on Cloudflare, which means when Cloudflare has problems, our clients have problems. And when our clients have problems, they message us.
Half my day disappeared into firefighting. Checking status pages. Sending updates to clients who were panicking. Trying to figure out if there was anything we could do to route around the outage. There wasn’t, but you still have to check.
This is the part of the job that doesn’t show up in planning documents. You can have the perfect week mapped out, then an upstream provider has an incident and suddenly you’re in reactive mode. Deep focus work gets postponed. The code you were going to write waits another day.
We have monitoring in place to catch these issues early. That helps. But it doesn’t eliminate the disruption.
The 50/50 Split
If I’m honest about time allocation, it’s roughly 50% meetings and 50% building.
The meetings include: client check-ins, internal planning, pre-sales calls, scoping sessions, status updates, and the occasional firefighting call when something breaks.
The building includes: actual coding, prompt engineering, testing, debugging, and feeding insights back to our product team about what primitives we need.
I try to protect Tuesdays and Thursdays for deep work. Four-hour blocks where I can actually make progress on complex problems. It doesn’t always work. A client escalation or an urgent prospect call can blow up any day. But when I do get those blocks, that’s when the real work happens.
The short gaps between meetings, the 30-60 minutes here and there, those are for small fixes and maintenance. You can’t architect a system in 45 minutes. But you can tweak a prompt, respond to a client question, or review test results.
The People Part
Most of my conversations are with business stakeholders, not technical people.
This surprised me at first. I expected to spend more time with engineers. But in practice, I talk to technical people mainly when I need access to systems or help with integrations. The day-to-day work is with operations managers, department heads, and executives who are trying to figure out what AI can actually do for their business.
This means a lot of translation. Explaining what’s possible. Managing expectations about what AI agents can and can’t do. Redirecting requests from “we want an AI that figures everything out automatically” to “let’s start with these specific inputs and outputs.”
We found ourselves repeating the same explanations so often that we built a playbook. A document we share with every client that covers: what data we need from them, how we communicate, how they should test and give feedback, and how to provide SOPs and business rules.
That last part is crucial. AI agents aren’t magic. They need context. They need examples. They need to understand the rules of your business before they can automate anything. The playbook sets that expectation before we write a single line of code.
Friday: Internal Focus
By Friday, the client calls slow down and the internal work ramps up.
This is when we do team syncs, retrospectives, and product feedback sessions. All the things that matter for the long term but get squeezed out when client work is intense.
The product feedback loop is one of the parts of this job I find most valuable. When you’re implementing the same patterns across multiple clients, you start to see what should be abstracted into the platform. I spend time with our product team talking through the primitives we keep building from scratch; anything we rebuild for every client should probably exist as a feature we can configure instead of code we write every time.
What This Role Actually Requires
Looking at last week, a few things stand out.
First: you have to be comfortable with chaos. Ten clients, different tools, different stages, different problems, constant context-switching. If you need long stretches of uninterrupted focus to be productive, this role will break you.
Second: the technical work is real but fragmented. You’re not spending eight hours a day writing code. You’re spending two hours here, four hours there, and the rest in conversations that shape what that code needs to do.
Third: the business side is unavoidable. If you just want to build things and not talk to people, this isn’t the role. Most of my week is communication. Translating requirements, managing expectations, navigating stakeholder dynamics.
Fourth: you carry full context. One FDE per client. There’s no handoff to a delivery team. What you scope is what you build. This is exhausting but also clarifying. You can’t overpromise because you’re the one who has to deliver.
The Honest Version
If I had to name the thing that defines this role, it’s this: the work is less about building AI systems and more about understanding businesses well enough to know what to build.
The technical skills matter. You need to be able to code, to understand how LLMs work, to debug systems under pressure. But the differentiator is pattern recognition across messy human organizations. Seeing what’s actually blocking a client, which is rarely what they say is blocking them. Knowing when to push back and when to just build what they asked for.
Every week is different. Every client is different. The only constant is that my carefully planned Tuesday deep work block will probably get interrupted by something.
That’s the job.