What Scaling Means When You Can't Keep Hiring
A four-tier framework for deciding what still needs a human
This week’s post is a guest piece by Rory O’Brien. Rory was one of the first interviews I ran on FDE Hub and that conversation ended up being one of the most popular pieces I’ve published, so when he offered to write something original for the newsletter, it was an easy yes. Rory has spent the last decade building and scaling FDE and adjacent customer delivery teams, most recently as VP at HappyRobot and before that at Tonkean. The piece below isn’t about FDEs specifically, but it’s about the operating model that makes FDE work either valuable or wasteful, which is something I think about constantly. Over to Rory.
The playbook for scaling a team used to be pretty simple: revenue goes up, headcount follows, hire enough people to cover the volume, then hire more to manage the people you just hired. Everyone understood the math, boards expected it, and forecasts were built around it with almost no questioning of the underlying assumptions.
That model is broken, and I don’t think it’s coming back.
Let me tell you about a project I saw running at every enterprise I worked with over the last decade. None of this will be new to you.
They called it different things: company ontology, knowledge graph, single source of truth, shared data model. Different vocabulary, same project: get the entire organization operating from one common language. Product called them “customers.” Finance called them “accounts.” Customer Success called them “clients.” Legal called them “counterparties.” Four words for the same human being, living in four different systems, with four slightly different definitions of what counted as active, churned, or at risk.
The result: most of the coordination overhead in a large company isn’t strategic, it’s definitional. You’re not having a meeting to make a decision; you’re having a meeting to establish what the numbers even mean before anyone can start making a decision, then scheduling a follow-up because someone pulled a different number from a different system, and by that point the actual work has been sitting untouched for two weeks. This is the information layer that runs through humans because there was no other place for it to live.
AI (properly implemented) has changed the cost of fixing this dramatically. You don’t have to perfectly unify every system before you start getting value; you can build a working model of how your organization describes itself, connect it to your preferred AI provider, and let people query it directly. The CS manager who used to spend three hours hunting down renewal data and a QBR template to copy from can now pull the answer and create the deck in four minutes. The constant “hey, what’s going on with X” that accounted for 40% of most people’s communication overhead largely disappears.
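Here’s roughly what that pattern looks like, as a minimal sketch rather than a prescription. It assumes the OpenAI Python SDK; the ontology text, the account record shape, and the fetch_account_data helper are hypothetical stand-ins for whatever your organization actually maintains.

```python
# A sketch of the "query the shared model" pattern, not a prescription.
# Assumes the OpenAI Python SDK; the ontology text and account record
# shape are hypothetical stand-ins for whatever your org maintains.
from openai import OpenAI

client = OpenAI()

# The shared vocabulary: one definition per concept, regardless of which
# system (CRM, billing, support desk) the underlying records live in.
ONTOLOGY = """
- "customer" == "account" == "client" == "counterparty": the same entity.
- "active": live contract AND product usage logged in the last 30 days.
- "at risk": health score below 60, OR renewal within 90 days with no exec touchpoint.
"""

def answer_org_question(question: str, account_record: dict) -> str:
    """Answer a question about an account using the org's own definitions."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": f"Answer strictly using these definitions:\n{ONTOLOGY}"},
            {"role": "user",
             "content": f"Account data: {account_record}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content

# The three-hour renewal hunt becomes one call (fetch_account_data is
# whatever retrieval you already have; hypothetical here):
# answer_org_question("Is this account at risk, and why?", fetch_account_data("acme"))
```

That’s the setup. Here’s the framework that actually determines whether any of it matters.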
The four tiers. Embarrassingly simple, and almost never applied in the right order.
1. AI it.
Try this first, always, even when you’re confident it won’t work. The attempt is the point, because even when it fails it usually tells you something useful about the underlying process that you didn’t have explicit language for before.
What this looks like in practice for anyone in post-sales or implementation work: drafting account health summaries, generating first-pass renewal risk assessments, summarizing call transcripts into action items, building the skeleton of a QBR deck, triaging incoming tickets/customer requests by urgency and category, writing the first draft of a playbook that a human then edits. The output isn’t always usable. An 80% draft that takes two minutes still beats a blank page that takes two hours, and more often than not, the 20% gap is specific and fixable rather than fundamental. When AI fails at a task, the failure is almost always informative: it’s pointing at the part of the process that was never clearly defined in the first place.
The biggest mistake here is treating “it didn’t do it perfectly” as a verdict. It’s not a verdict; it’s a data point about the process.
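To make tier one concrete, here’s a hedged sketch of the ticket-triage task from the list above, again assuming the OpenAI Python SDK. The urgency and category labels are illustrative, not a recommendation.

```python
# A hedged sketch of tier-one ticket triage, assuming the OpenAI Python SDK.
# The urgency and category labels are illustrative, not a recommendation.
import json

from openai import OpenAI

client = OpenAI()

def triage_ticket(ticket_text: str) -> dict:
    """First-pass triage: urgency and category, for a human to confirm or correct."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},  # ask for machine-readable output
        messages=[
            {"role": "system",
             "content": ('Classify the support ticket. Respond in JSON with keys '
                         '"urgency" ("low"|"medium"|"high") and "category" '
                         '("billing"|"bug"|"feature_request"|"relationship"|"other").')},
            {"role": "user", "content": ticket_text},
        ],
    )
    return json.loads(response.choices[0].message.content)

# triage_ticket("Our invoices doubled this month and our renewal is in four weeks.")
# -> {"urgency": "high", "category": "billing"}  (the human still makes the call)
```

The output is a first pass for a human to confirm or correct, which is exactly the 80% draft this tier is about.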
2. Automate it the old way.
Rules, triggers, scripts, workflows. Pre-2023 thinking, still valid and still underused for anything deterministic. If the output of a process is predictable given a set of inputs, a human should not be the mechanism that connects them.
Most organizations have already paid for this capability and aren’t using it. If a support ticket comes in with a specific keyword from a customer in a specific segment, it should route automatically. If an account’s health score drops below a threshold, something should happen without anyone manually noticing and then manually deciding to act. If a contract hits a 90-day renewal window, the motion should start on its own. These are not complex problems; they’re rules that nobody built because defaulting to a human was easier than writing the rule once.
Ninety percent of the SaaS tools you’ve already bought can do this, or were designed to: Salesforce workflows, HubSpot sequences, Zendesk triggers, Gainsight playbooks. The access is there. The will to sit down and configure it usually isn’t. That’s the actual gap, and it’s a choice gap, not a technology gap.
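For a sense of how little logic tier two actually requires, here’s a minimal sketch of the health-score and renewal-window rules from above. The field names and thresholds are hypothetical, and in practice this logic belongs in the workflow engine you already pay for, not in custom code.

```python
# A minimal sketch of a tier-two rule: deterministic inputs, deterministic
# action, no judgment required. Field names and thresholds are hypothetical;
# in practice this lives in the workflow engine you already pay for.
from datetime import date

HEALTH_FLOOR = 60
RENEWAL_WINDOW_DAYS = 90

def evaluate_account(account: dict, today: date) -> list[str]:
    """Return the actions a rules engine should fire for this account."""
    actions = []
    if account["health_score"] < HEALTH_FLOOR:
        actions.append("open_save_play")        # nobody had to notice manually
    if (account["renewal_date"] - today).days <= RENEWAL_WINDOW_DAYS:
        actions.append("start_renewal_motion")  # the 90-day window, automated
    return actions

# evaluate_account({"health_score": 54, "renewal_date": date(2026, 1, 10)},
#                  date(2025, 11, 20))
# -> ["open_save_play", "start_renewal_motion"]
```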
And increasingly, tier one can handle the setup and configuration of tier two for you, which blurs the line between them.
3. Write a better SOP.
If a human genuinely has to do it, at least make it repeatable, trainable, and documented well enough that the next person doesn’t have to rebuild it from memory. The teams that are hardest to automate are almost always the ones that never wrote anything down; you can’t AI a process you haven’t described, and you can’t automate a workflow that exists entirely inside one person’s head.
These SOPs matter beyond the immediate documentation value: they’re the training data for whatever comes next. Every process you document today is a process you can hand to an agent later with something approaching confidence. The teams skipping this step aren’t just making their lives harder now; they’re making the transition to tier one harder when it becomes unavoidable.
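One way to make an SOP agent-ready from day one is to label each step as deterministic or judgment-based as you write it. A hypothetical sketch of that structure, using a renewal-prep SOP as the example:

```python
# A hypothetical sketch of an SOP written to be agent-ready: each step is
# labeled deterministic (a tier-one/two candidate) or judgment-based (tier
# four). The structure is illustrative; the labeling habit is the point.
from dataclasses import dataclass

@dataclass
class SOPStep:
    action: str
    deterministic: bool  # True -> candidate to hand to automation or an agent

RENEWAL_PREP_SOP = [
    SOPStep("Pull usage, support, and billing history for the account", True),
    SOPStep("Draft the QBR deck from the standard template", True),
    SOPStep("Summarize each open escalation in two sentences", True),
    SOPStep("Decide whether to bring an exec to the renewal call", False),
]

automatable = [s.action for s in RENEWAL_PREP_SOP if s.deterministic]
human_only = [s.action for s in RENEWAL_PREP_SOP if not s.deterministic]
```

Every step marked deterministic is a tier-one or tier-two candidate the moment you’re ready to hand it off.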
4. Throw a person at it.
Hire someone, add headcount, default to humans. This is the right call for a narrow set of things. Judgment calls where someone needs to be personally accountable. Relationships where trust is the product. Situations where the context is too specific and too recent for any system to have it. An enterprise renewal going sideways because of an executive relationship problem is a tier-four problem. A QBR deck is not.
The problem isn’t tier four existing; it’s that it became the default for everything, including work that isn’t complex, high-stakes, or relationship-dependent. Tiers one and two were either never seriously attempted, or when they were, expectations were miscalibrated and solutions were designed by humans who assumed human involvement was necessary, which is a surprisingly hard assumption to challenge when you’re the human doing the designing.
For most of the last decade, tiers two through four were the real options, and the mature move was knowing which tier a problem belonged in. That’s not quite the situation anymore. Tier one is now a mandate, coming loudly from every CEO in every all-hands. But a mandate is not a strategy, and mandates without methods don’t produce what the people issuing them want; the organizations that actually get there are the ones where individuals built the habit before anyone told them to. Tier two is still valid and underused, especially now that the technical barriers to building those automations are effectively non-existent. Tiers three and four are becoming the exception, and the only work that should land there is work that has genuinely exhausted the first two.
I think this is where it’s heading: the healthiest org you can show an investor is one where 70% or more of your headcount is built around humans (employees) talking to other humans (customers/suppliers). The rest is everything else, and even that split gets blurry, because as AI absorbs more of the operational and analytical work, roles like engineering, marketing, and product naturally spend more of their time interfacing with customers too. The metric I’d start watching: what percentage of your people woke up today and actually talked to a customer? That number should be going up.
The scaling playbook isn’t gone; it’s just running on different assumptions. Start stress-testing AI against your own work, not because your company told you to, but because every attempt builds a reflex that’s going to matter for the rest of your career. Even when it fails, especially when it fails, you learn something about the process you didn’t know before. That muscle comes from repetition, not mandates.
That’s pretty much the whole job now.
Rory O'Brien is an advisor and fractional CXO for seed to Series B startups, with 15+ years scaling post-sales and customer experience organizations. He previously served as VP of CX at HappyRobot and Tonkean, where he built and scaled FDE/Implementation, deployment strategy, and customer experience teams. You can find him on LinkedIn, Substack, and X.