From Chaos to Calm: How One Aggregator Streamlined 5 Companies
This aggregator did what a lot of smart operators do. They bought five great businesses—but soon realized they were wrangling five separate tornadoes.

David Spivey

Last Updated: 1/22/26
The aggregator did what a lot of smart operators do. They bought great skilled trades businesses—but then got hit with the messy reality of running them as a portfolio.
Five companies. Five brands in different markets, each with its own habits and its own “this is how we do it here” rules.
On paper, it looked like growth. In the office, it was more like wrangling five separate tornadoes.
Five highly successful companies. Five conflicting procedures.
None of the companies were broken. Each one could run on its own. The problems cropped up when leadership tried to manage them together. Reporting didn’t line up, customer experience was inconsistent, and every improvement effort turned into a debate about process.
Bain’s 2025 Global Private Equity report notes that many portfolio companies are still experimenting with generative AI, but “nearly 20%” have operationalized use cases and are seeing concrete results. The pattern is telling: the value shows up when AI stops being a pilot and becomes part of how the business actually runs.
This post is a composite case study based on what we’ve seen at The Graphite Lab across multi-brand operators in the trades. Names and details are changed, but the problems—and the fixes—are very real.
The portfolio problem nobody wants to admit out loud
The aggregator’s leadership team started with a familiar list of goals:
standardize operations without crushing local leadership
create visibility across the portfolio (so KPIs weren’t abstract)
reduce overhead creep as volume increased
improve speed-to-lead and customer experience in a way that actually stuck
They also knew they couldn’t run a massive two-year tech overhaul.
Each company already had its own stack, its teams were already busy, and its field techs wanted nothing to do with “another login.”
So instead of trying to standardize everything, we helped them standardize what mattered most: the workflows that decide revenue, reputation, and operational truth.
Where the chaos was actually coming from
If you looked at these companies one by one, none of their problems sounded dramatic.
At Company A, missed calls weren’t followed up consistently.
Company B treated cancellations like a dead end instead of a second chance.
At Company C, dispatch ran solely on heroic effort and tribal knowledge.
Company D handled online reviews well—when someone remembered to do it.
Company E’s reporting looked fine until someone asked, “Can we trust it?”
Combined, these seemingly minor issues created measurable drag:
uneven customer experience
uneven branch performance
uneven reporting integrity
growing dependence on time-wasting manual cleanup to keep everything running
A crucial revelation
The turning point came when leadership realized they weren’t trying to standardize software. They were trying to standardize outcomes.
To do that, they needed to build an operating layer, not a new platform. Something that would yield consistent, uniform results without requiring a massive software upheaval. In other words, something we do every day at The Graphite Lab.
A straightforward approach to AI, designed for the skilled trades
It starts with a tool: a single building block, like those toy plastic bricks. The tool has a clear input and output.
When the tool executes, it’s called a run.
When more than one tool is needed to create a workflow, we stack them into an assembly—a workflow that solves a real problem from beginning to end.
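To make the tool, run, and assembly vocabulary concrete, here’s a minimal sketch in Python. This is an illustration of the concept, not The Graphite Lab’s actual code; every function name and field below is hypothetical.

```python
from typing import Callable

# A "tool" is a single building block: a function with a clear input and output.
Tool = Callable[[dict], dict]

def detect_missed_call(event: dict) -> dict:
    # Hypothetical tool: flag events where the call was never answered.
    event["missed"] = event.get("status") == "no-answer"
    return event

def draft_followup_text(event: dict) -> dict:
    # Hypothetical tool: prepare an SMS only when the call was missed.
    if event.get("missed"):
        event["sms"] = f"Sorry we missed you, {event['caller']}! How can we help?"
    return event

def run_assembly(tools: list, event: dict) -> dict:
    # An "assembly" stacks tools into an end-to-end workflow;
    # each execution of the chain is one "run".
    for tool in tools:
        event = tool(event)
    return event

result = run_assembly(
    [detect_missed_call, draft_followup_text],
    {"caller": "Dana", "status": "no-answer"},
)
print(result["sms"])  # prints: Sorry we missed you, Dana! How can we help?
```

The point of the shape, not the specifics: because each tool has one job, the same blocks can be restacked into different assemblies per brand.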
Why our approach is ideal for aggregators
Our modular structure enables you to deploy the same portfolio standards across all of your brands, and then customize the last 10% or so to match the way each company actually runs.
So instead of forcing five teams into one rigid process—disrupting all five—you install a consistent operating layer that makes the basics happen everywhere, while still using the workflow that each company is familiar with.
A step-by-step approach to improvements
The most common problem was portfolio leak: missed lead opportunities that weren’t being followed up.
We started there, deploying assemblies that would:
send an automated text message when a call was missed
evaluate cancellations to determine if an opportunity could be won back—and automatically engage the customer
These were simple changes that didn’t require training or asking the staff to change the way they worked. They just closed the “someone should follow up” gap that was quietly killing revenue.
Within weeks, leadership saw a difference across all brands, because the system was automatically following up on every lead.
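As an illustration of the cancellation win-back idea, the decision boils down to a small classification step. The reason taxonomy and message below are assumptions for the sketch, not production logic:

```python
# Assumed taxonomy: cancellation reasons that are realistically winnable.
WINNABLE_REASONS = {"price", "scheduling", "changed mind"}

def evaluate_cancellation(job: dict) -> dict:
    """Hypothetical win-back check: decide whether a canceled job is worth
    an automatic re-engagement message, and draft it if so."""
    reason = job.get("cancel_reason", "").lower()
    job["win_back"] = reason in WINNABLE_REASONS
    if job["win_back"]:
        job["message"] = (
            f"Hi {job['customer']}, we saw your appointment was canceled. "
            "We'd love another chance. Can we find a better time or price?"
        )
    return job

job = evaluate_cancellation({"customer": "Sam", "cancel_reason": "Scheduling"})
```

Here a scheduling cancellation is flagged as winnable and a re-engagement text is drafted, while a reason outside the taxonomy would simply pass through untouched.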
What we standardized next: visibility when things go sideways
One of the most frustrating parts of running multiple service businesses is that leaders often learn about problems late.
a customer is upset, but nobody escalated it
a tech is running behind, but the manager only finds out after the complaint
a call went badly, but the team doesn’t know until the angry review hits
So we built “early warning” into the operating layer with a few assemblies that made a profound, immediate difference.
When a Call IQ analysis detected an angry customer, the designated manager was notified immediately.
When a technician was running late, the manager got an immediate notification.
These simple assemblies prevented relatively small issues from becoming cancellations, reschedules, or damage to each brand’s reputation.
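A late-technician alert like the one above is, at its core, a timestamp comparison. A hedged sketch, with hypothetical field names:

```python
from datetime import datetime, timedelta

def check_eta(appointment: dict) -> dict:
    """Hypothetical early-warning check: if the tech's projected arrival
    slips past the promised window, alert the manager before the customer calls."""
    late_by = appointment["tech_eta"] - appointment["window_end"]
    alert = late_by > timedelta(0)
    return {
        "alert": alert,
        "late_minutes": int(late_by.total_seconds() // 60) if alert else 0,
    }

appt = {
    "window_end": datetime(2026, 1, 22, 12, 0),   # promised by noon
    "tech_eta": datetime(2026, 1, 22, 12, 25),    # projected arrival 12:25
}
print(check_eta(appt))  # {'alert': True, 'late_minutes': 25}
```

The value isn’t the math; it’s that the check runs on every job, so nobody has to remember to look.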
Where the calm really showed up: reputation management after hours
Reviews are one of the hardest things to run consistently across brands, because the triggers happen outside business hours. Response quality varies by the person responding. And “we’ll do it later” turns into “we forgot.”
So we standardized the review workflow portfolio-wide:
When a job was completed, we provided the customer with a link to an internal reviewing system.
If the review was positive, the system prompted the customer to submit a Google review.
If a Google review was posted, the job and rating were identified, with the rating and review stored on the job record.
When a review was posted, the system generated a tailored response appropriate to the customer’s sentiment.
If the review was negative, the system emailed a manager a short summary with a link.
Yes, speed increased. But consistency was even more important.
Five companies now used the same procedure:
happy customers got nudged at the right time
negative reviews got attention fast
responses sounded like a real business, not some generic template
Everything became easier because the workflow was built into the system.
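The review branching described above can be sketched as a simple router. The rating threshold, field names, and response copy here are assumptions for illustration, not the actual templates:

```python
def route_review(review: dict) -> dict:
    """Hypothetical review router: store the rating on the job record,
    draft a tone-matched response, and escalate negatives to a manager."""
    actions = {"store_on_job": review["job_id"], "rating": review["rating"]}
    if review["rating"] >= 4:
        # Positive path: a warm, brand-voiced thank-you.
        actions["response"] = "Thank you for the kind words! We appreciate you."
    else:
        # Negative path: apologize, and loop in a human fast.
        actions["response"] = "We're sorry we fell short. We'd like to make it right."
        actions["notify_manager"] = f"Low review on job {review['job_id']}"
    return actions

out = route_review({"job_id": "J-1042", "rating": 2})
```

One router, five brands: only the response copy and escalation targets change per company.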
The quiet foundation: getting the data to stop fighting the portfolio
Every aggregator eventually runs into this: you can’t compare what isn’t categorized the same way. You need usable data coming from every business.
So along with the customer experience workflows, we deployed a set of continuous data hygiene assemblies that ran in the background:
When a job was created, the correct customer type was validated based on the customer’s name.
The customer was also identified, and any missing geo-coordinates were added to that customer’s location.
When a phone number was added, it was automatically validated and its type corrected.
When a location was created, the system looked up property details and stored them as custom fields.
When a call was analyzed and a job booked, the system flagged whether the correct campaign was used.
When those small corrections happen automatically, reporting becomes less of an argument and more of a tool. Branch comparisons become fair. Marketing attribution becomes clearer. And operational Key Performance Indicators stop needing footnotes.
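As one concrete example, phone-number validation is mostly normalization: strip the formatting, check the shape, store one canonical form so branch reports compare like with like. A minimal sketch (US-only assumption, simplified validity check):

```python
import re

def normalize_phone(raw: str) -> dict:
    """Hypothetical hygiene tool: validate a US phone number and
    normalize it to E.164 so every branch stores the same format."""
    digits = re.sub(r"\D", "", raw)          # keep digits only
    if len(digits) == 11 and digits.startswith("1"):
        digits = digits[1:]                  # drop the country prefix
    valid = len(digits) == 10
    return {"valid": valid, "e164": f"+1{digits}" if valid else None}

print(normalize_phone("(555) 010-4477"))  # {'valid': True, 'e164': '+15550104477'}
```

Run on every new phone number at intake, a check like this keeps bad records from ever reaching the reports.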
Prebuilt first, then custom where needed
Not everything needs to be customized when you automate your portfolio. Most of the highest-impact workflows are usually shared.
In this case, we used proven prebuilt assemblies across all five companies to achieve the fastest results. Then we customized only for specific differences between companies, such as:
different operating hours and after-hours routing rules
different ways of handling membership
different escalation paths by region
different tone-of-voice requirements by brand
That’s the real advantage of assemblies: you can start with a consistent core model, then plug in tools according to the differences between companies.
Not a big bang. Definitely a big improvement.
This wasn’t a “big bang” rollout. It was a steady shift from reactive to repeatable operations.
Within the first 60–90 days, the aggregator reported improvements:
fewer leads disappearing after missed calls
more consistent win-back attempts after cancellations
faster escalation when customers were upset
fewer “we found out too late” dispatch surprises
reviews handled more consistently across brands
cleaner data that made portfolio reporting less painful
But the biggest change was cultural. Branch leaders weren’t being forced to follow some new corporate playbook. They were given a system that made the playbook happen automatically, with less work.
What the aggregator took away
They didn’t “implement AI.” And they stopped trying to force five teams into one rigid process.
Instead, they standardized the workflows that created revenue, protected reputations, and produced trustworthy visibility—then let their local leaders run their businesses.
That’s the path from chaos to calm in a growing portfolio.
If you’re an aggregator dealing with multiple brands, inconsistent outcomes, and a limited appetite for disruption, get in touch with us at The Graphite Lab. We can help you map the workflows that need the most help, and build them into the stack your teams already use.