Good to Great SaaS
Revenue Action Orchestration · The Diligence Gap

Your commercial diligence
has a blind spot.


Standard diligence covers the financials, the legal, the tech, and the competitive position. It doesn't assess the state of the category belief — and whether the company you're pricing is on the right side of it. In B2B SaaS, that gap is where the multiple lives or dies. Here is what it costs — and what closing it produces.

Median private SaaS multiple
3.7×
Revenue multiple across 1,325 private software transactions, 2015–2025. Inside the existing story, executing well.
Top quartile — same story, better execution
7.2×
The ceiling for a company that answers the existing category question better than its competitors. The gap from 3.7× to 7.2× is execution. The gap above 7× is something else.
Conversation changers — new story, new ceiling
15–25×
Gong peaked at ~15× ARR on a new conversation. Veeva ~20×. HubSpot ~25×. Piano repriced individual deals 12× on the same product. The gap between 7× and 20× is not execution. It is which side of the category belief the company is on — and that gap is currently undiagnosed in most diligence processes.
The proof case

The same product.
A different conversation.

I joined Piano in 2016 wary. They had a product for an industry that had struggled for twenty-five years — ever since the internet made information free. Publishers had been losing the battle that long. Many had stopped believing they could win it.

My job was to build an elite, global customer success team that could retain and grow the accounts. It became immediately clear that if we relied on each publisher to figure out, on their own, how to build a new direct revenue stream, most wouldn't manage it, and we would lose most of them. So I decided to figure it out myself and share the knowledge with all of them, so they would succeed and grow with us.

I spoke with everyone at Piano who had come from the industry. Asked everything. Eventually something became clear: publishers knew their content was largely commoditized, but they kept trying to monetize it anyway. It made no sense. So I asked a different question. If content isn't the valuable asset — what is? The answer was their loyal audience. People who visited daily, spent time with them, loved being associated with them. People who would buy something, if it was worth buying.

"Don't monetize your content. Monetize your loyal audience." It was as simple and direct as that.

I'd start by listening to them: all the trials and tribulations. They were desperate for an answer. And every time I delivered it, the response was the same. Not "what features does your product have?" or "what's the price?" Just: "How? How do we do that?" Eager and relieved.

That's when I knew we were no longer in a product conversation. We were solving their most critical business challenge — and from that moment, there was no competition. A deal in Spain that should have renewed at $80K became a four-year, $1M partnership. A deal in Ukraine closed at $750K in a market where the average contract had been $30K. The product didn't change. The conversation did.

The gap

What standard diligence covers —
and what it leaves open.

A standard diligence process validates four workstreams. Each is rigorous within its lane. None asks the question that determines the multiple.

Workstream 01
Financial Diligence
Validates ARR, NRR, churn, cohort quality, revenue recognition, and financial projections.
Covered
Workstream 02
Legal Diligence
Validates contracts, IP ownership, regulatory exposure, and change-of-control provisions.
Covered
Workstream 03
Technical Diligence
Validates product architecture, scalability, technical debt, and engineering team quality.
Covered
Workstream 04
Commercial Diligence
Validates market size and competitive position. Does not assess the state of the category belief, each player's position relative to it, or whether a conversation change is underway that will expand or compress the multiple.
Partially covered — this is the gap
The lens

Three questions that don't appear
in any standard workstream.

These questions don't replace the four standard workstreams. They complete them. Each one connects directly to valuation — ceiling, mispricing, compounding.

Q 01
"Is the category Hot or Not?"
Hot means the founding belief has been dominant long enough, and is misaligned enough with what buyers actually experience, that a new conversation can take hold. Not means someone recently changed the belief and the market is still moving toward it — the play is to outrun, not reframe. This determines what kind of move is available and which companies are interesting. Getting it wrong means pricing a reframe opportunity as a ceiling play, or mistaking a ceiling play for a reframe.
Signal: If the category leader is still telling the same story it told five years ago, look for where buyers describe the problem differently in private than vendors do in public. Those are the seams.
Q 02
"Is this company answering the old question better — or starting a new one?"
The former has a ceiling determined by the belief. It grows until the belief wears thin, then compresses — regardless of ARR growth, NRR, or competitive position. The latter is priced on the old story but valued on the new one. Standard commercial diligence identifies competitive position. It does not identify which side of this line the company is on. That distinction is where the multiple lives.
Signal: Ask the CEO what problem their best customers had before they bought. If the answer matches the category's founding story, they're answering the old question. If it doesn't, keep listening.
Q 03
"If a new conversation is available, does this company have what it takes to own it?"
Four conditions: a Hot category, a CEO with the conviction to lead the company's evolution and not flinch, a narrative precise enough that a CRO repeats it without slides, and — where present — a data asset that makes the new conversation defensible and hard to copy. Three of these can be partially assessed externally. One — conviction — can only be tested in the room. The management presentation is where you find out.
Signal: Push back on the core claim in the room. Borrowed conviction retreats to features or social proof. Real conviction gets more precise. That's the tell.
The lens applied · Revenue Action Orchestration · February 2026

Revenue Intelligence is Hot.
Nobody owns what comes next.

The founding belief — revenue predictability is a data and visibility problem — has been dominant for thirty-two years. Fewer than 20% of sales leaders rate their forecast accuracy as predictable after full category adoption. The median hasn't moved. The category has been solving the wrong layer of the problem with increasing precision.

Agentic AI is now attacking the business model of every company inside it — seat compression, interface bypass, and commoditization from below simultaneously eroding the moats incumbents built over twenty years. The incumbent response is to integrate agents into existing products. That is the wrong race: agents executing a mis-scoped belief faster still produce the wrong answer, just more efficiently. The two largest players are consumed by integration for 18–24 months. Nobody is watching the door that just opened.

The full analysis is in The Example — the category, the vendor landscape, the real problem named, and a specific read on People.ai: current ceiling, available conversation, and the questions the management presentation needs to answer.

The Example
A full Reset Read on Revenue Intelligence / RAO — the category analysis, the vendor landscape, and a company-specific read on People.ai: ceiling, available conversation, and the questions the management presentation needs to answer.
Read the Example →
Next step

I work with a small number of advisors on specific deals.

Get in touch →