Good to Great SaaS
The Thesis
Three claims. Everything else is evidence.
01
The founding belief is mis-scoped: it treats a GTM system-design problem as if it were primarily a data and visibility problem.
02
The real problem is GTM system integrity — the incentives, definitions, and behaviors upstream of the tools that determine what enters the pipeline before any product sees it.
03
Agentic AI doesn't fix a mis-scoped belief. It executes it faster and cheaper — making the mis-scoping economically non-optional in 2026.
Why now
Thirty years of flat median performance. The two largest players consumed by integration. The technical capability to solve the real problem — continuous GTM system diagnosis at scale — arriving simultaneously with maximum category distraction. The premium is unassigned. The window is open.
The Example · Revenue Intelligence / RAO · February 2026

Thirty years
of the wrong
question.


The Revenue Intelligence category has been built, funded, and validated on a belief that has never been seriously interrogated: that revenue predictability is a data and visibility problem. If sales leaders can see the pipeline clearly, completely, and in real time, they can hit the number reliably.

Four generations of tooling — from Siebel's Sales Force Automation in 1993 to Gartner's Revenue Action Orchestration rename in 2025 — have been progressively more sophisticated answers to that question. None has changed the question. Gong built a $7.25B business on it. Clari and Salesloft merged inside it. Every vendor in the December 2025 RAO Magic Quadrant celebrated it.

The problem: median forecast accuracy still sits at 70–79%, only 7% of organizations break 90%, and those numbers have not materially moved in years — despite full category adoption, despite Gartner validation, despite billions in enterprise software investment.

Agentic AI is now attacking the business model of SaaS itself. And the incumbent response — racing to integrate agents into existing products — is the wrong race. Not because agents aren't powerful. Because agents executing a mis-scoped belief faster still produce the wrong answer. Just more efficiently.

Forecast accuracy — median org
70–79%
Unchanged for years despite mass adoption of RI tooling. Benchmarks from Challenger Inc. and multiple 2024–2025 forecasting studies, after Gong, Clari, and peers reached meaningful enterprise penetration.
Orgs breaking 90% accuracy
7%
The thin tail of elite performers exists — and proves the tools can amplify a good system. The question is why the middle hasn't moved.
Category age of founding belief
32 yrs
The belief that revenue predictability is a data and visibility problem has been dominant since Siebel Systems in 1993. It has not been successfully challenged from within the category.
Software market cap wiped, Feb 2026
~$2T
Est. based on drawdowns across major software indices as agentic AI disruption repriced the sector. Atlassian reported its first enterprise seat-count decline. Salesforce revenue growth decoupled from license growth.
Section 01

Thirty Years of Answer Innovation
Without Problem Innovation

How the category got built — belief by belief — and why the chain ends not with a reset but with a vocabulary upgrade and a merger that consolidates inside the old belief at scale.

1956–1987
Generation Zero — The Physical Record
Pipeline management was a physical act. The Rolodex organized contact information on cards. Customer data lived in filing cabinets and salespeople's heads. Database marketing in the early 1980s introduced systematic digital capture. The first orthodoxy was implicit and unquestioned: better-organized information about customers produces more predictable sales. That belief became the category's foundation — and has never been replaced.
Founding belief formed
1987–1999
Generation One — Sales Force Automation
ACT! digitized the Rolodex. Tom Siebel took the logic further: sales organizations miss their numbers because managers can't see what's in the pipeline. Sales Force Automation would fix that. Siebel coined "Customer Relationship Management" in 1995 and held 45% market share by the late 1990s. The founding belief had its first massive commercial validation. The fact that visibility didn't make the number more predictable was explained as an execution issue — not a structural problem. That explanation has been running ever since.
First $1B+ validation of mis-scoped belief
1999–2012
Generation Two — Cloud CRM
Marc Benioff launched Salesforce with a genuinely disruptive move — not a reframe of the problem, but a reframe of how the solution was delivered. "No Software" eliminated the IT dependency and the upfront license cost. It was one of the most successful business model pivots in enterprise software history. The founding belief was completely unchanged. Salesforce crossed $1B in revenue in 2009. The mis-scoped belief had its second enormous commercial validation — this time made accessible to the entire market, not just enterprises that could afford Siebel implementations.
Business model shift — belief unchanged
2015–2023
Generation Three — Conversation Intelligence
By 2015, three technologies had matured: cheap cloud call recording, good-enough speech-to-text, and pattern-recognizing machine learning. Gong combined them. The founding insight was accurate: CRM data is filtered through rep incentives and therefore corrupt. Gong bypassed rep-entered data entirely, pulling signals directly from recorded interactions. "Operate on reality, not CRM fantasy." Gong reached a $7.25B peak valuation in 2021. It was a genuinely better answer to the same question. Reading the thermometer more accurately is not fixing the temperature — but it turned out to be worth $7.25 billion.
Recombination — better answer, same question
Dec 2025
Generation Four — Consolidation and Rename
Clari and Salesloft completed their merger, creating a ~$450M ARR entity positioned as a "Predictive Revenue System." Two weeks later, Gartner published the first Magic Quadrant under the label Revenue Action Orchestration. All 12 evaluated vendors welcomed the rename enthusiastically. The merger is a consolidation play inside the existing belief. The rename is Gartner drawing a larger circle around more features. The founding belief — revenue predictability as a data and visibility problem — is unchanged in both. Unanimous applause from incumbents is the fingerprint of a vocabulary upgrade, not a belief shift. The rename buys 18–24 months of renewed engagement and multiple support. It does not change where the runway ends.
Vocabulary upgrade — question unchanged
The Pattern
Thirty-two years. Four generations. The question has never changed.

Each generation built enormous value — Siebel, Salesforce, Gong are among the most successful enterprise software companies ever created. The mis-scoped belief is not commercially worthless. It is commercially finite. And the market is beginning to price the ceiling.

Section 02

The Agentic Disruption — and Why the Incumbent Response Is the Wrong Race

Agentic AI is not a feature upgrade. It is an architectural attack on the SaaS business model. Understanding the attack — and why incumbents are misreading it — is essential context for everything that follows.

SaaS was built around a human doing work inside software. Agentic AI inverts that: the agent does the work, and the software becomes a data layer the agent queries in the background. When the user disappears, the per-seat license disappears with them. This is not a product quality problem or a competitive threat from a better vendor. It is an architectural attack on the business model itself.

The market has begun pricing it. Between January and February 2026, roughly $2 trillion in market capitalization evaporated from the software sector as agentic AI disruption repriced the category. Atlassian reported enterprise seat count declining for the first time in company history. Salesforce revenue growth decoupled from license growth. The shift is from selling access to capability to delivering outcomes.

Attack Vector 01
Seat Compression
When one AI agent can accomplish the work of five users, the rationale for maintaining five licenses evaporates. Enterprise customers are reducing seat counts at renewal rather than expanding them. The per-seat model that underpinned SaaS economics for twenty years is structurally broken — not declining, broken.
Attack Vector 02
Interface Bypass
SaaS companies built moats around user interfaces — complex, sticky workflows that trained users to operate inside their systems. Agents don't need the interface. They query the API or underlying database directly. The interface moat — years of UX investment, onboarding investment, adoption investment — becomes a liability when the primary user is no longer human.
Attack Vector 03
Collapse from Below
Call recording and transcription — Gong's original technical moat — now starts at $19/user/month from Fireflies.ai, versus Gong's bundled ~$250/user/month. Commoditization by a factor of ten. The premium the market paid for that technical foundation has evaporated and has not been replaced by a new reason to pay premium pricing.
The Critical Distinction
Every incumbent is racing to integrate agents into existing products. That is the wrong race.

Not because agents aren't powerful. Because agents executing a mis-scoped belief faster still produce the wrong answer — just more efficiently, at lower cost, and with less reason to pay for a premium platform.

The question incumbents are asking: "How do we add agents to what we already sell?" The question that actually matters: "What does the existence of agents make possible that we couldn't do before?" Those are not the same question. Every vendor in this category is asking the first one. Nobody is asking the second. That gap is the opportunity.

Data as the moat — what incumbents actually have. The disruption does not mean incumbents are without assets. What companies like People.ai, Gong, and Clari hold that AI-native entrants cannot easily replicate is labeled outcome data: behavioral signals linked to deal outcomes at scale across thousands of companies and deal types. That causal signal — action to outcome — is what makes a revenue agent genuinely predictive rather than merely observational.

The strategic prescription follows: maintain the SaaS business to generate cash and accumulate data, build or acquire an AI-native agent layer trained on proprietary labeled outcome data, migrate customers from seats to outcomes. Cannibalize yourself before someone else does it with your own data. The window is 18–36 months before AI-native entrants accumulate enough customer data to close the gap.

Section 03

The Seams — Where the Mis-Scoped Belief Shows Up as Patterned Friction

Seams are not one-off misses or normal competitive friction. They are places where the founding belief is visibly failing in the field — and where the category's own explanations don't hold up under pressure.

Seam 01
The Forecast Gap That Doesn't Close
Fewer than 20% of sales leaders rate their forecast accuracy as predictable. 43% miss their revenue targets by more than 10%. The median sits at 70–79% accuracy. Only 7% break 90%. These numbers are from 2024–2025 benchmarks — after Gong, Clari, Salesloft, and peers had reached meaningful enterprise penetration in their core markets. The tools are working as designed. The problem hasn't moved. Which means the category has been solving the wrong problem with increasing precision.
Challenger Inc., Jan 2024 · Multiple 2024–2025 forecasting benchmarks
Seam 02
Stack Fragmentation as Failure Signal
It is now common to see enterprise revenue teams running Gong, Clari, Outreach, and People.ai simultaneously — four platforms each claiming to address revenue predictability, in parallel. The Clari-Salesloft merger is explicitly premised on resolving this fragmentation. But if the tools each solve a different piece of the visibility puzzle, consolidating them should solve the whole puzzle. The fact that buyers have been running multiple tools for years and the number still doesn't move is evidence that the puzzle itself is wrong — not that the pieces haven't been assembled yet.
Signal: Multiple tools · Same miss rate
Seam 03
Commoditization of the Core Moat
Gong's valuation compressed from $7.25B to approximately $4.5B while ARR grew substantially — the multiple contracted as the business grew. The market is not pricing Gong's execution. It is pricing the ceiling on the conversation Gong owns. AI-native competitors have commoditized the original technical moat by a factor of ten. The premium that justified the category's leading valuation has evaporated and has not been replaced by a new reason to pay premium pricing. This is the forward scenario for every company that stays inside the orthodoxy.
Gong peak valuation $7.25B, 2021 · AI-native pricing $15–19/user vs ~$250/user
Seam 04
The Vendor–Customer Quote Gap
Vendors talk about AI superpowers, revenue transformation, and every rep becoming the CRO of their territory. Customers, in their own words on Reddit, LinkedIn, and practitioner forums, describe staying organized, not forgetting follow-ups, and saving hours on admin. The distance between those two conversations is the tell. The founding belief produces tools that help reps be more organized and managers see their pipeline better. That is what people use them for. Vendors describe the belief they want to sell. Customers describe the product they actually use.
Gartner Peer Insights · Reddit r/sales · LinkedIn practitioner commentary
We're always explaining why we missed instead of actually fixing the way we sell.
Revenue leader
LinkedIn commentary
I call it the floating bar problem — sales silently lowers the bar for admission into the pipeline. The result is fake pipeline that creates an illusion of coverage which disappears as the quarter progresses.
Dave Kellogg
Kellblog
Our Q4 is full of fake pipe [that] leadership is forcing reps to keep in to keep their forecast up.
Anonymous sales leader
Reddit r/sales
None of them actually work unless the underlying data is accurate. If your data isn't clean, structured, and governed, your forecast is basically a vibe check.
Brad Rosen
LinkedIn
The Seam in One Paragraph
The downstream is where the evidence of the loss shows up. It is not where the loss happens.

The category has built progressively better forensic tools and called them revenue systems. Forensics does not prevent the crime. It describes it more accurately. The fact that description has become more accurate has not changed the crime rate.

Section 04

The Real Problem, Named

The founding belief is mis-scoped, not false. The distinction matters — and the success cases prove the thesis rather than contradict it.

The tools do work — for some organizations. The top 5–10% of companies that implement revenue intelligence platforms in already-disciplined GTM environments reach 90%+ forecast accuracy and see real win-rate lifts. Case studies claiming 95% forecast accuracy and 35–44% higher win rates are real and defensible.

But the median organization sits at 70–79%. Only 7% break 90%. Those numbers haven't moved. The distribution is the diagnostic. A thin tail of elite performers. A fat middle stuck at the same accuracy band. Mass tooling adoption. No structural change at the median.

The success cases are not counter-evidence to the mis-scoped thesis. They are the proof of it. When a company succeeds with Gong or Clari, the causal story is almost never "we got better dashboards." It is: they used the tool as leverage to fix something the product doesn't officially address — stage discipline, incentive alignment, process governance, rep behavior. The tool became an enforcement mechanism for a GTM system redesign the category never named, never sold, and rarely took credit for.

The Sharpest Version
The category can amplify a good GTM system. It has never proven it can fix a broken one. Most of the market has a broken one.

The structural problems — incentive design, rep behavior, process integrity, how strategy actually gets executed — remain largely untouched by everything the category has built. Better dashboards on top of a broken system produce better-looking dashboards of a broken system.

The Execution Layer — Where All Current Vendors Compete
Interventions on activity that is already happening
Every current vendor competes here. This is the layer where tooling lives, where AI gets added, where agents will be integrated.
Workflow optimization and engagement sequencing
Call coaching and conversation intelligence
Pipeline scoring and deal risk detection
Forecast generation and AI-guided predictions
Activity capture and CRM data enrichment
The Design Layer — Where the Problem Actually Lives
Structures that determine behavior before any tool sees it
This is where behavior is shaped. When this layer is wrong, no execution layer tool can fix it — they can only describe the damage more accurately.
Compensation weighted toward volume or quality
Quota set on aspiration or capacity evidence
Stage definitions that are load-bearing or aspirational
Manager incentives pointed at early funnel or late-stage heroics
Qualification gates that hold or move under pressure

When the design layer rewards coverage optics over pipeline quality — when board pressure, quota optics, and coverage ratios make it rational to enter bad deals — the execution layer inherits those deals as facts and works around them. The forecast is built on corrupted inputs. Better forecasting of corrupted inputs does not produce a more accurate forecast. It produces a more confident expression of the same wrong number.
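The arithmetic behind that claim fits in a few lines. A deliberately toy sketch — the deal amounts and probabilities below are invented for illustration, with `reported_p` standing in for the rep-entered close probability inflated by floating-bar pressure:

```python
# Hypothetical deals: each carries a true close probability and a
# rep-reported one inflated by coverage pressure. All numbers invented.
deals = [
    {"amount": 100_000, "true_p": 0.50, "reported_p": 0.70},
    {"amount": 250_000, "true_p": 0.20, "reported_p": 0.55},
    {"amount": 80_000,  "true_p": 0.65, "reported_p": 0.75},
]

def expected_revenue(deals, prob_key):
    """Probability-weighted pipeline value under a given probability field."""
    return sum(d["amount"] * d[prob_key] for d in deals)

true_number = expected_revenue(deals, "true_p")      # ≈ 152,000 — what will land
forecast = expected_revenue(deals, "reported_p")     # ≈ 267,500 — what gets reported
```

A perfectly calibrated weighting over the reported probabilities still outputs roughly 267,500 against a true expectation of roughly 152,000. No sophistication in the forecasting step closes that gap, because the gap lives in the inputs.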

This is Dave Kellogg's floating bar problem at the structural level. The bar moves not because reps are incompetent or managers are negligent. It moves because the entire organizational incentive structure actively rewards it. "Leadership is forcing reps to keep it in." That mechanism runs higher than any product layer can reach.

What agentic AI does and doesn't change here: An agent that automates data capture, surfaces pipeline insights, and triggers follow-up actions — but leaves incentive design, stage definitions, and behavioral governance untouched — simply automates the enforcement of a broken system. The instrumentation becomes cheaper and more continuous. The system it is measuring does not become healthier. Faster wrong is still wrong.

Section 05

The Opportunity — Continuous GTM System Diagnosis

What the product is. Why AI makes it possible now for the first time. Why consulting must lead — and why consulting is the wedge, not the destination.

The current category sells dashboards that show the output of a revenue system. The available opportunity is a product that shows where the system itself is failing — and why — before the output degrades.

Not: "see your pipeline more clearly." But: "your stage 3 to stage 4 conversion has dropped 11% in the last six weeks specifically in enterprise deals over $200K. The reps moving deals forward fastest are skipping technical validation. Your forecast is being systematically inflated by three reps with a consistent pattern of late-stage pulls." That is a real-time picture of where system integrity is breaking down — specific enough to act on, derived from behavioral and outcome data the product is already sitting on.

Nobody sells this.

Why AI Makes This Possible Now
Cross-organizational pattern recognition plus real-time anomaly detection — at a scale no human analyst team could match.

The inputs have existed for years. Gong has conversation data. Clari has forecast behavior data. People.ai has the full activity graph. What was not feasible was assembling those inputs into a continuous, real-time picture of systemic health across thousands of accounts simultaneously — and comparing each account's behavioral patterns against what healthy GTM systems actually look like at scale.

That is what agentic AI makes possible for the first time. Not automating the rep's workflow. Watching the health of a GTM system continuously and surfacing where integrity is breaking down before it shows up in the quarter's number.

The two-stage model. Stage one: use behavioral and outcome data across thousands of organizations to define what a healthy GTM system actually looks like — not generic best practices, but the specific combination of stage discipline, forecast behavior, incentive structure, review cadence, and rep behavioral patterns that correlate with above-median outcomes. That is the model you train, and it is what incumbents with deep labeled outcome data are uniquely positioned to build.

Stage two: apply that model continuously to each customer's real-time behavioral data to surface where integrity is breaking down. The cross-organizational pattern data defines health. Continuous application identifies specific, actionable breakdowns — before they show up in the number.
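A minimal sketch of that two-stage shape, assuming nothing about any vendor's actual architecture — the stage names, conversion rates, and z-score threshold are all invented for illustration, and a real system would draw on far richer behavioral signals than a single conversion rate:

```python
from statistics import mean, stdev

def baseline(healthy_orgs):
    """Stage one: summarize stage-conversion rates across known-healthy
    organizations into a (mean, spread) pair per stage transition."""
    stages = healthy_orgs[0].keys()
    return {
        s: (mean(org[s] for org in healthy_orgs),
            stdev(org[s] for org in healthy_orgs))
        for s in stages
    }

def diagnose(account, model, z_threshold=2.0):
    """Stage two: flag transitions where this account's current rate
    sits unusually far below the cross-organizational baseline."""
    findings = []
    for stage, rate in account.items():
        healthy_mean, spread = model[stage]
        if spread > 0 and (healthy_mean - rate) / spread > z_threshold:
            findings.append({"stage": stage, "rate": rate,
                             "healthy_mean": healthy_mean})
    return findings

# Invented data: five healthy orgs' stage-3-to-stage-4 conversion rates.
healthy = [{"s3_to_s4": r} for r in (0.62, 0.58, 0.65, 0.60, 0.63)]
model = baseline(healthy)

# An account whose conversion has slipped well below the healthy band.
alerts = diagnose({"s3_to_s4": 0.41}, model)
```

The point of the sketch is the division of labor: `baseline` is built once across organizations; `diagnose` runs continuously per account, before the breakdown shows up in the quarter's number.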

Why consulting must lead — and why it's the wedge, not the destination. The design layer problem — incentive architecture, quota structure, compensation design — is not a software feature gap. It is an organizational design problem. Software sits in the measurement layer. The real problem lives in the incentive layer. A vendor that names the real problem and then hands the buyer a forecasting platform has destroyed their credibility at the moment they built it.

This is why no current vendor has named the real problem publicly. Not because they haven't seen it. Because naming it honestly would expose the ceiling of their own product in the same breath.

The consulting entry point resolves this. The consulting group carries the credibility to walk into a CRO conversation, diagnose where the GTM system's incentive architecture is wrong, and actually fix it — not with a dashboard, with a restructured operating model delivered as an engagement, with real outcomes that accumulate as public proof. The product brings what consulting alone cannot build: scale, repeatability, and survival without the consultant permanently in the room. Consulting is the wedge and the proof engine. The long-term value is in an outcome-priced product that runs without permanent services drag. A PE-backable P&L looks like 10–20% services revenue riding on 80–90% software and outcomes, with clear attach logic as engagements convert to product seats.

The Conversation This Enables
Not "better visibility into your pipeline." But: your forecast problem is a symptom. The disease is a GTM system running on misaligned incentives, inconsistent process, and undetected behavioral drift.

That conversation lands at the CRO level, not the RevOps level. It justifies a fundamentally different price point because it is solving a business problem, not delivering a dashboard. It creates proof that is completely different — not "our platform improved your forecast accuracy by 15%" but "we identified three systemic breakdowns in your GTM, you fixed them, here is what changed in your number." And it makes agentic AI central to the value proposition rather than incidental.

Section 06

The Vendor Landscape — Who Has What, and Why They Haven't Used It

Every meaningful player assessed against three criteria: data to address GTM system integrity, institutional capacity to name the mis-scoped belief, and architectural readiness for continuous diagnosis and outcome pricing.

The central tension in this landscape is structural, not strategic. The companies with the richest data to solve the real problem are the most institutionally committed to not naming it — because doing so requires publicly invalidating the story that justified their last valuation, their current multiple, and their investors' return thesis. That is not a strategic challenge. It is a psychological and fiduciary one. The board that approved the last round on the existing story is unlikely to approve the narrative that renders it incomplete.

Player · Strategic Position · Data for GTM Integrity? · Can Name the Problem? · Reset Proximity
Gong
~$300M ARR
Category leader by Gartner execution and vision scores. Largest conversation intelligence dataset. Strongest CRO brand. Technical moat commoditized 10x. Multiple compressed 38% from peak on growing ARR.
Partial
Conversation content data is deep but one-dimensional. Lacks full activity graph and outcome-linked behavioral data across the full motion.
Structurally blocked
Internal mythology organized around "operate on reality." Naming the mis-scoped belief requires repudiating the founding move. Very high fiduciary bar.
Furthest traveled — and most locked. Well-positioned structurally, narrative locked by peak valuation story. The company most likely to be disrupted if a genuine reframe arrives.
Clari + Salesloft
~$450M ARR combined
Largest revenue technology entity by ARR. Merger completed December 2025. Leadership consumed by integration for 18–24 months. Combined dataset theoretically the richest in the category.
Potentially yes
Combined data — forecast behavior, engagement sequences, pipeline history — could surface the upstream problem. Integration must complete first.
Unavailable for 18–24 months
Leadership attention is on integration execution. The founding narrative ("Predictive Revenue System") is the false problem stated more completely.
Most capable on paper. Most consumed in practice. The company whose integration window creates the category opening. If the window closes while they're looking elsewhere, they won't get it back.
People.ai
~$63M ARR (est.)
Activity capture and behavioral transaction data across the full account motion. Gartner Visionary 2025. No funding since 2021 ($1.1B valuation set August 2021). New CEO October 2025, four months in.
Yes — strongest mid-tier
Full behavioral transaction history linking rep activity to account motion and deal outcomes. The most credible labeled outcome data foundation outside the top three.
Unconfirmed
Four months into new CEO tenure. Public framing still executes existing orthodoxy with more precision. The question is whether Jason Ambrose has a different read on what the asset is actually for.
The most interesting mid-tier position. Enough data history to build the real product. Enough distance from the top players to move without repudiating a $7B valuation story. The conviction question is the only variable that cannot be confirmed from outside the room.
Aviso AI
~$30–50M ARR (est.)
ML-native forecasting architecture. WinML time-series-aware deal scoring. Built to replace human judgment at the forecast call. AI-native from the start — no legacy conversation intelligence architecture to defend.
Directionally yes
Forecasting-focused data. Sees the upstream corruption in its own results — deals scored highly that close badly. Hasn't fully theorized why.
Closest — not there yet
Narrative still framed as "better forecasting accuracy." That is a better answer to the mis-scoped belief, not a rejection of it. But the direction is correct.
Highest reset proximity of any current player. AI-native architecture means no legacy moat to defend. The move from "better forecasting" to "GTM system integrity diagnosis" is shorter for Aviso than for anyone else in the category.
Outreach
~$200M ARR
Third RAO Leader. Engagement-led platform evolving toward full GTM. Lost sales engagement momentum to Salesloft pre-merger. New CEO from ServiceNow, four months in.
Thin
Engagement sequence data without the full behavioral outcome graph. Sees the execution layer, not the design layer.
Follower position
Has not led the category conversation. Unlikely to reframe a category it has not led.
Near-term beneficiary of Clari-Salesloft integration window. Long-term position depends entirely on whether the new CEO starts a different conversation or executes the existing one with more energy.
AI-Native Entrants
Sub-$30M ARR
Fireflies.ai, Grain, Fathom, tl;dv. Conversation intelligence at commodity pricing. No intelligence layer beyond transcription and summary.
Not yet
Accumulating raw data but lack the labeled outcome depth that makes it analytically useful for GTM system diagnosis.
No legacy constraint
No old story to protect. The category entrant most likely to start the new conversation without fiduciary barrier — but currently lacks the data depth to make it credible.
Commoditization threat, not reset candidates today. Accelerating the conditions that make the reset necessary. The entrant that builds labeled outcome depth fastest has the cleanest path to the open position.
What the Table Is Saying
The companies with the data can't make the move. The companies that can make the move don't have the data.

Except for one mid-tier position — and one AI-native architecture — where the constraint is not structural but narrative and conviction. Those are the positions worth watching in the next 18 months.

Section 07

The Unclaimed Space — Why the Room Is Still Empty

Four compounding reasons the genuine reset hasn't happened. And what the company that asks the right question about AI finds on the other side.

A genuine reset requires three simultaneous conditions: a move that names the real problem, a narrative the CRO repeats without slides, and proof that accumulates publicly and survives the champion leaving the room. None of the current vendors meet all three. The room is empty for compounding reasons.

01
The Largest Incumbents Are Distracted
Clari and Salesloft — the two entities that between them had the most capacity to respond to a genuine reframe — are now a single entity consuming leadership attention on integration for 18–24 months. Gong is defending its core against AI commoditization. Nobody is watching the door that just opened. The companies most capable of making the move are the least available to make it.
02
Thirty Years of Flat Median Performance Have Maximally Exposed the Old Belief
The founding belief has never been more visibly mis-scoped. 7% of organizations at 90%+ accuracy after full category adoption. The conditions for a new belief to take hold — buyer recognition that the existing story isn't delivering — are as ripe as they have ever been. The recognition proof for a genuine reframe is sitting in the field waiting to be claimed.
03
The Technical Capability Has Just Arrived
Continuous diagnosis of GTM system health at the signal level — synthesizing behavioral patterns, forecast behavior, deal progression anomalies, and outcome correlations across thousands of accounts simultaneously — was not feasible before large-scale agentic AI. The inputs existed. The capacity to assemble them into a real-time picture of systemic health did not. That capacity has arrived roughly simultaneously with the moment the category is most distracted and the old belief is most exposed. That timing is unusual.
04
Incumbents Are Asking the Wrong Question of AI
This is distinct from the business model disruption in Section 02. Every major vendor is asking: "How do we bolt agents onto what we already sell?" That is a product roadmap question. The question that actually matters is different: "What can agents see and fix that humans couldn't — and what does that make possible in this category for the first time?" The first question accelerates the existing story. The second question reveals the unclaimed space. Nobody in this category is asking the second question. That misreading is what keeps the room empty — and what makes the room valuable to the company that walks in.
What the Company That Asks the Right Question Finds
An uncrowded room. A mis-scoped belief maximally exposed. A technical capability that makes the real solution possible for the first time. And no coherent competitive response from any current vendor for at least 18 months.

The reset will be recognizable by one primary signal before it shows up in market data: the how question will arrive unprompted. A CRO will hear the new frame and respond not with "interesting" but with "how?" That is the diagnostic that the new belief has landed — it means the buyer privately held the diagnosis and the vendor gave it language. The company that produces that response reliably, and has a specific and credible answer ready, is not competing for position in Revenue Intelligence. It is defining the next category.

That company doesn't exist yet. That position is currently open.

Section 08

Strategic Implications — By Player Type

What to do, depending on who you are. Formatted for a time-pressed reader.

If you are a legacy incumbent — Gong, Outreach, People.ai
Your data is the moat. The cannibalize-yourself sequence is the only defensible move.
  • Your labeled outcome data — behavioral signals linked to deal outcomes at scale — is the asset AI-native entrants cannot easily replicate. That window is 18–36 months before they accumulate enough customer data to close the gap.
  • The move: maintain the SaaS business to generate cash and data; build or acquire an AI-native agent layer trained on proprietary labeled outcome data; migrate customers from seats to outcomes; change the conversation about what you sell. Not "revenue visibility platform." "A revenue agent that knows what winning looks like in your segment because it has seen it ten thousand times."
  • The failure mode to avoid: racing to integrate agents into the existing conversation. That is an acceleration toward commoditization, not a differentiation strategy.
  • The diagnostic question to ask internally: is your product roadmap asking "how do we add agents to what we sell?" or "what can agents see that we couldn't see before?" If it's the first question, the roadmap is the wrong race.
If you are the merged entity — Clari + Salesloft
The integration window is your exposure window. The combined dataset is your deferred opportunity.
  • The merger's strategic logic is coherent. The execution challenge is real and time-consuming. Integration will consume leadership attention for 18–24 months — exactly the window when a challenger with the right conversation could establish market position.
  • If a new entrant names the real problem during the integration window and begins accumulating recognition proof, then by the time your attention returns to the category narrative, a new orthodoxy will already be forming.
  • The combined dataset — forecast behavior, engagement sequences, pipeline history — is theoretically the richest in the category for diagnosing GTM system integrity. Building that capability into the product roadmap now, before integration completes, is the only way to use the asset before the window closes.
  • The board question worth asking: what is the plan if the category conversation changes during integration? "We'll respond when we're ready" is not a plan. The time to build the new conversation is during the integration, not after it.
If you are an AI-native challenger or new entrant
No legacy story to protect is your structural advantage. Use it before the incumbents finish integrating.
  • The open position: build from scratch on the right belief. No legacy conversation intelligence architecture to defend. No existing sales motion built around the mis-scoped story. No board whose return thesis is built on the existing narrative.
  • The entry sequence: consulting-led diagnostic engagement to establish CRO-level credibility and build public proof architecture; product that codifies the diagnostic into a continuous system; data accumulation through early customers that builds the labeled outcome dataset; eventual pricing on outcomes rather than seats.
  • The data gap is real but time-limited. Speed of customer accumulation determines the window. The depth of labeled outcome data the incumbents hold took years to build; the entrant that moves fastest on the right question has the cleanest path to the open position.
  • The single most important signal to watch: when does the unprompted "how?" start arriving in sales conversations? That is the confirmation that the new belief has landed. Before that, you are building. After that, you are winning.
If you are a PE firm or M&A advisor
The premium is genuinely unassigned. Here is what to test for and how to tell a real reframe from a vocabulary upgrade.
  • Discount any investment thesis that assumes forecast accuracy and win-rate improvements from "better visibility" or "AI-enhanced RAO" alone. That is the existing story at higher cost. The multiple ceiling is already visible in the Gong compression story.
  • Underwrite material upside only when there is a credible GTM system-design intervention plus tooling. The consulting entry point plus outcome-priced product is the model. Services as wedge, not destination.
  • The valuation premium available to the first genuine reframe in a $50B+ category is material and currently unpriced. The company with customers describing their problem in the vendor's language — not the category's language — commands a multiple expansion no amount of feature development inside the mis-scoped belief can produce.
  • The diagnostic questions that separate a real reframe from a vocabulary upgrade: Does leadership describe the problem in category language or in the language executives use privately? Can the CEO articulate in one sentence what the category's founding assumption got wrong? Would naming the real problem make any current vendor uncomfortable? If all incumbents applaud it, it is a vocabulary upgrade, not a reframe.
The Diagnostic Questions — For Any Asset in This Space
01
Does leadership describe the problem in category language — visibility, orchestration, intelligence — or in the language executives use privately?
Listen for: Category language means the company is selling the existing story more efficiently. Private language means someone inside the room has seen the real problem.
02
Can the CEO articulate, in one sentence without slides, what the category's founding assumption got wrong?
Listen for: A CEO who has a specific answer to this question has interrogated their own position. A CEO who deflects has not. The deflection is informative.
03
Is the company already behaving as if the new belief is true — in pricing, compensation, product roadmap — or only talking about it?
Listen for: Belief shows up in behavior before it shows up in narrative. A company whose comp plan still rewards seat expansion while its narrative talks about outcomes has not yet made the move.
04
Would naming the real problem make any current vendor uncomfortable? If all incumbents applaud it, it is a vocabulary upgrade, not a reframe.
Listen for: Unanimous applause is the fingerprint of a vocabulary upgrade. A genuine reframe exposes the ceiling of what incumbents have built. At least one of them should be uncomfortable.
05
Is the company asking "how do we add agents to what we sell?" or "what can agents see and fix that humans couldn't?"
Listen for: The first question is a product roadmap. The second question is a category move. Companies asking the first question will produce faster versions of the existing story. Companies asking the second question may produce the next one.
The Reset Read

What it is, what it costs, how it fits your process.

The Reset Read is a fixed-scope commercial diligence engagement designed to run alongside your existing workstreams. Two weeks from engagement start to final deliverables. Structured to integrate directly into the growth narrative section of the CIM and the investment memo.

See the Reset Read →