12 Dec 2025
Thought leadership
Read time: 3 Min

When Three Expert Calls Contradict Each Other, You Don't Have Research—You Have a Problem

By Mark Pacitti

TL;DR: A PE fund had three expert network calls produce contradictory information on a SaaS acquisition. We delivered triangulated intelligence from 12 fresh sources. The deal team reduced their valuation by 15% after discovering churn was back-loaded, net expansion was flattered by a few large accounts, and pricing power was weaker than pitched.

What happened:

  • Three expert calls gave conflicting churn, expansion, and pricing data

  • We designed 360° research: customer surveys across cohorts, qualitative interviews, and channel checks

  • Pattern emerged: logo churn was concentrated in mid-market after 18-24 months, net revenue retention (NRR) was 5-10 points lower than reported, and discounting was heavy on renewals

  • Valuation dropped 15% (roughly 1-1.5 ARR multiple turns) because growth quality was lower than the deck suggested

  • True cost of contradictory research: $8,000+ in fees and analyst time with no actionable answer

We completed a project for a fund evaluating a SaaS acquisition. They came to us after three expert calls through their network produced contradictory information. Two weeks later, we delivered triangulated intelligence from 12 fresh sources.

The deal team changed their valuation by 15%.

Not because we found a smoking gun. Because we found the pattern their network missed.

What Were the Initial Signals?

The buyer was running a standard SaaS playbook. Headline metrics from the CIM and management deck looked solid: strong ARR growth, low logo churn, expanding ACVs. Their expert network had delivered positive calls with ex-employees and industry observers who endorsed the story. They were underwriting at the mid-range of market ARR multiples for the segment, with upside if net retention was as strong as advertised.

Then the contradictions started.

One expert said churn was minimal. Another said mid-market customers were leaving after 18 months. A third claimed pricing power was strong. Implementation partners reported heavy discounting on renewals. The contradictions were not subtle.

The question they brought to us: are we overpaying or underpaying if we lean into this NRR story?

Key point: Strong headline metrics looked clean until expert calls started contradicting each other on churn, expansion, and pricing.

Why Do Expert Networks Miss the Pattern?

Expert network recruiters are compensated on how many experts they recruit and how many calls they set up, not on accuracy. Prospective experts exaggerate their areas of expertise to earn fees, and self-reported credentials often diverge from what their LinkedIn profiles show. The model is profitable precisely because it rewards quantity over quality.

One senior operator told us he does several calls on the same company and topic in the same week: multiple pods at the same fund, plus their direct competitors, through a mix of big networks and aggregators. The briefs are nearly identical and the titles get tweaked, but it's the same conversation sold again at full custom-match pricing.

When you get contradictory information from three expert calls, you have three anecdotes from people who might not have the full picture, filtered through a system where profit comes from volume, not accuracy.

Key point: Expert networks are incentivized for call volume, not data quality, which produces anecdotes instead of patterns.

How Do You Move From Anecdotes to Cohort Data?

We built a 360° primary research plan around three levers: churn, expansion, and pricing power.

Customer surveys: Structured B2B survey across multiple cohorts by size, vertical, and tenure to quantify logo churn, seat contraction and expansion, and reasons for change.

Qualitative interviews: Deep dives with current customers, lost customers, and key partners to understand renewal behavior, discounting, and competitive alternatives.

Channel and competitor checks: Discussions with implementation partners and former sales leaders to validate whether the pipeline and win-rate story matched what customers were saying.

Every datapoint was fact-checked for role and relevance. Responses were checked for internal consistency and against external benchmarks for similar SaaS businesses. This is what triangulation means in practice.

Triangulation validates interpretations by testing whether evidence from different sources and methods converges. The more your data converge, the more credible your results. And when data from different sources contradict each other, the research is not incoherent; it means you need to dig deeper to understand why.

Key point: Triangulation forces convergence across multiple methods and cohorts, turning contradictions into signals instead of noise.

What Pattern Did the Networks Miss?

The logo churn number in the deck was technically true but economically misleading.

Three key findings emerged:

Churn was back-loaded and concentrated. Overall annual logo churn was close to the reported figure. But cohorts of smaller and mid-market customers were churning or downgrading after 18 to 24 months at a higher rate than management disclosed.

Net expansion was flattered by a small set of large accounts. A handful of enterprise customers with significant upsell masked stagnant or shrinking spend in the long tail. True NRR adjusted for customer size and tenure was 5 to 10 points lower than the top-line figure.

Pricing power was weaker than pitched. Customers reported heavy discounting on renewal and take-it-or-leave-it competitive offers from newer SaaS entrants. Raising prices in line with the model would cost volume.

Traditional expert calls had missed this because they skewed toward friendly or curated references and ex-insiders, not a structured sample of paying customers across segments. They produced anecdotes that were never turned into proper cohort analysis.
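The gap between a blended retention number and a cohort view is easy to reproduce. Here is a minimal sketch with entirely hypothetical account figures (not the target's actual data), showing how a handful of expanding enterprise logos can hold blended NRR above 100% while the mid-market tail contracts:

```python
# Illustrative only: hypothetical account-level ARR, not the target's figures.
# Shows how a few expanding enterprise accounts can flatter blended NRR
# while the mid-market long tail quietly shrinks.

accounts = [
    # (segment, ARR at start of year, ARR from the same accounts a year later)
    ("enterprise", 2_000_000, 2_600_000),  # heavy upsell in a handful of logos
    ("enterprise", 1_500_000, 1_800_000),
    ("mid-market",   900_000,   760_000),  # downgrades/churn after months 18-24
    ("mid-market",   700_000,   580_000),
    ("smb",          400_000,   330_000),
]

def nrr(rows):
    """Net revenue retention: ending ARR of a cohort over its starting ARR."""
    start = sum(r[1] for r in rows)
    end = sum(r[2] for r in rows)
    return end / start

blended = nrr(accounts)
by_segment = {
    seg: nrr([r for r in accounts if r[0] == seg])
    for seg in {"enterprise", "mid-market", "smb"}
}

print(f"Blended NRR: {blended:.0%}")       # looks healthy on its own
for seg, value in sorted(by_segment.items()):
    print(f"  {seg}: {value:.0%}")         # the long tail tells another story
```

The blended figure looks healthy in isolation; only the segment cut reveals the concentration that the deck's top-line number never surfaced.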

Key point: Logo churn was real but concentrated in mid-market cohorts, NRR was 5-10 points lower when adjusted, and pricing power was weak.

How Did This Change the Valuation?

We translated those findings into the model with the deal team.

ARR growth: Reduced forward ARR growth assumptions by several points to reflect weaker net expansion and higher churn in non-enterprise cohorts.

NRR: Reset the sustainable net retention rate in the base case down by 5 to 10 percentage points, with a more conservative path to improvement under the value-creation plan.

Valuation multiple: Given the lower quality and durability of growth, the buyer moved from the higher end of their target ARR multiple band to the middle, reducing the headline enterprise-value-to-ARR multiple by roughly 1 to 1.5 turns.

On this deal, the valuation they were willing to pay dropped 15% versus where they had started. Still a serious bid. But one aligned with the real economics of churn, expansion, and pricing rather than the marketing version.
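The multiple arithmetic is easy to verify. As a sketch with hypothetical round numbers (the actual ARR and multiple band are not disclosed above), an entry multiple around 8x cut by roughly 1.2 turns produces a reduction in the neighborhood of the 15% described:

```python
# Illustrative arithmetic only: ARR and multiples are hypothetical round
# numbers, not the actual deal terms.

arr = 40_000_000            # run-rate ARR
multiple_initial = 8.0      # top of the buyer's target EV/ARR band
multiple_revised = 6.8      # mid-band after a ~1.2-turn reduction

ev_initial = arr * multiple_initial
ev_revised = arr * multiple_revised
reduction = 1 - ev_revised / ev_initial

print(f"Initial EV: ${ev_initial:,.0f}")
print(f"Revised EV: ${ev_revised:,.0f}")
print(f"Reduction:  {reduction:.0%}")   # a 1.2-turn cut on an 8x entry is 15%
```

The percentage impact of a fixed number of turns depends on the starting multiple, which is why a 1 to 1.5 turn move translates to roughly 15% at multiples in this range.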

SaaS valuations are tied to revenue growth, revenue predictability, margins, retention quality, and operational discipline. Unsupported normalization adjustments or add-backs signal risk to buyers. Risk slows the process, increases scrutiny, and compresses valuation. Clear documentation in due diligence helps stakeholders understand recommended valuation adjustments, proposed deal structures, or termination of negotiations if serious problems emerge.

Key point: The findings reduced the valuation by 15% (1-1.5 ARR multiple turns) because growth quality was lower than the deck suggested.

What Is the True Cost of Contradictory Information?

The true cost of research is vendor fee plus fully loaded analyst hours plus opportunity cost of what the analyst is not doing instead. Funds see the first line. They miss the rest.

A typical buy-side analyst's all-in cost is $200k to $400k per year. At roughly 2,000 working hours per year, that is $100 to $200 per hour fully loaded. On a simple expert network project with 5 calls, an analyst spends 4.5 hours per call defining the brief, scheduling, sitting on the call, and writing up notes. For 5 calls, that is 22.5 hours of analyst time. At $100 to $200 per hour fully loaded, analyst time on one $6k project is $2,250 to $4,500.

Add vendor fees of $6,000. Total direct cost: $8,250 to $10,500.

If 40% of calls are useless or off-target, only 3 of those 5 calls move the thesis.

Effective cost per useful conversation: $2,750 to $3,500.
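The arithmetic above can be reproduced in a few lines, using the article's own figures:

```python
# Cost per useful expert call, using the figures cited in the text.

analyst_rate = (100, 200)   # fully loaded $/hour ($200k-$400k over ~2,000 hours)
hours_per_call = 4.5        # brief, scheduling, the call itself, write-up
calls = 5
vendor_fee = 6_000
useful_share = 0.6          # 40% of calls are useless or off-target

analyst_hours = hours_per_call * calls                        # 22.5 hours
analyst_cost = tuple(r * analyst_hours for r in analyst_rate)  # $2,250-$4,500
total_cost = tuple(c + vendor_fee for c in analyst_cost)       # $8,250-$10,500
useful_calls = calls * useful_share                            # 3 calls
per_useful = tuple(t / useful_calls for t in total_cost)       # $2,750-$3,500

print(f"Analyst time:    ${analyst_cost[0]:,.0f}-${analyst_cost[1]:,.0f}")
print(f"Total direct:    ${total_cost[0]:,.0f}-${total_cost[1]:,.0f}")
print(f"Per useful call: ${per_useful[0]:,.0f}-${per_useful[1]:,.0f}")
```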

This is before you count the cognitive load and context-switching that keep the analyst from higher-leverage work.

When you get contradictory information from three expert calls, you burned $8,000+ and still have no answer. You have three competing narratives and no way to adjudicate between them.

Key point: True cost per useful expert call is $2,750-$3,500 when you count analyst time, and contradictory calls leave you with no actionable answer.

What Does Triangulated Intelligence Look Like?

Triangulated intelligence is not about doing more calls. It's about designing research where data forces convergence or surfaces meaningful divergence.

You need multiple methods: quantitative surveys for patterns, qualitative interviews for context, and channel checks for validation. You need multiple cohorts: current customers, lost customers, partners, and competitors, segmented by size, vertical, and tenure. You need fresh sourcing: experts who have not been recycled through the same databases your competitors are using.

Expert networks profit by recruiting members once and then enrolling them in as many studies as possible, spreading the cost of recruitment across multiple paying clients. Panel respondents receive dozens of surveys each week with specific qualifiers, and candidates are fed an almost complete list of those qualifiers, enabling them to game the system.

When you triangulate properly, contradictions become signals, not noise. If enterprise customers love the product but mid-market customers are churning after 18 months, this is not contradictory information. This is a segmentation issue affecting valuation. If management says pricing power is strong but customers report heavy discounting, this is not a research failure. This is a red flag about execution or competitive pressure.

Key point: Triangulation requires multiple methods, multiple cohorts, and fresh sourcing to turn contradictions into actionable signals.

How Are PE Teams Shifting From Access to Intelligence?

The expert network industry surpassed $2.5 billion in 2024, growing 9% after a few slower years. The industry has seen 16% compound annual growth over the last decade, with more than 120 firms operating in the sector. The growth is built on selling access, not intelligence.

The smartest PE deal teams have quietly rebuilt their research playbook around a different model. With median hold periods stretching to roughly 5.8 to 7 years in recent vintages, leading firms have moved from a single big report at entry to continuous, decision-linked research across the hold. They treat research as an operating system, not a one-off report. They run standing primary research programs tracking customer behavior, pricing power, competitive share, and NPS over time. They see early when an add-on is working, when churn is creeping, or when a new competitor is biting into a niche.

Instead of one huge commercial DD followed by gut instinct, they run lighter but more frequent pulses tied to value-creation milestones and board calendars. Research has moved from "is this a good company to buy?" to "are we closing the gap to the value-creation case, quarter by quarter?"

Key point: Leading PE firms treat research as an operating system, running continuous pulses tied to value-creation milestones instead of one-off reports.

What Does This Mean for Your Next Deal?

Deals fail when information is difficult to verify. Buyers slow down, expand their diligence scope, and reassess their conviction. If you're underwriting a deal based on three expert calls that contradict each other, you're collecting opinions and hoping one of them is right.

The question is not whether you need primary research. The question is whether you're getting finished intelligence or paying for access.

Are you designing research where data forces convergence across multiple methods and cohorts, or are you stacking anecdotes and calling it validation? Are you working with fresh, correctly profiled experts who speak to your specific question, or are you getting recycled names from the same database your competitors used? Are you receiving verified, decision-ready outputs going straight into your IC memo, or are you doing all the real research work yourself after the call ends?

When the deal team changed their valuation by 15%, it was not because we told them something unexpected. It was because we gave them the data to act on what they suspected with confidence.

Key point: The difference between access and intelligence is whether you're stacking anecdotes or forcing data convergence across multiple methods and cohorts.

Common Questions About Primary Research in M&A

Why do expert network calls produce contradictory information?

Expert networks are compensated for call volume, not accuracy. They recruit experts once and enroll them in multiple studies. Experts exaggerate expertise to earn fees. The model profits from quantity over quality, producing anecdotes instead of structured cohort data.

What is triangulated intelligence in due diligence?

Triangulated intelligence validates findings through convergence of evidence from multiple sources and methods. You design research with quantitative surveys for patterns, qualitative interviews for context, and channel checks for validation across multiple cohorts segmented by size, vertical, and tenure.

How much does contradictory expert research cost?

A typical 5-call expert network project costs $6,000 in vendor fees plus $2,250 to $4,500 in fully loaded analyst time (22.5 hours at $100 to $200 per hour). Total direct cost is $8,250 to $10,500. If 40% of calls are useless, effective cost per useful call is $2,750 to $3,500.

How do you design research from anecdotes to cohort data?

You need multiple methods (surveys, interviews, channel checks), multiple cohorts (current customers, lost customers, partners segmented by size and tenure), and fresh sourcing (experts not recycled through databases). Every datapoint gets fact-checked for role and relevance against external benchmarks.

What red flags emerge when churn data contradicts between sources?

Contradictory churn data often signals segmentation issues. Overall churn looks low but specific cohorts (mid-market, small customers) churn at higher rates after 18 to 24 months. NRR gets flattered by a few large enterprise accounts masking stagnant or shrinking spend in the long tail.

How do valuation multiples change when NRR is lower than reported?

Lower quality and durability of growth compresses ARR multiples. If true NRR adjusted for customer size and tenure is 5 to 10 points lower than reported, buyers move from the higher end of their target ARR multiple band to the middle, reducing the enterprise-value-to-ARR multiple by roughly 1 to 1.5 turns.

Why do PE firms treat research as an operating system instead of one-off reports?

With median hold periods stretching to 5.8 to 7 years, leading PE firms run continuous primary research programs tracking customer behavior, pricing power, and competitive share over time. They run lighter but more frequent pulses tied to value-creation milestones and board calendars instead of one huge commercial DD at entry.

What is the difference between access and intelligence in primary research?

Access means paying for introductions to experts, then doing all the research work yourself (vetting, scheduling, interviewing, note-taking, analysis). Intelligence means receiving verified, decision-ready outputs where data has been triangulated across methods and cohorts, fact-checked, and structured to go straight into IC memos.

Key Takeaways

  • Contradictory expert calls are a symptom of volume-based incentives, not accuracy-based outcomes

  • True cost per useful expert call is $2,750 to $3,500 when you count fully loaded analyst time and filter out useless conversations

  • Triangulation forces data convergence across multiple methods (surveys, interviews, channel checks) and cohorts (segmented by size, vertical, tenure)

  • Logo churn numbers are often technically true but economically misleading when concentrated in specific cohorts or back-loaded after 18 to 24 months

  • NRR adjusted for customer size and tenure is often 5 to 10 points lower than top-line figures when a few large accounts mask shrinking spend in the long tail

  • Lower growth quality compresses ARR multiples by roughly 1 to 1.5 turns, translating to 15% valuation reductions on deals

  • Leading PE firms treat research as an operating system with continuous pulses tied to value-creation milestones, not one-off reports at entry

Contact details

Email for more information

info@woozleresearch.com
