Primary Research for Commercial Due Diligence: A Practical Guide for Consulting Teams
How management consulting teams can design and execute better primary research — expert interviews, surveys, and channel checks — to deliver faster, higher-quality commercial due diligence for PE clients.
Commercial due diligence lives or dies on primary research. The secondary data builds the baseline. The management presentations tell the story the target wants you to hear. But the expert calls, customer interviews, surveys, and channel checks — that's where you find out what's actually true.
If you're an engagement manager or senior consultant running CDD for private equity clients, you already know this. You also know that executing primary research well, under a 3-week timeline while simultaneously building the deck and managing the client, is one of the hardest parts of the job.
This guide is about how to do that better. Not a primer on what CDD is — you know what it is. This is a practical framework for designing, executing, and synthesising primary research that actually tests the investment thesis, delivered under the constraints you actually face.
Why Primary Research Quality Matters More Than Ever
The deal environment has shifted in ways that raise the bar on CDD quality significantly.
Deal activity is surging, but with more scrutiny. Global M&A deal value reached an estimated $4.9 trillion in 2025 — up roughly 37% from 2024. Private equity transaction value hit nearly $2 trillion. But this growth has been skewed toward larger, higher-conviction transactions. Sponsors are deploying capital more selectively, which means every CDD needs to be more rigorous, not less.
Dry powder is at historic levels. Close to $1.1 trillion sits undeployed, reflecting the gap between robust deal activity and subdued fundraising. PE firms are under pressure to put capital to work — but the cost of getting it wrong is enormous. Inadequate due diligence can destroy significant value irreversibly, and one poor choice can drag down the performance of an entire portfolio for years.
Diligence windows keep shrinking. CDD work typically runs 2–8 weeks, often compressed toward the shorter end on competitive processes. Meanwhile, deal complexity is increasing — scope deals (entering new markets or adjacencies) now make up 60% of deals valued over $1 billion. Scope deals demand deeper primary research into unfamiliar markets, customers, and competitive dynamics.
The median PE holding period has stretched to 3.4 years — the longest in nearly a decade. Longer holds mean the CDD needs to get the forward-looking assessment right, not just the current snapshot. That's a primary research problem, not a desk research problem.
The bottom line: your PE clients are writing bigger cheques, holding assets longer, and expecting CDD that goes well beyond a market overview. Primary research is the differentiating element. Without it, you're delivering glorified desk research.
The Primary Research Toolkit for CDD
Most consulting teams default to expert interviews as their primary research method. Expert calls are important — but they're one tool in a toolkit that should include three distinct methods, each serving a different purpose.
Expert Interviews
What they're for: Depth, nuance, and hypothesis-testing. Expert interviews give you qualitative insight from people who have direct experience with the target's market, customers, competitors, or technology.
When to use them: Every CDD engagement. Expert calls are the backbone of primary research for diligence. They're particularly valuable for understanding competitive dynamics, validating market trends, assessing management credibility, and pressure-testing assumptions about growth potential.
Common profiles:
- Former executives at the target company
- Current/former executives at direct competitors
- Key customers of the target (especially for Voice of Customer work)
- Industry analysts and sector specialists
- Channel partners, distributors, or suppliers
- Regulatory or technical experts for specialised sectors
Typical volume: 15–30 interviews for a mid-market CDD, though the right number depends entirely on the scope and complexity of the deal. More on this below.
B2B Surveys
What they're for: Quantitative validation and statistical backing for qualitative findings. Surveys give you numbers — market share estimates, NPS benchmarks, switching intent data, pricing sensitivity — that expert calls alone can't provide with statistical rigour.
When to use them: Surveys are the most underused primary research method in CDD. They're particularly valuable when you need to:
- Quantify customer satisfaction or switching intent across a large customer base
- Validate market sizing assumptions with ground-level data
- Benchmark pricing against competitors
- Assess brand awareness or consideration in a target market
- Gather data from a larger sample than expert calls allow
Why teams underuse them: Survey design and fielding take time. Most consulting teams don't have in-house survey expertise, and the turnaround time for a well-designed B2B survey can feel incompatible with a 3-week CDD. This is a solvable problem — particularly when you work with a research provider that handles end-to-end survey execution — but it requires planning the survey early in the engagement.
Channel Checks
What they're for: Ground-truth verification of the target's market position, pricing, product quality, and reputation. Channel checks involve systematic outreach to distributors, resellers, customers, or suppliers.
When to use them: Channel checks are especially valuable for consumer-facing businesses, distribution-dependent sectors (industrials, building products, food & beverage), and any deal where the target's go-to-market claims need independent verification. They're also useful for software companies — checking with implementation partners, resellers, and end-users.
What they reveal: Channel checks often surface insights that expert calls miss — shelf space dynamics, pricing compliance, competitive substitution trends, service quality perceptions, and inventory movements.
The Right Mix
The optimal blend depends on the deal. A few rules of thumb:
| Deal Type | Expert Interviews | Surveys | Channel Checks |
|---|---|---|---|
| B2B software platform | Heavy (competitors, customers, analysts) | Moderate (customer satisfaction, NPS) | Light (implementation partners) |
| Healthcare services roll-up | Heavy (operators, referral sources, payors) | Moderate (physician/patient satisfaction) | Light to moderate |
| Industrial distribution | Moderate | Moderate (customer base survey) | Heavy (distributor and end-user checks) |
| Consumer brand / retail | Moderate | Heavy (brand awareness, purchase intent) | Heavy (retail checks, pricing audits) |
| Niche B2B services | Heavy (small expert universe, each call matters) | Light (small addressable base) | Moderate |
Designing a Primary Research Programme for CDD
This is where most teams either get it right or waste enormous amounts of time. The difference between a CDD that produces a clear, defensible investment recommendation and one that produces a data dump almost always comes down to research design.
Step 1: Start With the Investment Thesis, Not the Market
The single most important principle: your primary research programme should be designed to test the PE client's investment thesis, not to produce a general market overview.
Before you schedule a single expert call, you need clarity on:
- What is the thesis? Why does the PE firm believe this target is attractive? What has to be true for the deal to work?
- What are the key assumptions? Market growth rate? Customer retention? Competitive moat? Pricing power? Cross-sell potential?
- What would kill the deal? What findings would cause the PE firm to walk away?
- What does the management team claim? And which of those claims are you most sceptical about?
Every research activity — every expert call, every survey question, every channel check — should map back to one of these questions. If it doesn't, it's not adding value.
Step 2: Define Your Research Questions
Translate the investment thesis into 5–8 specific, testable research questions. These become the organising framework for your entire primary research programme.
Example: For a PE acquisition of a mid-market vertical SaaS company, the research questions might be:
- Is the target's core market growing at the rate management claims (15% CAGR)?
- How sticky is the target's customer base? What's the real churn rate and why do customers leave?
- How does the target's product compare to its top 3 competitors on functionality, pricing, and customer satisfaction?
- Is the target's pricing power sustainable, or is competitive pressure compressing margins?
- What's the realistic TAM for the target's expansion into [adjacent market]?
- How do customers and partners perceive the target's technology and product roadmap?
- What regulatory or compliance changes could impact the target's market over the next 3–5 years?
Step 3: Build Expert Profiles Mapped to Research Questions
Don't recruit generic "industry experts." Define the specific profiles you need based on the research questions you're trying to answer.
For each research question, ask: Who has direct, first-hand knowledge that would allow them to answer this?
Be specific about:
- Role: C-level? VP-level? Functional lead?
- Company type: Competitor? Customer? Supplier? Regulator?
- Recency: Current role, or within the last 2 years?
- Geography: Must they have experience in the target's specific markets?
- Sector depth: Do they need sub-segment expertise (e.g., not just "healthcare IT" but "revenue cycle management for mid-size hospital systems")?
Good expert profiling is the highest-leverage activity in CDD primary research. A well-defined expert profile dramatically improves the hit rate and reduces wasted calls.
Step 4: Write Discussion Guides That Test Hypotheses
The discussion guide is the most underinvested element of CDD primary research. Too many teams use generic templates or loosely structured topic lists. The result: 45-minute conversations that produce broad, unfocused transcripts and very little usable insight.
Principles for effective discussion guides:
- Organise around hypotheses, not topics. Instead of a section on "market trends," structure the guide to test the specific claim that "the target's core market is growing at 15% CAGR." Frame questions that would confirm or disconfirm that claim.
- Use probing sequences. Don't ask a single question and move on. Design 2–3 follow-up probes for each key area. The first answer is often surface-level; the insight comes in the follow-up.
- Anchor in specifics. "What do you think of Company X's competitive position?" is a weak question. "If you were evaluating Company X versus [Competitor A] and [Competitor B] for [specific use case], which would you choose and why?" is much stronger.
- Include calibration questions. Ask experts to rate things on scales, estimate percentages, or rank competitors. This gives you semi-quantitative data you can aggregate across multiple interviews.
- Tailor guides by expert type. A competitor executive, a customer, and a former employee of the target should all get different guides, even if they're addressing the same research questions.
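Calibration questions only pay off if you aggregate the answers across interviews. Here is a minimal sketch of that aggregation in Python; the questions, scale, and scores are invented purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical calibration responses: each expert rates a key risk on a
# 1-5 scale (5 = critical). Question names and scores are illustrative,
# not drawn from any real engagement.
ratings = {
    "competitive_threat": [4, 5, 3, 4, 4, 2, 5, 4],
    "pricing_pressure":   [3, 3, 4, 2, 3, 3, 4, 3],
}

for question, scores in ratings.items():
    concerned = sum(1 for s in scores if s >= 4)  # experts rating 4 or 5
    print(f"{question}: mean {mean(scores):.1f}, "
          f"spread {stdev(scores):.1f}, "
          f"{concerned}/{len(scores)} rate it significant or critical")
```

The spread matters as much as the mean: a question where experts cluster tightly is a finding, while one where they scatter is a prompt for follow-up probing.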
Step 5: Plan the Survey Early
If you're going to include a survey (and in most CDDs, you should at least consider it), the design needs to start in the first few days. Survey design, programming, sample sourcing, fielding, and analysis take time — often 10–14 days from start to finish for a B2B survey. On a 3-week CDD, that means starting survey work almost immediately.
Surveys work best when they:
- Quantify something that expert calls have revealed qualitatively (e.g., "experts say customer satisfaction is declining — let's measure it")
- Reach a broader sample than you can cover through expert calls alone
- Provide data that can be presented as statistically grounded evidence in the final deck
Common Pitfalls — and How to Avoid Them
1. Treating CDD Primary Research as a Box-Ticking Exercise
Too many teams treat due diligence as a box-ticking exercise. They schedule 25 expert calls because that's what the project plan says, not because they've designed a research programme that requires exactly that many. The result is a pile of transcripts with no clear thread connecting them to the investment thesis.
Fix: Every call should have a clear purpose mapped to a specific research question. If you can't articulate what you're hoping to learn from a particular expert that you haven't already learned, don't schedule the call.
2. More Calls ≠ Better Diligence
This is related but distinct. Volume without structure produces noise. Fifteen well-designed expert interviews with carefully profiled experts, supported by a targeted survey, will almost always produce better insight than thirty loosely structured calls with whoever was available.
Fix: Invest more time in expert profiling and discussion guide design. Invest less time in maximising call volume.
3. Generic Discussion Guides
A reusable template is fine as a starting point, but every CDD should have a discussion guide tailored to the specific deal's hypotheses and questions. Using last deal's guide without significant modification is one of the fastest ways to produce low-value research.
Fix: Build discussion guides from the research questions, not from a template. Review and iterate them after the first 3–4 interviews as you learn what's working and what's not.
4. Expert Sourcing Bottlenecks in Niche Verticals
Finding relevant experts for niche sectors or geographies is one of the most common failure points. Junior team members spend hours searching LinkedIn and hitting dead ends, burning time that could be spent on analysis.
Fix: This is where external research partners add the most value. Whether you use an expert network or a done-for-you research provider, outsourcing expert recruitment for hard-to-reach profiles is almost always more efficient than doing it internally.
5. Failing to Triangulate
The most valuable CDD findings usually emerge from contradictions — where what management says, what customers say, what competitors say, and what the data shows don't line up. Teams that treat each expert call as an isolated data point miss these signals.
Fix: Build triangulation into your synthesis process. After every batch of interviews, cross-reference findings against each other and against management's claims. Contradictions aren't problems — they're where the real insights live.
6. Data Overload Without Actionable Synthesis
Teams gather massive amounts of data — transcripts, survey results, market reports, management materials — but struggle to distil it into a clear investment recommendation. The final deck becomes a data dump rather than a decision support tool.
Fix: Synthesis should happen continuously, not at the end. After each interview day, update your running view on each research question: what's confirmed, what's challenged, what's still unresolved. By the time you sit down to build the final deck, you should already know your recommendation.
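One way to keep that running view honest is to track it in a simple structure rather than in people's heads. A minimal sketch, with hypothetical research questions, status labels, and evidence notes:

```python
from dataclasses import dataclass, field

# A bare-bones running-synthesis tracker: one record per research
# question, updated after each interview batch. All content below is
# illustrative, not from a real engagement.
@dataclass
class QuestionStatus:
    question: str
    status: str = "unresolved"  # "confirmed" | "challenged" | "unresolved"
    evidence: list = field(default_factory=list)

    def update(self, status: str, note: str) -> None:
        self.status = status
        self.evidence.append(note)

tracker = {
    "Q1": QuestionStatus("Is the core market growing at ~15% CAGR?"),
    "Q2": QuestionStatus("How sticky is the customer base?"),
}

# After the first interview batch:
tracker["Q1"].update("challenged", "3 of 4 experts estimate 8-10% growth, not 15%")
tracker["Q2"].update("confirmed", "Customers report 5+ year tenures, no active RFPs")

unresolved = [q.question for q in tracker.values() if q.status == "unresolved"]
```

Whether this lives in a script, a spreadsheet, or a shared doc matters far less than the discipline of updating it after every batch.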
Expert Networks vs. Done-for-You Research Providers
This is a structural choice that most consulting teams don't think about carefully enough, largely because expert networks have been the default for so long.
The Self-Serve Expert Network Model
Traditional expert networks — GLG, AlphaSights, Third Bridge, Guidepoint, Dialectica, and others — connect you with subject-matter experts for paid consultations. The expert network industry reached approximately $3 billion in 2025, growing at around 12% annually. Hourly rates typically range from $300 to $1,200, with volume discounts for larger engagements.
In this model, you do the work:
- You define the expert profiles
- You write the discussion guides
- You schedule and conduct the calls
- You take notes or arrange transcription
- You synthesise the findings
This model works well for hedge fund analysts who have flexible timelines and want direct, unfiltered access to experts. It works less well for consulting teams running a 3-week CDD, where the same junior consultants scheduling calls and conducting interviews are also building the financial model, writing the deck, and managing the client relationship.
The Done-for-You Research Model
Done-for-you research providers handle the entire primary research process end-to-end. You brief them on what you need to know — the research questions, the hypotheses to test, the expert profiles you need — and they deliver finished, synthesised research outputs.
In this model, the provider does the work:
- They design discussion guides based on your research questions
- They recruit and screen experts
- They conduct the interviews
- They synthesise findings into actionable outputs
- They can also design, field, and analyse surveys
This model is built for consulting teams and deal teams who need research done fast and don't have the bandwidth to execute it themselves. It separates the intellectual work (what questions to ask, what the answers mean, how to frame the recommendation) from the operational work (finding experts, scheduling calls, running interviews, synthesising transcripts).
When to Use Which
| Factor | Self-Serve Expert Network | Done-for-You Provider |
|---|---|---|
| Team bandwidth | You have dedicated resource for research execution | Team is stretched across multiple workstreams |
| Timeline | 4+ weeks; flexible scheduling | 2–4 weeks; tight deadlines |
| Research complexity | Straightforward expert profiling | Niche verticals; multi-geography; complex profiling |
| Output needed | Raw expert access; you handle synthesis | Synthesised findings; ready to integrate into deliverables |
| Engagement frequency | Occasional research needs | Running multiple CDDs per quarter |
| Survey component | Not typically included | Full survey design, fielding, and analysis |
Many consulting teams use both — expert networks for straightforward calls where they want direct access, and done-for-you providers for the heavy-lift research programmes where they need speed, scale, and synthesised outputs.
Using AI to Accelerate Synthesis
AI is reshaping the CDD research workflow, but it's important to be precise about where it adds value and where it doesn't.
Where AI Helps
- Transcript synthesis: Instead of reading 20+ transcripts sequentially, AI can extract key themes, identify consensus and dissent, and surface relevant quotes across an entire corpus of interviews. This frees analysts from mechanical review and reallocates time to higher-value analysis.
- Pattern detection: AI can identify patterns across large bodies of qualitative data that humans might miss — recurring concerns, competitive mentions, pricing references.
- Contradiction surfacing: AI can flag where different experts disagree, or where expert testimony contradicts management's claims. These contradictions are often the most important findings.
- Secondary research acceleration: AI can rapidly process market reports, regulatory filings, and news to build the secondary research baseline faster.
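As a toy illustration of the contradiction-surfacing idea: once each transcript has been reduced to a stance per claim (by an analyst or a model), flagging disagreement with management is mechanical. Every claim, expert label, and stance below is invented:

```python
from collections import Counter

# Hypothetical reduced transcripts: one stance per claim per expert.
management_claims = {"pricing_power": "sustainable", "churn": "under 5%"}

expert_stances = [
    {"expert": "Former VP Sales, competitor", "pricing_power": "eroding"},
    {"expert": "Customer CIO", "pricing_power": "sustainable", "churn": "under 5%"},
    {"expert": "Industry analyst", "pricing_power": "eroding", "churn": "8-10%"},
]

for claim, mgmt_view in management_claims.items():
    views = Counter(s[claim] for s in expert_stances if claim in s)
    dissent = sum(n for view, n in views.items() if view != mgmt_view)
    if dissent:
        print(f"FLAG {claim}: management says '{mgmt_view}', "
              f"{dissent} of {sum(views.values())} experts disagree")
```

The hard part, which this sketch deliberately skips, is the reduction step itself: getting from a 45-minute transcript to a defensible stance per claim is where human (or carefully supervised AI) judgment sits.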
Where AI Falls Short
- It doesn't replace expert judgment. AI can tell you that 7 out of 15 experts expressed concern about the target's pricing sustainability. It can't tell you whether that concern is material enough to change the deal recommendation.
- It can produce errors. Deloitte's 2025 State of Generative AI report found that 35% of organisations hesitate to adopt GenAI because of error rates. In a CDD context, a misinterpreted transcript or fabricated data point can be genuinely damaging.
- It doesn't design research. AI can't determine which experts to recruit, what hypotheses to test, or how to structure a discussion guide. Research design remains a human, intellectually demanding task.
- It doesn't make deal recommendations. The final go/no-go recommendation requires judgment, context, and an understanding of the PE client's investment criteria that AI cannot provide.
The right mental model: AI is a research accelerator, not a research replacement. It restructures how information is processed inside the diligence workflow — and firms using it well report due diligence time reductions of up to 70%. But the strategic and interpretive layers remain human.
From Data to Investment Recommendation
The final — and arguably most important — stage of CDD primary research is synthesis: translating a body of evidence into a clear, defensible investment recommendation.
The Triangulation Framework
For each key research question, cross-reference four sources:
- What management says — The target's own claims about market position, growth trajectory, customer satisfaction, and competitive advantages.
- What customers say — Direct customer feedback on satisfaction, switching intent, competitive alternatives, and willingness to pay.
- What competitors and industry experts say — External perspectives on market dynamics, the target's reputation, and competitive threats.
- What the data shows — Survey results, financial data, secondary research, and market analytics.
Where all four sources align, you have high-confidence findings. Where they diverge, you have the most important areas for deeper investigation — or the red flags that might change the recommendation.
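The mechanics of that alignment check are simple once each source has been reduced to a headline signal per question. A deliberately minimal sketch, with invented questions and signals:

```python
# Triangulation sketch: for each research question, record the headline
# signal from each of the four sources, then flag divergence. All
# questions and signals below are illustrative placeholders.
findings = {
    "pricing_power": {
        "management":  "strong",
        "customers":   "weak",
        "competitors": "weak",
        "data":        "weak",
    },
    "market_growth": {
        "management":  "15% CAGR",
        "customers":   "15% CAGR",
        "competitors": "15% CAGR",
        "data":        "15% CAGR",
    },
}

for question, sources in findings.items():
    if len(set(sources.values())) == 1:
        print(f"{question}: high-confidence (all four sources align)")
    else:
        print(f"{question}: DIVERGENCE -> investigate further")
```

In practice the signals are rarely this clean, but forcing each source into a one-line position per question is itself a useful synthesis discipline.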
Structuring the Output
The best CDD teams deliver synthesis, not data dumps. That means:
- Leading with the recommendation. Go/no-go, with conditions and caveats clearly stated.
- Organising findings around the investment thesis, not around the research methods. The PE client doesn't care how many expert calls you ran — they care whether the thesis holds up.
- Quantifying where possible. "Most experts were concerned about competitive pressure" is weak. "11 of 15 experts rated the competitive threat as significant or critical, and the customer survey showed a 23% consideration rate for the primary competitor" is actionable.
- Highlighting risks and uncertainties explicitly. Don't bury the bad news. PE clients value CDD teams that challenge the management plan with independent data, not teams that tell them what they want to hear.
- Linking findings to value creation. The CDD shouldn't just assess the current state — it should inform the 100-day plan and post-acquisition priorities.
What the Best CDD Teams Do Differently
Having worked with dozens of consulting teams on their primary research programmes, we see clear patterns. The teams that consistently produce the best CDD work share several characteristics:
- They start with hypotheses, not data collection. Every research activity is designed to confirm or disconfirm a specific assumption about the target.
- They invest disproportionately in research design. Expert profiling, discussion guide construction, and survey design get serious attention — not just the first 30 minutes of the project.
- They combine methods deliberately. Expert calls for depth. Surveys for breadth. Channel checks for ground truth. Each method has a defined role in the research programme.
- They separate research execution from research interpretation. The intellectually demanding work — defining what to ask, interpreting what it means, building the recommendation — stays with senior team members. The operationally intensive work — finding experts, scheduling, conducting calls, synthesising transcripts — is outsourced.
- They triangulate aggressively. They don't just report what they heard. They cross-reference every finding against multiple sources and pay special attention to contradictions.
- They challenge management claims with independent evidence. The PE client is paying for an independent assessment, not a validation exercise.
- They synthesise continuously, not at the end. By the time they build the final deck, the recommendation is already clear because they've been updating their view after every research batch.
- They use AI where it saves time, not where it replaces thinking. Transcript synthesis, pattern detection, and secondary research acceleration — yes. Research design, expert selection, and deal recommendations — no.
- They link CDD findings directly to value creation. The best CDDs don't just assess risk — they inform the post-acquisition playbook.
- They build repeatable processes. Expert profiling templates, discussion guide frameworks, synthesis protocols — the best teams systematise their approach so quality is consistent across engagement teams.
Getting Started
If you're running CDD engagements and looking to improve the quality and speed of your primary research — or simply looking to offload the operational burden of research execution so your team can focus on analysis and synthesis — we should talk.
Woozle Research is a done-for-you primary research provider built for investment professionals and the consulting teams that support them. You brief us on the deal, the thesis, and the questions you need answered. We design the research programme, recruit the experts, conduct the interviews, field the surveys, and deliver synthesised, actionable outputs — ready to integrate into your CDD deliverables.
No scheduling calls. No writing discussion guides from scratch. No synthesising 20 transcripts at midnight before the client meeting.
Get in touch to discuss how we can support your next engagement.