The Question Framework That Moves Conviction (Before You Ever Call an Expert)
TL;DR: Research fails at the question stage. 40% of expert calls produce nothing useful. The real cost is $2,750+ per usable answer when you include analyst time. This framework ties every question to a decision before the call starts.
Core answer:
• Start from your decision (size up, hold, exit), not the topic
• Map hypotheses to measurable signals you need to check
• Ask for facts first, opinions only when they move risk
• Use neutral phrasing with concrete timeframes
• Flow from broad context to stress tests
• Link every question to a cell in your model
I have run primary research for investors for ten years. The best decisions come from analysts who write their questions before they talk to experts. The worst come from teams who confuse access with insight.
Bad questions cost more than bad experts.
Why Research Fails at the Question Stage
One-third of academic research time goes to finding the right question. Studies suggest roughly 3 in 10 published papers needed major rewording of their core question. In investment work, where you are paying $1,200 per call and burning analyst hours on logistics, bad questions blow up positions.
I have watched funds spend weeks on expert calls. Beautiful transcripts. Zero conviction. The experts were fine. No one defined what would change the position before the calls started.
Research breaks when you design questions for coverage instead of decisions.
Key insight: Coverage questions produce transcripts. Decision questions produce conviction.
How to Build Questions That Move Capital
Every question earns its place by moving a decision. Start from the investment memo. Work backward.
Step 1: Start from the decision, not the topic
Define your decision. What variable are you trying to move? Size up, hold, or exit? 20% IRR or pass?
Write 2 to 3 explicit hypotheses. Churn is structurally below 5%. Price increases will stick. A new competitor is taking 10 to 15% of mid-market share.
A good question is one where a different answer would change your sizing or direction. If it would not alter the memo, cut it.
Key insight: Questions that survive are questions that move capital.
Step 2: Break hypotheses into measurable signals
Map each hypothesis to concrete, observable signals.
Volume and mix: order trends, pipeline shape, cohort behavior, implementation timelines.
Price and margin: list vs. realized pricing, discounting behavior, elasticity, input costs.
Share and competition: win rates, head-to-head battles, switching patterns.
Execution risk: implementation failures, churn drivers, salesforce quality, product gaps.
Each signal gets one question that forces specificity. Numbers. Ranges. Recent examples. Behavior. Not vibes. "Walk me through a recent deal" beats "How is demand?" The first anchors reality.
Key insight: Measurable signals are facts. Everything else is noise.
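A minimal sketch of this hypothesis-to-signal-to-question mapping, in Python. The hypothesis and two of the questions come from the examples in this article; the "downgrade behavior" question and the structure itself are hypothetical, there to show the shape, not prescribe the content.

hypothesis = "Churn is structurally below 5%"

# Each measurable signal gets exactly one question that forces
# specificity: a number, a range, or a recent example.
signal_to_question = {
    "cohort renewal rates": "What percentage of customers in your patch renewed last cycle?",
    "churn drivers": "What were the top 2 to 3 reasons the last customer you lost switched?",
    "downgrade behavior": "In the last 12 months, how many accounts reduced spend, and by roughly how much?",
}

for signal, question in signal_to_question.items():
    print(f"{hypothesis} -> {signal} -> {question}")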
Step 3: Design for facts first, opinions second
The sequence is facts, then interpretation, then prediction. Always in that order.
Start with "show me" questions:
In the last 3 to 6 months, what happened to your average monthly spend with Vendor X?
How many serious competitors do you see in RFPs, and which ones?
What percentage of customers in your patch renewed last cycle?
Layer in "why" and "so what":
What were the top 2 to 3 reasons the last customer you lost switched?
What would make you pay more for this product?
Opinion questions only survive if they map to risk or edge. Key-man risk. Culture. Execution. Otherwise, cut them.
Key insight: Facts are verifiable. Opinions matter when they move risk.
Step 4: Make questions investment-grade
Good questions are neutral. "How has churn changed?" Not "Churn is low, right?"
Use concrete time frames. Last quarter. Last 12 months. Use specific cohorts. Your top 10 customers. Mid-market accounts. Mix quantitative and qualitative. Ranges, percentages, counts, examples, narratives.
Run three checks:
1. Would a strong answer change your position sizing or thesis?
2. Does an honest expert answer from experience, not guesses?
3. Is it free of embedded conclusions or bias?
Rewrite or cut anything that fails.
Key insight: Investment-grade questions pass all three checks.
Step 5: Structure your flow
Expert calls and qualitative work: Open broad to surface unknowns. "Walk me through how purchasing decisions for this product are made in your organization." Move into structured blocks tied to your memo. Demand. Pricing. Competition. Product. Risk. Close with stress tests and outliers. "What would need to happen for you to materially cut spend or switch?"
Surveys: Start with tight screeners. Only the right personas get through. Role, budget authority, product usage, region. Build a spine of quantitative questions for comparability. Likert scales, ranges, multiple-choice. Add targeted open-ends where nuance matters.
Key insight: Structure follows your memo, not the expert.
Step 6: Map every question to your model
Each question maps to a cell in your model.
This question feeds churn assumptions.
These three inform pricing power and gross margin.
This block maps to TAM/SAM penetration and growth runway.
A question with no clear destination in your model is filler. A question that tightens a range or changes a key assumption moves conviction.
Key insight: If the answer does not touch your model, delete the question.
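A minimal sketch of the question-to-model audit, assuming a simple list of (question, model cell) pairs. The cell names (churn_rate, gross_margin, tam_penetration) are hypothetical placeholders for cells in your own model.

# Every question declares the model assumption it feeds.
# A question with no destination is filler and gets cut.
question_to_cell = [
    ("What percentage of customers renewed last cycle?", "churn_rate"),
    ("How much discounting do you see on renewals?", "gross_margin"),
    ("How many serious competitors do you see in RFPs?", "tam_penetration"),
    ("What do you make of the new CEO's vision?", None),  # no destination: cut
]

kept = [(q, cell) for q, cell in question_to_cell if cell is not None]
cut = [q for q, cell in question_to_cell if cell is None]
print(f"Kept {len(kept)} questions, cut {len(cut)}.")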
How This Works in Practice
I worked on a SaaS acquisition. The PE buyer had solid headline metrics. Strong ARR growth. Low logo churn. Expanding ACVs. Their expert networks had delivered positive calls that endorsed the story.
Their question was simple. Are we overpaying if we lean into this NRR story?
We built a research plan around three levers. Churn. Expansion. Pricing power. Customer surveys across multiple cohorts. Interviews with current customers, lost customers, key partners. Channel and competitor checks with implementation partners and former sales leaders.
We fact-checked every datapoint for role and relevance. We tested responses for internal consistency and against external SaaS benchmarks.
The pattern expert networks missed: the reported logo churn figure was technically accurate but economically misleading.
Churn was back-loaded and concentrated. Overall annual logo churn matched the reported figure. Smaller and mid-market cohorts were churning or materially downgrading after 18 to 24 months at rates higher than management disclosed.
Net expansion was flattered by large accounts. A handful of enterprise customers with significant upsell masked stagnant or shrinking spend in the long tail. True NRR adjusted for customer size and tenure was 5 to 10 points lower than headline.
Pricing power was weaker than pitched. Customers reported heavy discounting on renewal and aggressive offers from newer SaaS competitors.
Traditional expert calls missed this. They skewed toward friendly references and ex-insiders. They produced anecdotes that sounded good but never became cohort analysis.
We translated findings into the model. Reduced forward ARR growth assumptions by several points. Reset sustainable net retention rate down 5 to 10 percentage points. The buyer moved from the higher end of their target ARR multiple band to the middle.
15% reduction in valuation.
The difference: instead of optimistic calls confirming the story, the buyer got a grounded view of customer behavior and repriced risk.
Key insight: Good questions turned anecdotes into cohort analysis. Moved valuation 15%.
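One way the repricing arithmetic could compose, with hypothetical numbers chosen to reproduce the 15% move. For simplicity the whole adjustment is folded into the ARR multiple; in practice the lower growth and NRR assumptions also shrink the forward ARR base.

forward_arr = 50_000_000   # hypothetical forward ARR estimate
multiple_top = 8.0         # top of the buyer's target multiple band (hypothetical)
multiple_mid = 6.8         # middle of the band, after repricing risk (hypothetical)

before = forward_arr * multiple_top
after = forward_arr * multiple_mid
print(f"Valuation moves from ${before:,.0f} to ${after:,.0f}, "
      f"a {1 - after / before:.0%} reduction")  # 15%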
What Bad Questions Cost
Funds see the vendor invoice. They miss the hidden tax.
Typical analyst all-in cost: $200k to $400k per year. Salary, bonus, benefits, seat, data, overhead. At 2,000 working hours per year, that is $100 to $200 per hour fully loaded.
A simple expert network project with 5 calls. Analyst spends 4.5 hours per call. Defining the brief. Scheduling. Sitting on the call. Writing notes. Extracting key points.
5 calls equals 22.5 hours. At $100 to $200 per hour, analyst time on one $6k project is $2,250 to $4,500.
Add vendor fees of $6,000. Total cost: $8,250 to $10,500.
40% of calls are useless or off-target. Only 3 of 5 calls materially move the thesis.
Cost per useful call: $2,750 to $3,500.
Those 22.5 hours could have gone to deeper modeling. Wider watchlist. Better work with PMs and risk.
Funds say "It's $1,200 a call." The truth is closer to $2,750 to $3,500 per useful insight when you include team time and failure rate.
Key insight: Bad questions double the real cost of research.
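The arithmetic above, parameterized so you can plug in your own rates and hit rate. The figures are the ones from this section.

calls = 5
hours_per_call = 4.5   # brief, scheduling, the call itself, notes, extraction
vendor_fee = 6_000     # expert network invoice for the project
useful_calls = 3       # the 60% of calls that materially move the thesis

for rate in (100, 200):  # fully loaded analyst cost, $/hour
    total = calls * hours_per_call * rate + vendor_fee
    print(f"At ${rate}/hr: total ${total:,.0f}, "
          f"per useful call ${total / useful_calls:,.0f}")
# At $100/hr: total $8,250, per useful call $2,750
# At $200/hr: total $10,500, per useful call $3,500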
Where AI Stops and Research Starts
AI processes information. Primary research creates it.
Three things AI does not replicate:
Getting people to say what they think. High-stakes experts and operators do not talk the way documents read. Getting past PR-safe answers requires reading the room, adjusting tone, pushing on contradictions, and knowing when silence does the work.
Interpreting messy reality. Operators disagree. They misremember. They contradict each other. Sometimes they lie. Turning that into an investment view means judging who to weight, which anecdote is an outlier, and when a contradiction is the signal.
Deciding what is investment-grade. The hard part of primary research is not asking or transcribing. The hard part is deciding, with a straight face, "This is solid enough to change a position."
AI will eat document review, pattern recognition, and mechanical analysis of existing data. It will not replace the human work of persuading the right people to share what they know, interrogating them in real time, and judging what is true, what matters, and what should move capital.
Key insight: AI processes. Humans persuade, interrogate, and judge.
Common Questions About Question Design
How many questions should go into an expert interview guide?
15 to 25 questions for a 60-minute call. Fewer if they are broad and open-ended. More if they are tight and factual. The guide is not a script. Good interviews follow threads. The guide keeps you tied to the memo when experts go off track.
When should I use surveys instead of interviews?
Surveys work when you need statistical confidence across many respondents. Market sizing. Penetration rates. Pricing elasticity. Preference rankings. Interviews work when you need depth, context, or the ability to follow up in real time. Use both when the decision requires breadth and depth.
How do I avoid leading questions in surveys?
Test your phrasing. Replace "How satisfied are you?" with "How would you rate your experience?" Replace "Do you agree the product is improving?" with "Has the product improved, stayed the same, or declined over the past 12 months?" Randomize answer order. Run a pilot with 5 to 10 responses. Watch for clustering or confusion.
What if the expert goes off-script?
Follow the thread if it is tied to your hypotheses. Cut it short if it is not. Polite redirect: "That is interesting. Let me come back to something you said earlier about pricing." Your job is not to be polite. Your job is to move conviction.
How do I know when I have enough primary research?
Stop when additional calls or surveys stop changing your view. If the last 3 interviews confirmed what you already learned, you are done. If they introduced new contradictions or shifted a key variable, keep going. Research is not coverage. Research is conviction.
How do I handle experts who give vague or evasive answers?
Anchor them with specifics. "You mentioned pricing pressure. In the last quarter, by how much did your average deal size change?" Follow up once. If they are still vague, note it and move on. Vagueness is data. It tells you they do not have firsthand experience or are uncomfortable sharing.
Should I share my thesis with the expert before the call?
No. Your thesis biases their answers. Frame the call as learning, not confirming. "We are trying to understand how pricing has evolved in this market" is better than "We think pricing is under pressure. Do you agree?"
How do I fact-check what experts tell me?
Cross-reference across multiple experts. Compare to public disclosures, filings, and benchmarks. Test internal consistency. If an expert says churn is low but also says half their customers renegotiate every year, one of those is wrong. Verify identity and role. An expert who claims to run procurement but does not know basic contract terms is not credible.
Key Takeaways
• Research fails at the question stage. 40% of expert calls produce nothing because no one defined what would change the position.
• Start from your decision, not the topic. Write 2 to 3 explicit hypotheses. Map each to measurable signals. Build questions that would change sizing or direction if answered differently.
• Ask for facts first, opinions second. "Show me" questions anchor reality. Opinion questions only survive if they move risk.
• Investment-grade questions pass three tests: Would a strong answer change the thesis? Does an expert answer from experience? Is it free of bias?
• Map every question to your model. If the answer does not tighten a range or change an assumption, delete the question.
• Bad questions double your real cost. The true cost per useful insight is $2,750 to $3,500 when you include analyst time and the 40% failure rate on calls.
• AI processes information. Primary research creates it through persuasion, real-time interrogation, and judgment about what moves capital.
What framework are you using to decide which questions deserve analyst time?

