The AI Hype in Primary Research Is Real, But So Are the Limits
A practical guide for PE deal teams, hedge fund analysts, and corporate strategy professionals on the limitations of AI for primary research
Every platform vendor in investment research is racing to slap "AI-powered" on their product. Generative AI is being positioned as revolutionary for how investment teams conduct research. AlphaSense is marketing "AI-Led Expert Calls." Anthropic just launched new customisation offerings targeting the finance industry, with plugins tailored for financial analysis, equity research, private equity and wealth management. And artificial intelligence has moved from competitive edge to essential requirement faster than most expected. Industry surveys suggest nearly 95% of VC and PE firms now use AI in investment decisions and deal evaluation.
The problem is that almost all of this content is written by vendors selling AI tools. None of it comes from people who actually run hundreds of primary research projects a year and can tell you, with no agenda, where AI helps and where it doesn't.
This guide is that honest assessment. We run expert interviews, B2B surveys, and channel checks for PE deal teams, hedge fund analysts, and corporate strategy groups every day. Here's what we've learned about what to automate and what to protect.
What AI Actually Does Well in Primary Research
Credit where it's due. AI is changing several parts of the research workflow, and teams that aren't using it for these tasks are falling behind.
Transcript search and pattern extraction. If you've completed 15 expert calls on a deal and need to find every mention of a specific competitor, pricing trend, or churn driver, AI is far faster than manual review. Platforms like AlphaSense aggregate broker research, expert call transcripts, SEC filings, and news, and their summarisation and smart search tools reduce the odds of missing a critical data point during due diligence. This used to take an associate a full day. Now it takes minutes.
Earnings call and filing summarisation. Generative summaries that produce analyst-style briefings with document citations, spanning equities, fixed income, macro research, and expert calls, are now standard. If your team is still reading every earnings transcript cover to cover, you're burning hours that could go toward actual analysis.
Secondary research and landscape mapping. Getting smart on a new sector, building a competitive landscape, or pulling together a market map before you start primary research? AI tools are excellent for this. They cut the time it takes to ramp up on a new industry or target company. Use them to build your initial hypothesis and figure out what questions you can't answer from public sources. That's where primary research begins.
Document analysis in data rooms. AI can now process thousands of pages in minutes, pulling out key data points and flagging details that take analysts days to work through. From a company's confidential information memorandum, AI platforms can quickly identify financial metrics, growth rates, and customer concentration, while cross-referencing data across multiple documents to catch inconsistencies or red flags that a rushed human review would miss.
Scheduling and logistics. Expert matching, calendar coordination, compliance screening. These are operational tasks that benefit enormously from automation. No argument here.
Where AI Falls Short and Human-Led Research Can't Be Replaced
This is the part the AI vendors don't write. And it matters most for anyone making investment decisions based on primary research.
Designing the right questions. The quality of an expert call is set before the call even starts, by the questions you ask. AI can generate a generic discussion guide in seconds. But generic questions get you generic answers.
The gap between a useful expert call and a wasted hour comes down to question design that reflects your specific investment thesis, the target's business model, and the particular blind spots in your secondary research. That takes someone who understands both the deal context and the industry, and who can push experts past surface-level answers into decision-grade insight.
AI doesn't know what you don't know. A skilled researcher does.
Selecting and vetting the right experts. AI can search a database and match keywords on a LinkedIn profile. But choosing the right expert for a specific research question, and more importantly, screening out experts who have compliance issues, outdated knowledge, or an axe to grind, takes human judgment.
The most useful expert for your project may not be the most obvious one. They might be a mid-level operator at a competitor rather than a C-suite executive. They might be someone who left the industry 18 months ago and can speak freely. These are judgment calls that a matching algorithm consistently gets wrong.
Real-time probing during expert interviews. This is the biggest hole in the "AI-led expert call" pitch. An expert says something unexpected. A throwaway comment about a customer leaving. A hesitation when asked about pricing power. A contradiction with what you heard from a previous expert. A skilled interviewer catches that in the moment and follows the thread.
AI can transcribe the call. It can summarise the call afterward. But it can't run the kind of dynamic, adaptive questioning that turns a decent expert call into a breakthrough insight. The most valuable moments in primary research are unscripted, and that is exactly what AI cannot replicate.
Triangulating conflicting data across multiple calls. You run 12 expert calls on a target. Eight experts say customer retention is strong. Two say it's declining. Two give ambiguous answers. What do you do?
AI can flag the discrepancy. But figuring out why experts disagree, understanding which ones had the most relevant vantage point, which were biased by their role, and which data points carry the most weight for your investment thesis, is a judgment exercise that no model can reliably handle. AI brings real advantages in speed and precision, but human oversight is still essential here.
Synthesising calls into a clear investment answer. This is the step that separates research from insight. As one viral LinkedIn post recently put it, most expert calls are "messy" with smart people and great insights, but teams end up with disorganised notes and no structured output for an IC memo.
AI can produce a summary of each individual call. What it can't do is pull together 10 to 15 calls into a single, opinionated answer to the question your investment committee actually needs answered: Is this company's competitive moat real? Is the revenue quality as strong as the CIM suggests? Will this market grow at the rate management claims?
Synthesis means weighting evidence, resolving contradictions, identifying what's missing, and framing the answer in terms of investment risk and return. This is the core intellectual work of primary research, and it's the part that should never be handed to a language model.
A Practical Framework: Automate the Workflow, Protect the Judgment
Here's a simple way to think about it. Map every step of your primary research process against two axes: volume of repetitive work and degree of judgment required.
| Task | Automate with AI? | Why / Why Not |
|---|---|---|
| Secondary research & landscape mapping | Yes | High volume, pattern-matching task. AI excels here. |
| Transcript search & keyword extraction | Yes | Saves hours of manual review across large transcript libraries. |
| Earnings call & filing summarisation | Yes | Routine summarisation of structured public documents. |
| Scheduling & compliance logistics | Yes | Operational task with clear rules. Automate fully. |
| Discussion guide & question design | No | Requires deal-specific context and thesis understanding. |
| Expert selection & vetting | No | Judgment-heavy: relevance, bias, and compliance risks. |
| Conducting the expert interview | No | Real-time probing and adaptive questioning are irreplaceable. |
| Cross-call triangulation | No | Resolving contradictions requires human reasoning and context. |
| Synthesis into investment conclusions | No | The core intellectual output. Must be human-driven. |
The pattern is clear: automate everything upstream and downstream of the expert interaction itself. Protect the human judgment at the core.
What This Means for Your Team
The AI landscape in investment management is maturing beyond a "one size fits all" approach. While general-purpose AI tools remain valuable for everyday tasks, investment firms are finding that purpose-built solutions deliver more strategic value for mission-critical workflows. The same applies to primary research.
If your team is running 10+ expert calls per deal and still manually synthesising notes into IC memos, AI transcript tools will help at the margins, but they won't fix the real bottleneck. Due diligence is still the most labour-intensive phase of investment. By common estimates, analysts historically spent 90% of their time on data processing and only 10% on strategic judgment. AI's promise is to flip that ratio. But for primary research, flipping that ratio takes more than better software. It takes someone who knows how to design, execute, and synthesise a research programme end to end.
There are three models investment teams can choose from today:
DIY with AI tools. You use expert networks (GLG, AlphaSights, Third Bridge) for sourcing and conduct calls yourself, layering in AI tools for transcription and search. You still own the full workload of question design, expert vetting, interviewing, and synthesis.
AI-augmented platforms. You use platforms like AlphaSense or similar that offer searchable transcript libraries and AI summaries. This works well for monitoring and broad research, but doesn't replace bespoke primary research on a specific deal target.
Done-for-you primary research. You brief a specialist team on what you need to know, and they handle everything: expert recruitment, question design, interviews, and synthesis. They deliver finished, decision-ready research. AI is used where it helps (transcript analysis, secondary research), but the judgment-intensive work is done by experienced researchers.
The right model depends on your team's capacity, deal volume, and where you want your analysts spending their time. But one thing doesn't change: the quality of your investment decisions is only as good as the primary research behind them. No AI tool changes that.
The Short Version
AI is a real accelerant for parts of the primary research workflow. Use it aggressively for search, summarisation, and logistics. But don't confuse faster transcripts with better research.
The tasks that actually determine whether your primary research gives you a clear, defensible investment answer (question design, expert selection, real-time interviewing, cross-call triangulation, and synthesis) are judgment-intensive, context-dependent, and human. Automate around them. Don't try to automate through them.
The teams that get this balance right will move faster and make better decisions. The teams that over-index on AI tooling will move faster and wonder why their research still doesn't give them conviction.