How Long/Short Equity Analysts Generate Alpha Through Primary Research
You need an edge before the market prices it in.
That edge comes from proprietary intelligence that identifies fundamental inflections ahead of quarterly filings. Primary research gives you that window, but only if you run it correctly.
Most analysts burn weeks each quarter on logistics while paying research prices for access. The process breaks down at vendor selection, question design, and verification. This guide shows you how to fix it.
Start With Coverage Review and Gap Analysis
Primary research works when you know exactly what question you need answered.
Run a coverage review across your portfolio. Identify where your conviction is weakest and where the next inflection will come from. Focus on four areas: financial quality, operational efficiency, competitive dynamics, and management effectiveness.
Financial quality gaps show up when you cannot reconcile reported margins with channel feedback, or when working capital movements do not match the growth narrative. If the CFO says inventory is clean but your supply chain contacts suggest otherwise, that is a gap worth investigating.
Operational efficiency questions emerge around unit economics, capacity utilisation, and cost structure. You need to know if the company can scale without margin compression, and whether operational improvements are structural or temporary.
Competitive dynamics require ground truth. Market share claims in earnings calls often conflict with what distributors and customers actually see. You need independent verification of win rates, pricing power, and customer switching behaviour.
Management effectiveness is harder to quantify but critical for position sizing. You want to know if the team can execute, if they have credibility with customers and suppliers, and whether their guidance has historically been reliable.
Map these gaps against upcoming catalysts. Scheduled catalysts include earnings, product launches, regulatory decisions, and contract renewals. Potential catalysts include management changes, competitive moves, and macro shifts that could accelerate or derail the thesis.
Time your primary research to land 2-3 weeks before the catalyst. That gives you enough lead time to adjust positioning without sitting on stale data.
Design Questions That Move Conviction
Vague questions produce vague answers.
You want quantifiable metrics and directional sentiment, not generic opinions. Instead of asking "How is the company performing?" ask "What percentage of your Q3 orders were fulfilled on time compared to Q2?" Instead of "What do you think of the product?" ask "How many of your clients have renewed versus switched to competitors in the past six months?"
Frame questions around observable behaviour and measurable outcomes. Experts can tell you what they have seen and done. They cannot reliably predict the future or speak for people they do not know.
Compliance sits at the centre of question design. You cannot ask for material non-public information. You cannot solicit forward guidance, unreleased financials, or confidential strategic plans. The line is clear: you can ask about market trends, competitive positioning, and operational patterns. You cannot ask about specific numbers that have not been disclosed.
This is where AI becomes useful. Use AI to review your questions before they go to experts. A well-prompted model can flag MNPI risk, suggest reframes that preserve the intent while removing compliance exposure, and help you structure questions that produce actionable answers.
Chain-of-thought prompting works well here. Ask the AI to break down why a question might solicit MNPI, what alternative framing would be safer, and what follow-up questions would triangulate the same insight without crossing the line.
Persona-based prompting helps too. Have the AI assume the role of a compliance officer reviewing your questions, then the role of an expert deciding whether to answer. This dual perspective catches issues you might miss.
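The two prompting approaches above can be combined into a single pre-screen step. A minimal sketch follows; the prompt wording, the draft questions, and the function name are illustrative assumptions, not a vetted compliance tool or a specific model API:

```python
# Sketch of a chain-of-thought compliance pre-screen for expert-call questions.
# The prompt text and examples are illustrative; pair this with whatever
# model interface your firm has approved.

COMPLIANCE_PROMPT = """You are a compliance officer at a long/short equity fund.
For each draft question below, reason step by step:
1. Could the answer contain material non-public information (MNPI)?
2. If so, which phrase solicits undisclosed numbers or forward guidance?
3. Suggest a reframe that keeps the intent but asks only about observable,
   historical behaviour.
4. List two follow-up questions that triangulate the same insight safely.

Draft questions:
{questions}"""

def build_review_prompt(questions):
    """Format draft questions into the chain-of-thought review prompt."""
    numbered = "\n".join(f"- {q}" for q in questions)
    return COMPLIANCE_PROMPT.format(questions=numbered)

draft = [
    "What will Q4 revenue come in at?",               # likely flagged: solicits guidance
    "What share of your Q3 orders shipped on time?",  # observable, historical
]
print(build_review_prompt(draft))
```

A second pass with the persona swapped to "an expert deciding whether to answer" catches questions that are compliant but unlikely to get a candid response.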
Compliance firewalls are non-negotiable. If you are on expert calls, you carry the MNPI risk. If the expert says something they should not, you now have a problem. The better model is to never be on the call. Let a third party conduct the interview, verify the responses, and deliver finished intelligence that has already been scrubbed for compliance issues.
Select Vendors Based on Total Cost, Not Invoice Price
Expert networks charge £1,200 per call. That is just the start.
Add the analyst time: scheduling, rescheduling, sitting through calls, taking notes, chasing transcripts. That is 14+ hours per month. Add the compliance risk: you are on the call, so any MNPI exposure sits with you. Add the quality risk: 40% of expert network calls deliver no useful insight because the expert is off-target, vague, or recycled from a database your competitors also use.
The real cost per useful insight is closer to £2,000 when you account for time, risk, and misses.
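The miss rate alone gets you most of the way to that figure. A back-of-envelope sketch, where the invoice price and miss rate come from the numbers above and the per-call time cost is an illustrative assumption:

```python
# Effective cost per useful insight from expert network calls.
# CALL_PRICE and HIT_RATE follow the figures in the text; the analyst
# time cost passed in below is an illustrative assumption.

CALL_PRICE = 1200      # GBP invoice price per call
HIT_RATE = 0.60        # ~40% of calls deliver nothing useful

def cost_per_useful_insight(call_price, hit_rate, time_cost_per_call=0):
    """Spread the all-in cost of every call over the calls that actually land."""
    return (call_price + time_cost_per_call) / hit_rate

print(cost_per_useful_insight(CALL_PRICE, HIT_RATE))        # fees alone → 2000.0
print(cost_per_useful_insight(CALL_PRICE, HIT_RATE, 300))   # with assumed time cost → 2500.0
```

Paying £1,200 on the invoice but £2,000+ per insight that actually moves conviction is the gap that invoice-price comparisons hide.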
Survey platforms have similar issues. You pay for panel access, but you still design the survey, field it, clean the data, and interpret the results. Fraud is common. Response quality is inconsistent. You end up doing the research work yourself while paying for the privilege.
The Woozle model solves this. You submit a 10-minute brief. Woozle recruits fresh experts, conducts structured interviews, verifies every claim, and delivers finished intelligence in a dashboard. You never touch logistics. You never sit on a call. You never carry MNPI risk. The output is investment-grade and ready to drop into your memo.
This is not about replacing expert networks entirely. It is about using the right tool for the right job. For high-volume channel checks where compliance and speed matter, Woozle is the efficient choice. For one-off deep dives where you need to probe a specific expert's experience, a traditional network might still make sense.
The key is to stop defaulting to the same vendor for every project. Evaluate based on total cost, turnaround time, compliance risk, and output quality. The cheapest invoice is not always the best deal.
Verify, Triangulate, and Act
Raw intelligence is not alpha. Verified intelligence is.
Cross-reference expert claims with alternative data. If a supply chain contact says lead times are shortening, check shipping data, satellite imagery of warehouses, and job postings for logistics roles. If a customer says they are switching vendors, look at app download trends, web traffic, and credit card transaction data.
Triangulation removes noise. One expert might be an outlier. Three experts saying the same thing is a pattern. Five experts plus alternative data is a conviction-grade signal.
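Those thresholds can be written down as a rough heuristic. The cutoffs below follow the rule of thumb in the text; treat them as a starting point to calibrate against your own hit rates, not a fixed rule:

```python
def signal_tier(expert_count, alt_data_confirms):
    """Map corroboration to a rough conviction tier.

    Thresholds are the rule of thumb from the text: one expert is an
    outlier, three agreeing is a pattern, five plus confirming
    alternative data is conviction-grade.
    """
    if expert_count >= 5 and alt_data_confirms:
        return "conviction-grade"
    if expert_count >= 3:
        return "pattern"
    return "outlier"

print(signal_tier(1, False))  # → outlier
print(signal_tier(3, False))  # → pattern
print(signal_tier(5, True))   # → conviction-grade
```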
Watch for subtle red flags. If an expert is too polished, they might be a professional expert who has done 50 calls on the same topic. If their answers align perfectly with public guidance, they might not have real ground truth. If they hedge every statement, they might not have direct exposure to the issue you are investigating.
The best experts give specific examples, cite recent timeframes, and acknowledge what they do not know. They speak from experience, not opinion.
Once you have verified intelligence, translate it into position sizing. If the data confirms your thesis and the market has not priced it in, that is a sizing opportunity. If the data contradicts your thesis, that is a risk management signal. If the data is mixed, that is a reason to stay small or wait for more clarity.
Alpha comes from acting on verified intelligence before consensus catches up. The window is usually 2-3 weeks. After that, the information starts leaking into sell-side notes, investor calls, and eventually the stock price.
Run Post-Trade Reviews to Refine Your Process
Most analysts skip this step. That is a mistake.
After the trade plays out, go back and score your intelligence sources. Which experts were accurate? Which were wrong? Which alternative data sets confirmed or contradicted the primary research? Which questions produced actionable answers versus noise?
Track this over time. You will start to see patterns. Certain types of experts are more reliable than others. Certain vendors deliver better quality. Certain question structures produce clearer answers.
Use this feedback loop to improve your process. Refine your question design. Adjust your AI prompting. Change your vendor mix. Drop sources that consistently miss. Double down on sources that consistently deliver.
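The tracking itself does not need to be elaborate. A minimal sketch of a per-source scorecard, with the source names purely illustrative:

```python
from collections import defaultdict

class SourceScorecard:
    """Track hit rates per intelligence source across trades (illustrative sketch)."""

    def __init__(self):
        self.record = defaultdict(lambda: {"right": 0, "wrong": 0})

    def score(self, source, was_accurate):
        """Log one post-trade verdict for a source."""
        key = "right" if was_accurate else "wrong"
        self.record[source][key] += 1

    def hit_rate(self, source):
        """Fraction of scored calls this source got right; None if unscored."""
        r = self.record[source]
        total = r["right"] + r["wrong"]
        return r["right"] / total if total else None

board = SourceScorecard()
board.score("supply-chain expert A", True)
board.score("supply-chain expert A", True)
board.score("panel survey vendor", False)
print(board.hit_rate("supply-chain expert A"))  # → 1.0
```

A few quarters of this is enough to see which sources earn a place in the vendor mix and which consistently miss.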
This is how you build a repeatable edge. Primary research is not a one-off project. It is a systematic process that compounds over time when you learn from each iteration.
The Real Edge Is Process, Not Access
Everyone has access to expert networks. Everyone can run surveys. Everyone can buy alternative data.
The edge is in how you use these tools. It is in asking the right questions, verifying the answers, and acting before the market does. It is in protecting analyst time so analysts focus on investment decisions instead of logistics. It is in removing compliance risk so you can move fast without exposure.
Most primary research fails because the process is broken. Analysts spend weeks on admin, pay for access instead of answers, and carry risk they should not have to carry. The vendors optimise for volume, not accuracy. The incentives are misaligned.
Fix the process and you fix the outcome. Start with clear questions tied to specific decisions. Use AI to remove compliance risk. Select vendors based on total cost and output quality. Verify everything. Act fast. Review and refine.
That is how you generate alpha through primary research. Not by doing more of the same, but by doing it correctly from the start.
What part of your current primary research process is costing you the most time or creating the most risk?