How to Assess AI Disintermediation Risk in Private Equity Due Diligence

A practical guide for PE deal teams on evaluating whether a target's business model will be disrupted or strengthened by AI — including frameworks, red flags, and the primary research methods that separate real moats from marketing.

Technology due diligence has pulled decisively ahead of every other domain in M&A. According to the 2026 SRS Acquiom/Mergermarket study of 150 senior executives at U.S. investment banks, 47% of respondents say technology diligence has been their main priority over the past 12 months — and 51% now call it the single most burdensome element of the entire review.

The reason is straightforward: AI is repricing entire sectors in real time.

Software stocks have lost more than 20% of their value — over $1 trillion in market capitalisation — in what's been dubbed the "SaaSpocalypse." The IGV ETF alone lost roughly 30% from its September 2025 peak in just six trading sessions, erasing more than $830 billion in market cap. Autonomous AI agents are rapidly making the traditional per-seat licensing model look obsolete, and investors are scrambling to separate the companies that will be strengthened by AI from those that will be eaten by it.

Private equity firms are feeling the pressure acutely. The median holding period for a US PE-backed company hit 3.4 years in 2024 — the longest in nearly a decade — and LPs are demanding capital returns after a prolonged deal slowdown. Every new platform investment now carries the implicit question: Will this business still be relevant at exit?

Nearly three-quarters (73%) of deal professionals expect the due diligence process to become more complex over the next 12–24 months. AI disintermediation risk is the primary driver of that complexity.

This guide is a practical playbook for PE deal teams running live processes. It covers how to categorise AI risk, what questions to ask, where data rooms fall short, and how primary research fills the gap that no dataset or AI tool can.


What Is AI Disintermediation Risk? (And What It Isn't)

AI disintermediation risk is the structured evaluation of whether a target company's business model, revenue streams, competitive moat, and market position will be eroded, disrupted, or rendered obsolete by artificial intelligence — or conversely, whether AI will strengthen and expand its value proposition.

It sits at the intersection of traditional commercial due diligence and technology diligence, and it is rapidly becoming a standalone workstream. At its core, it requires answering two questions:

  1. Where can AI unlock meaningful value creation within the target's existing operations?
  2. How exposed is the business to AI-based disintermediation — whether from incumbents adopting automation or from new AI-native players?

The Revolution / Transformation / Augmentation Spectrum

Bain & Company's framework, developed from the analysis of more than 300 companies, classifies targets into three tiers:

  • Revolution: AI can put the fundamental business model at risk. Survival means reinventing products and go-to-market entirely. Fewer than 10% of companies fall here.
  • Transformation: The business model will need substantial changes. AI can create new revenue streams and efficiencies, but requires significant process overhaul. Most software companies reside here.
  • Augmentation: AI enhances existing operations without posing an existential threat. Most industrial companies sit here, with many healthcare businesses straddling augmentation and transformation.

This classification is your starting point. Every deal should begin with a triage that places the target on this spectrum — because the questions you ask, the risks you model, and the value creation plan you build all depend on where the target sits.

Common Misconceptions That Lead to Bad Deals

"AI risk only applies to software companies." It doesn't. Professional services, healthcare, education, and financial services all face AI-driven margin compression. A BPO target whose margins depend on labour arbitrage is in the revolution category whether or not it has a single line of code.

"If a company uses AI, it's safe." MIT's Project NANDA research found that 95% of generative AI investments have produced no measurable returns. Only 6% of companies achieve meaningful EBIT impact from AI. Using AI is not the same as being defensible against AI.

"AI disintermediation is binary — you're disrupted or you're not." The spectrum above exists for a reason. Most targets are somewhere in the middle, and the diligence question is about degree and timeline, not a yes/no verdict.

"AI-washing is easy to spot." In 2024, the SEC charged two firms with misleading investors about their use of AI. In 2025, Builder.ai was reported to have faked business deals to inflate its value. Over a third of pitch decks now reference machine learning or predictive intelligence. Distinguishing genuine innovation from empty claims requires more than reading a CIM.


The Five Questions Every Deal Team Must Answer

Adapted from Bain's framework for PE investors, these five questions should structure every AI disintermediation assessment. Think of them as the minimum viable scope for an AI diligence workstream.

1. Will AI Upend the Target's Business Model?

This is the existential question. Can AI enable customers to do what the target does for themselves? Can it enable a new entrant to deliver the same outcome at a fraction of the cost? If the target is a middleware layer between data and a decision, and AI can collapse that layer, the business model is at risk.

What to look for: Targets whose core value is aggregation, translation, or intermediation between inputs and outputs. Horizontal SaaS tools that sit between a user and a task that AI can perform directly.

2. Will Volumes or Pricing Be Compressed?

This is the seat compression question. If AI agents can perform tasks that previously required teams of employees, companies may need far fewer software licences. A target that prices per seat and sells into departments that are shrinking due to automation faces direct revenue risk.

What to model: Scenarios where seat counts decline 20–50% due to AI agent adoption. Assess whether the target can transition to outcome-based or consumption-based pricing without destroying net revenue retention.
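The seat-compression maths can be sketched in a few lines. This is a minimal illustrative model only; the seat counts, price point, and decline rates below are hypothetical placeholders, not figures from any deal.

```python
# Illustrative seat-compression model: revenue impact if AI agents
# shrink licensed seat counts over a hold period. All inputs are
# hypothetical placeholders, not data from any real target.

def seat_compression_revenue(seats: int, price_per_seat: float,
                             annual_decline: float, years: int) -> float:
    """ARR after `years` of compounding seat-count decline."""
    return seats * ((1 - annual_decline) ** years) * price_per_seat

base = seat_compression_revenue(10_000, 1_200.0, 0.00, 5)  # no decline
bear = seat_compression_revenue(10_000, 1_200.0, 0.10, 5)  # 10%/yr decline

print(f"Base-case ARR: ${base:,.0f}")
print(f"Bear-case ARR: ${bear:,.0f} ({bear / base:.0%} of base)")
```

Even a 10% annual seat decline compounds to roughly 40% of ARR lost over a five-year hold, which is why the downside case deserves explicit modelling rather than a haircut assumption.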

3. Is the Competitive Moat Real or Illusory?

This is where most diligence processes fall short. The CIM says the target has "proprietary AI." The data room contains technical documentation. But none of that tells you whether the moat is real. The critical sub-questions:

  • Is the model truly proprietary, or is it a wrapper on a third-party API?
  • Is the training data differentiated and defensible?
  • Do customers actually value the AI features, or are they paying for the underlying workflow?
  • Are competitors building something better, faster?

How to answer: Primary research. Customer interviews, competitor intelligence, and technologist assessments. There is no substitute.

4. Is the Target AI-Ready?

Even if AI is an opportunity, can the target actually execute? Common pitfalls include fragmented data architecture, legacy technical debt, lack of AI talent, and change resistance. Forty-one percent of SaaS CEOs cite lack of technical talent as their top barrier to AI adoption.

What to evaluate: Data quality and architecture, engineering team capabilities, technical debt burden, and whether leadership has a credible AI roadmap — not just a slide deck.

5. Where Does AI Create Value Post-Close?

The best acquirers don't just assess risk — they use diligence to identify where AI can drive cost efficiency, improve throughput, and unlock new revenue streams. Any AI bet as a portfolio value driver must be grounded in validated use cases with a measurable line of sight to EBITDA in six months or less.

What to quantify: Specific operational improvements (e.g., AI-assisted customer support reducing cost-to-serve by 30%), new product opportunities (e.g., AI-powered analytics tier), and margin expansion from internal AI adoption.


How to Evaluate an AI Moat: Four Dimensions

Not all moats are created equal. PwC identifies four valuation drivers that determine whether a target's AI positioning is truly defensible — and the market is pricing them accordingly.

1. Proprietary Data Assets

The strongest businesses generate proprietary context that makes AI better over time in ways competitors can't easily replicate — curated knowledge graphs, validated playbooks, and customer-specific configurations that cannot be scraped or synthetically reproduced.

Companies with proprietary models and data typically command valuation multiples of 9–12× ARR, compared to 3–4× ARR for those relying on third-party APIs. That's a 3× valuation difference based on data defensibility alone.

Diligence question: Does the target generate unique data through its product usage that improves its AI capabilities in a way competitors cannot replicate?

2. Workflow Embeddedness

If a product owns the system of record and is tied to a financial or regulatory outcome, AI agents tend to layer on top rather than replace. The deeper a product is embedded in a customer's daily workflow, the higher the switching cost — and the lower the disintermediation risk.

Diligence question: Is this a "system of record" or a "nice to have"? Would ripping it out require the customer to rebuild critical processes?

3. Regulatory and Compliance Lock-In

Technology serving industries characterised by high regulatory scrutiny and operational complexity — healthcare, financial services, government — is more insulated because compliance safeguards aren't easily replicated by AI. Regulatory moats create structural stickiness that transcends technology cycles.

Diligence question: Does the target's value proposition depend on compliance logic, audit trails, or regulatory certifications that AI-native entrants would need years to obtain?

4. Embedded Fintech and Transaction Streams

Embedded finance is projected to exceed $7 trillion in transaction volume by 2026. For vertical SaaS companies, payment processing fees can lift revenue per customer 2–3× compared to pure SaaS. Revenue tied to transactions rather than headcount is structurally more resilient to seat compression.

Diligence question: Does the target earn revenue from transactions, payments, or outcomes — or solely from per-seat licences tied to employee headcount?


Where VDRs and Datasets Fall Short — And What to Do Instead

The standard diligence toolkit — VDRs, financial models, CIM reviews, management presentations — was built for a world where the key questions were quantitative. Revenue growth, churn rates, margin profiles, customer concentration. These are all answerable from documents.

AI disintermediation risk is different. The hardest, highest-value questions are qualitative:

  • Is the moat real? No data room contains a document titled "Honest Assessment of Whether Our AI Is Actually Proprietary."
  • Will customers switch? NRR tells you what happened last quarter. It doesn't tell you what happens when an AI-native competitor launches at half the price next quarter.
  • Are competitors building something better? Competitive intelligence doesn't live in a VDR. It lives in the heads of former employees, channel partners, and industry technologists.
  • Is the AI real or marketing? With 95% of AI pilots failing to produce measurable returns, "we have AI" in a CIM is not evidence. It's a claim.

These questions can only be answered through structured primary research: direct conversations with the people who know.

The Three Primary Research Workstreams for AI Diligence

Customer interviews: Talk to current and former customers. Do they actually use the AI features? Would they switch to an AI-native alternative? Is the switching cost real or theoretical? What would make them leave?

Competitor and adjacent-player intelligence: What are competitors building? How quickly? What's their go-to-market? Are AI-native startups targeting the same customer base? How do channel partners view the competitive landscape?

Technologist and domain expert assessments: Is the target's tech stack genuinely differentiated? Is the "proprietary AI" actually just API calls to a third-party model? How significant is the technical debt? Could a well-funded competitor replicate the product in 12 months?

This is fundamentally primary research work that no VDR, no dataset, and no AI tool can replace. The challenge for deal teams is execution: in a 4–8 week diligence window, scheduling 20–30 expert calls, writing discussion guides, conducting interviews, and synthesising findings is a massive time sink.

This is exactly where a done-for-you primary research partner changes the equation. Instead of your deal team spending weeks coordinating calls and reading transcripts, you brief the research need and receive finished, actionable insights — customer sentiment analysis, competitive threat assessment, technology validation — ready for your investment committee memo. Get in touch with our team to see how we handle AI diligence research for PE firms running live processes.


Red Flags and Green Flags: An AI Diligence Scorecard

Use this as a practical checklist during any deal involving technology exposure. It won't replace deep diligence, but it will help you triage quickly and focus your research where it matters most.

Red Flags 🚩

  • Wrapper-on-API with no proprietary data: The target's "AI" is built entirely on third-party models (e.g., OpenAI, Anthropic) with no unique training data or fine-tuning. Valuation should reflect API-dependent multiples (3–4× ARR), not proprietary AI multiples.
  • Per-seat pricing with no transition plan: Revenue is 100% tied to headcount in departments likely to shrink due to AI automation, and management has no credible plan to shift pricing models.
  • AI claims unsupported by financial results: The CIM highlights AI capabilities, but there's no measurable impact on revenue growth, retention, or margins. Remember: only 6% of companies achieve meaningful EBIT impact from AI.
  • No system-of-record status: The product is a "nice to have" — a layer that can be bypassed, replaced, or absorbed into a platform. AI agents don't need nice-to-haves.
  • High technical debt and fragmented data: Legacy architecture that prevents the target from effectively deploying or benefiting from AI. This suppresses both defensibility and value creation potential.
  • Management can't articulate AI strategy beyond buzzwords: If the CEO can't explain specifically how AI will affect their customers, competitors, and pricing model, they haven't done the work.
  • Customer concentration in AI-vulnerable segments: Revenue heavily dependent on customers in sectors undergoing rapid AI-driven headcount reduction.

Green Flags ✅

  • Proprietary training data with network effects: The product generates unique data through usage that improves the AI over time — and this data cannot be scraped or synthetically reproduced.
  • System-of-record status in a regulated industry: The product is the operational backbone for customers in healthcare, financial services, or government, with compliance logic that creates structural switching costs.
  • Outcome-based or usage-based pricing already in place: Revenue is tied to transactions, outcomes, or consumption — not to headcount. This model is structurally aligned with an AI-enabled world.
  • Embedded fintech revenue: Payment processing, lending, or insurance embedded in the product creates revenue streams that grow with customer transaction volume, not employee count.
  • Deep vertical expertise: Battery Ventures argues that tech companies with a deep grasp of end-markets are more resilient to AI disintermediation. Vertical SaaS continues to outpace horizontal platforms in private deal flow.
  • Demonstrable customer reliance on AI features: Validated through primary research — customers actively use and value the AI capabilities, and they cite them as reasons for renewal.
  • Management team with technical depth: Leadership includes people who understand AI at an implementation level, not just a strategic narrative level, and can execute a credible roadmap.

Pricing Model Risk: Modelling the Per-Seat-to-Outcome Transition

The shift from per-seat to outcome-based pricing is one of the most consequential — and least understood — dynamics in AI-era deal modelling. Get it wrong, and your underwriting assumptions collapse mid-hold.

Why It Matters

Traditionally, SaaS sold access to a tool. Now, agentic software is about delivering outcomes. If AI reduces the need for human workers in a customer's department, per-seat revenue shrinks mechanically — even if the customer is perfectly happy with the product.

Consider a target that sells software to law firms priced per paralegal seat. If AI reduces the need for paralegals by 40%, revenue drops by 40% — regardless of product quality or customer satisfaction.

How to Model It

Scenario 1 — Status quo: No pricing model change. Model seat count decline of 20–50% across the customer base over the hold period. This is your downside case.

Scenario 2 — Managed transition: The target shifts to outcome-based or consumption-based pricing over 12–24 months. Model the NRR impact during transition (expect a dip), the new steady-state revenue per customer, and the expanded addressable market. Note: AI company gross margins typically run 50–60%, compared to traditional SaaS at 80–90%. Factor this into EBITDA projections.

Scenario 3 — AI-enabled expansion: The target uses AI to expand its value proposition — new product tiers, new use cases, new customer segments. This is the upside case, but it requires evidence from primary research that customers would actually pay for expanded capabilities.

Key metric to stress-test: Revenue per customer, not revenue per seat. If a target can maintain or grow revenue per customer even as seat counts decline, the pricing transition is working.
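The first two scenarios, tracked on revenue per customer, can be sketched as follows. All inputs (seat counts, prices, usage revenue, transition dip) are hypothetical placeholders chosen for illustration.

```python
# Sketch of Scenarios 1 and 2 above, tracked on revenue per customer
# rather than revenue per seat. Inputs are hypothetical placeholders.

def status_quo(seats: float, price: float, seat_decline: float) -> float:
    """Scenario 1: per-seat pricing, seats shrink, nothing else changes."""
    return seats * (1 - seat_decline) * price

def managed_transition(seats: float, price: float, seat_decline: float,
                       usage_revenue: float, transition_dip: float) -> float:
    """Scenario 2: shrinking seat revenue plus new consumption revenue,
    discounted for churn and repricing friction during the transition."""
    return (seats * (1 - seat_decline) * price + usage_revenue) * (1 - transition_dip)

# One illustrative customer: 50 seats at $1,200, 40% seat decline.
sq = status_quo(50, 1_200, 0.40)                        # ~36,000
mt = managed_transition(50, 1_200, 0.40, 30_000, 0.10)  # ~59,400

print(f"Status quo revenue/customer:         ${sq:,.0f}")
print(f"Managed transition revenue/customer: ${mt:,.0f}")
```

Under these assumed inputs the same customer is worth materially more post-transition despite identical seat loss, which is exactly what the revenue-per-customer stress test is designed to surface.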


Turning AI Risk Into Value Creation

The smartest dealmakers aren't just using AI diligence to kill deals — they're using it to uncover how AI could unlock new efficiencies, growth levers, and entirely new business models. Most acquirers report that AI diligence has convinced them to walk away from at least one deal. But the best ones are also finding that the AI narrative, properly investigated, reveals value creation opportunities the seller hasn't yet captured.

Building AI Into the 100-Day Plan

Diligence findings should feed directly into a post-close value creation roadmap. BCG estimates that 70% of potential value from AI is concentrated in core business functions — meaning the opportunities are in operations, not moonshots.

Practical priorities for the first 100 days:

  • Quick wins: Internal AI deployment for customer support, sales enablement, or content generation. These can show EBITDA impact within 3–6 months.
  • Product roadmap: AI-powered features identified during diligence that customers validated as valuable. Prioritise features that deepen workflow embeddedness and expand data moats.
  • Pricing model evolution: If diligence revealed per-seat risk, begin planning the transition to outcome-based pricing with specific timelines and customer communication strategies.
  • Data architecture remediation: If technical debt or data fragmentation was flagged during diligence, address it early — it's a prerequisite for everything else.

The key discipline: any AI bet as a portfolio value driver must be grounded in validated use cases with a measurable line of sight to EBITDA in six months or less. Diligence gives value creation teams a head start on shaping these plans — but only if the diligence was thorough enough to surface specific, actionable opportunities.


Illustrative Scenarios: How AI Risk Varies by Target Type

AI disintermediation risk is not one-size-fits-all. The technology's impact varies significantly based on where the target plays within its industry. Here are three illustrative scenarios to show how the same framework produces very different conclusions.

Scenario A: Horizontal Collaboration Tool (Revolution Risk)

A project management SaaS with per-seat pricing selling into mid-market companies. Revenue is tied directly to employee headcount. The product aggregates tasks and communication but doesn't own a financial or regulatory outcome. AI agents can increasingly coordinate tasks, generate status updates, and manage workflows natively.

Verdict: High disintermediation risk. The product is a coordination layer that AI can collapse. Per-seat revenue will compress as teams shrink. Limited data moat — task data is not proprietary in a defensible way.

Scenario B: Vertical SaaS With Embedded Payments (Augmentation)

A practice management platform for dental offices with integrated payment processing, insurance claims, and patient scheduling. Deep regulatory compliance logic. Revenue split: 40% SaaS subscriptions, 60% embedded fintech (payment processing fees on patient transactions).

Verdict: Low disintermediation risk. System-of-record status in a regulated vertical. Transaction-based revenue grows with patient volume, not headcount. AI enhances the platform (e.g., automated appointment reminders, predictive scheduling) rather than replacing it. Strong data moat from years of practice-specific operational data.

Scenario C: BPO / Outsourced Services (Revolution Risk)

A business process outsourcing firm providing data entry, document processing, and basic analytical services to financial institutions. Margins depend on labour arbitrage. Pricing is cost-plus based on headcount deployed.

Verdict: Very high disintermediation risk. AI can perform core tasks faster, cheaper, and more accurately. The entire value proposition — human labour at lower cost — is directly threatened. Primary research with customers would likely reveal they're already piloting AI alternatives internally.


A Note on Timing: The Disruption Timeline May Be Longer Than the Market Thinks

It's worth noting a contrarian view supported by the data: the AI disruption narrative is real, but the timeline may be more gradual than recent price action suggests.

AI-related disruption accounts for approximately 30–40% of the public-market software sell-off, with the rest driven by pre-existing revenue deceleration and budget reallocation to AI infrastructure. Gartner projects that over 40% of agentic AI projects will be cancelled by the end of 2027. And Battery Ventures is taking a long view, arguing that the AI transformation will take many years to play out.

For PE deal teams, this creates a nuanced opportunity. Companies with intact moats whose valuations have been dragged down by sector-wide panic may represent compelling entry points. Strategic and financial acquirers including KKR, Blackstone, and Vista Equity Partners have continued to pay 7× to 32× revenue multiples for software businesses with structural moats, even amid the broader repricing.

The key is distinguishing between targets that are genuinely at risk and those that have been unfairly caught in the downdraft. That distinction, once again, comes down to primary research.


Conclusion: The Primary Research Imperative

AI disintermediation risk assessment is no longer optional in any technology-adjacent acquisition. It directly impacts entry valuation, hold-period risk, and exit multiples. But the most important insight from this guide is this: the highest-value questions in AI diligence cannot be answered by documents, datasets, or AI tools.

Whether a target's moat is real. Whether customers would switch to an AI-native alternative. Whether competitors are six months away from launching something better. Whether the "proprietary AI" is actually just a thin wrapper on a third-party API. These are questions that require talking to the people who know — customers, competitors, technologists, and channel partners.

That's primary research. And in a compressed diligence timeline, the deal teams that get those answers fastest and most reliably are the ones making better investment decisions.

Woozle Research exists for exactly this moment. Our done-for-you model means you brief us on the target and the questions, and we deliver finished research — expert interviews, customer sentiment, competitive intelligence — ready for your IC memo. No scheduling, no transcript synthesis, no wasted weeks. Just the insights you need to make the call.

Running a deal where AI risk is a question? Talk to our team about how we can support your AI diligence with primary research that no data room can replace.