Vendor Risk · AI Governance · NIS2 · Supply Chain · Third-Party Risk

Third-Party AI Risk: Your Vendors Are Using AI — Here's Why That's Your Problem

February 8, 2026 · 6 min read

Remember when the biggest third-party risk was a vendor getting hacked?

That was simple. A breach at your supplier, some stolen credentials, maybe a compromised software update. You could model it, assess it, put it on a risk register. The attack surface was knowable.

Now your vendors are deploying AI agents that process your data autonomously — and most of them haven’t told you.

Your CRM vendor quietly rolled out an AI assistant that reads every support ticket, including the ones containing your customers’ personal data. Your cloud provider started training models on usage telemetry that includes your proprietary configurations. Your HR software vendor outsourced resume screening to an AI subprocessor you’ve never heard of — and that subprocessor runs inference on servers in a jurisdiction your DPO would lose sleep over.

This isn’t hypothetical. This is happening right now, across your vendor portfolio, and under NIS2 Article 21(2)(d), it’s explicitly your responsibility to manage it.

The New Third-Party Risk Landscape

Let’s be honest about what changed. Traditional third-party risk management — the kind most organisations still practise — was built for a world where vendors stored your data and occasionally processed it according to well-defined instructions. You could audit their SOC 2, check their encryption standards, review their data processing agreements, and sleep reasonably well at night.


AI broke that model in three fundamental ways.

Your Vendors Are Reading Your Data — Automatically

When a SaaS vendor deploys an AI-powered support system, every ticket you submit gets ingested by a language model. Your bug reports describing internal system architectures. Your feature requests revealing strategic priorities. Your complaints referencing specific employees or customers.

This isn’t a human support agent reading your ticket and forgetting about it. This is a model that may retain patterns from your data indefinitely, influence responses to other customers, or feed into training datasets that persist long after your contract ends.

In late 2025, a major enterprise collaboration platform was found to be using customer workspace data — including private channels and direct messages — to fine-tune its AI assistant features. The disclosure came not from the vendor, but from a security researcher who noticed suspiciously specific autocomplete suggestions that appeared to reference another company’s internal terminology. The vendor’s privacy policy technically permitted it. Buried in paragraph 47.

AI in SaaS Products Is Training on Your Data

The line between “using AI to improve our product” and “training AI on your data” is thinner than most vendors admit. When your project management tool uses AI to suggest task assignments, it’s learning from your team’s work patterns. When your analytics platform offers AI-generated insights, it’s building models that incorporate your business metrics.

Many vendors argue this falls under “product improvement” clauses that have existed in their terms of service for years. They’re not wrong about the contractual technicality. They’re wrong about the risk implications.

A model trained on aggregated customer data creates a novel exfiltration vector. Competitors using the same vendor could theoretically extract insights about your operations through carefully crafted prompts. This isn’t science fiction — research papers have demonstrated successful training data extraction from production language models since 2023.

Subprocessors Are Deploying AI Without Telling Anyone

Here’s where it gets truly opaque. Your vendor contracts with a subprocessor for specific services. That subprocessor, without updating your vendor, deploys AI to handle portions of their workload. Your data is now being processed by an AI system that neither you nor your direct vendor explicitly authorised.

This happens constantly in customer support chains. Your vendor outsources Level 1 support to a managed services provider. That provider deploys an AI triage system that reads, categorises, and sometimes responds to tickets before a human ever sees them. Your sensitive data passes through an AI system that exists in no one’s risk register.

The subprocessor chain for AI processing can run three or four layers deep. Your vendor uses an AI platform, which uses a cloud inference provider, which uses a model hosted by yet another company. Each layer introduces data handling you haven’t assessed and probably don’t know about.

NIS2 Makes This Your Problem

If you’re operating in the EU — and particularly if you’re a German company subject to the NIS2 transposition — this isn’t just a best-practice concern. It’s a legal obligation.

Article 21(2)(d): Supply Chain Security

NIS2 Article 21(2)(d) requires essential and important entities to implement cybersecurity risk-management measures that address “supply chain security, including security-related aspects concerning the relationships between each entity and its direct suppliers or service providers.”

The directive doesn’t carve out an exception for AI. If your vendor’s AI deployment introduces risk to your operations or data, you’re required to assess and manage it. The fact that the AI was deployed without your knowledge doesn’t absolve you — it makes the failure worse, because it means your assessment processes have a blind spot.

German transposition through the NIS2 Implementation Act (NIS2UmsuCG) reinforces this with specific requirements for supply chain risk assessment, including the obligation to consider “the overall quality of products and cybersecurity practices of suppliers and service providers, including their secure development procedures.”

An AI system processing your data is a cybersecurity practice. If you’re not assessing it, you’re not compliant.

GDPR Implications You Can’t Ignore

When your vendor’s AI processes personal data — and it almost certainly does if it touches support tickets, user analytics, or any customer-facing function — GDPR’s data processing requirements apply with full force.

Article 28 requires that processing by a processor shall be governed by a contract that stipulates, among other things, the subject matter and duration of processing, the nature and purpose of processing, and the type of personal data. If your vendor deployed AI processing that isn’t covered by your existing Data Processing Agreement, you have an Article 28 compliance gap.

More critically, Article 35 may require a Data Protection Impact Assessment for AI processing that involves “systematic and extensive evaluation of personal aspects relating to natural persons, including profiling.” Many AI systems in vendor products do exactly this — they evaluate, categorise, and make predictions about individuals based on personal data.

If your vendor is running AI that profiles your customers or employees and you haven’t conducted a DPIA, your exposure isn’t theoretical. It’s the kind of gap that triggers regulatory interest.

DORA ICT Third-Party Risk

For financial entities, DORA (Digital Operational Resilience Act) adds another layer. Chapter V establishes a comprehensive framework for ICT third-party risk management, including requirements to maintain a register of all contractual arrangements with ICT third-party service providers, assess concentration risk, and ensure contractual provisions address data processing specifics.

AI deployed by ICT service providers falls squarely within DORA’s scope. If your banking software vendor starts using AI agents to process transaction data, that’s a material change to your ICT service arrangement that requires assessment under DORA’s framework.
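To make the register requirement tangible, here is a minimal sketch of a vendor register entry extended with AI-specific fields. The field names are illustrative assumptions, not DORA's prescribed register template; adapt them to your own inventory.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One AI/ML system a provider runs against our data (fields are illustrative)."""
    name: str                     # e.g. "support-ticket triage assistant"
    data_categories: list[str]    # categories of our data it touches
    used_for_training: bool       # is our data fed into training or fine-tuning?
    inference_location: str       # jurisdiction/region where inference runs
    operated_by: str              # the vendor itself, or a named subprocessor

@dataclass
class VendorRegisterEntry:
    """Minimal third-party register entry, extended with AI-specific fields."""
    vendor: str
    service: str
    criticality: str              # e.g. whether it supports a critical or important function
    subprocessors: list[str] = field(default_factory=list)
    ai_systems: list[AISystemRecord] = field(default_factory=list)
    last_assessed: date | None = None
```

Even a register this simple lets you filter for vendors whose AI systems train on your data or run inference outside approved jurisdictions.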

The convergence of NIS2, GDPR, and DORA creates a compliance triangle where third-party AI risk sits at the centre. Ignoring it isn’t just risky — it’s a multi-regulation violation waiting to happen.

5 Steps to Manage Third-Party AI Risk

Theory is useful. But you need a programme you can implement. Here are five concrete steps, drawn from what we’ve seen work with organisations navigating this exact challenge.

Step 1: Add AI Clauses to Every Vendor Contract

Your existing vendor contracts almost certainly don’t address AI adequately. Most Data Processing Agreements were written before vendors started deploying AI, and standard cybersecurity clauses don’t cover the unique risks of model training, inference processing, and autonomous data access.

At minimum, your contracts should include:

Disclosure obligations. The vendor must notify you in writing before deploying any AI or machine learning system that processes your data, including AI deployed by subprocessors. Notification must include the type of AI, what data it accesses, where processing occurs, and whether your data is used for model training.

Opt-out rights. You must have the contractual right to opt out of AI processing without degradation of core services. “Use AI or lose features” is not an acceptable position for a data controller.

Training data restrictions. Explicit prohibition on using your data to train, fine-tune, or improve AI models unless you provide specific, documented consent for each use case.

Model provenance requirements. The vendor must disclose the source and version of AI models used to process your data, including updates that change model behaviour.

Here’s language you can adapt: “Vendor shall not deploy, utilise, or permit any subprocessor to deploy artificial intelligence, machine learning, or automated decision-making systems that process Customer Data without prior written notification to Customer, including a description of the AI system, the data it accesses, the purpose of processing, and the location of inference. Customer retains the right to prohibit such processing within 30 days of notification.”
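Once you track clause coverage across your contract portfolio, the gap check itself is trivial to automate. A minimal sketch, assuming you maintain a simple inventory of which AI clauses each contract already contains; the vendor names and clause keys below are hypothetical:

```python
# Sketch: check each vendor contract against the four AI clause categories
# from Step 1 and report what is missing.
REQUIRED_AI_CLAUSES = {
    "ai_disclosure",        # written notice before AI processes our data
    "ai_opt_out",           # opt-out without degradation of core services
    "training_restriction", # no training or fine-tuning on our data without consent
    "model_provenance",     # disclosure of model source, version, and updates
}

contracts = {
    # vendor name -> AI clauses already present (illustrative data)
    "crm-vendor": {"ai_disclosure"},
    "hr-platform": set(),
    "cloud-provider": {"ai_disclosure", "training_restriction"},
}

for vendor, clauses in contracts.items():
    missing = REQUIRED_AI_CLAUSES - clauses
    if missing:
        print(f"{vendor}: missing {', '.join(sorted(missing))}")
```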

Step 2: Deploy an AI-Specific Vendor Questionnaire

Your standard vendor security questionnaire asks about encryption, access controls, and incident response. It probably doesn’t ask about AI. You need a supplementary questionnaire — or an updated one — that specifically addresses AI risk.

Key questions to include:

  • AI inventory: Does your organisation use AI or machine learning systems? If yes, list all AI systems that process, access, or could access our data.
  • Data usage: For each AI system, what categories of our data does it access? Is our data used for model training or improvement? Can you technically separate our data from AI training pipelines?
  • Processing location: Where does AI inference occur? Are AI models hosted by third parties? If yes, identify all parties involved in the inference chain.
  • Model governance: What model versions are currently deployed? How are model updates tested before deployment? Do you conduct bias and fairness assessments?
  • Subprocessor AI: Do any of your subprocessors use AI to process our data? How do you monitor and control subprocessor AI deployments?
  • Data retention: How long does AI-processed data persist? Can you delete our data from model weights if requested? What is your process for responding to data deletion requests that involve AI-processed data?

This questionnaire should be mandatory for any vendor classified as Tier 1 or Tier 2 in your vendor risk programme. For Tier 3 vendors, a simplified self-attestation may suffice — but only if they don’t process sensitive data.
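To keep the questionnaire consistent across vendors and tiers, it helps to maintain it as structured data rather than a document. A minimal sketch, assuming Tier 1 and Tier 2 vendors receive the full set while Tier 3 vendors receive only a short self-attestation; the wording and sections mirror the list above and should be adapted to your programme:

```python
# Sketch: the Step 2 questionnaire as structured data, filtered by vendor tier.
AI_QUESTIONNAIRE = {
    "ai_inventory": [
        "Does your organisation use AI or machine learning systems?",
        "List all AI systems that process, access, or could access our data.",
    ],
    "data_usage": [
        "What categories of our data does each AI system access?",
        "Is our data used for model training or improvement?",
        "Can you technically separate our data from AI training pipelines?",
    ],
    "processing_location": [
        "Where does AI inference occur, and which third parties host the models?",
    ],
    "model_governance": [
        "Which model versions are deployed, and how are updates tested?",
        "Do you conduct bias and fairness assessments?",
    ],
    "subprocessor_ai": [
        "Do any subprocessors use AI to process our data, and how is this controlled?",
    ],
    "data_retention": [
        "How long does AI-processed data persist, and can it be deleted from model weights?",
    ],
}

def questions_for_tier(tier: int) -> list[str]:
    """Tier 1 and 2 get the full questionnaire; Tier 3 gets a short self-attestation."""
    if tier <= 2:
        return [q for section in AI_QUESTIONNAIRE.values() for q in section]
    return AI_QUESTIONNAIRE["ai_inventory"]  # minimal self-attestation
```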

Step 3: Implement Tiered Assessment Based on AI Data Access

Not all vendor AI deployments carry the same risk. A vendor using AI to optimise their internal marketing emails is different from a vendor using AI to process your customers’ health records. Your assessment programme needs to reflect this.

Tier 1 — High Risk: Vendors whose AI systems process personal data, financial data, health data, or proprietary business information. These require full assessment: detailed questionnaire, contract review, technical evaluation, and potentially on-site or virtual audit. Reassessment annually or upon any material change in AI deployment.

Tier 2 — Medium Risk: Vendors whose AI systems process operational data that doesn’t include personal data or trade secrets. These require standard assessment: questionnaire, contract review, and documentation review. Reassessment every 18 months.

Tier 3 — Low Risk: Vendors whose AI systems don’t process your data, or vendors with no AI deployment. These require basic assessment: self-attestation and periodic confirmation. Reassessment every 24 months.

The critical point: tier assignment must be dynamic. A vendor that was Tier 3 yesterday becomes Tier 1 the moment they deploy an AI chatbot that reads your support data. Your programme needs mechanisms to detect these changes — which leads to Step 4.
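The tiering rules are simple enough to encode directly, which also makes the dynamic requirement easy to honour: re-run the classification whenever a vendor's AI footprint changes. A minimal sketch, with the data categories chosen as illustrative assumptions:

```python
SENSITIVE_CATEGORIES = {"personal", "financial", "health", "proprietary"}

def assign_tier(uses_ai: bool, ai_data_categories: set[str]) -> int:
    """Map a vendor's AI data access to the three assessment tiers from Step 3."""
    if not uses_ai or not ai_data_categories:
        return 3  # no AI, or AI that never touches our data
    if ai_data_categories & SENSITIVE_CATEGORIES:
        return 1  # AI processes personal, financial, health, or proprietary data
    return 2      # AI processes operational data only

REASSESSMENT_MONTHS = {1: 12, 2: 18, 3: 24}

# Example: a Tier 3 vendor deploys a chatbot that reads support tickets
# containing personal data -- it immediately becomes Tier 1.
print(assign_tier(uses_ai=True, ai_data_categories={"personal"}))  # -> 1
```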

Step 4: Continuously Monitor for New AI Deployments

Annual vendor assessments are insufficient for AI risk. Vendors deploy new AI capabilities on product release cycles — monthly, sometimes weekly. If you’re assessing AI risk once a year, you’re operating on stale information for eleven months.

Effective continuous monitoring includes:

Vendor communications tracking. Monitor vendor release notes, blog posts, product updates, and terms of service changes for mentions of AI, machine learning, automation, or related terminology. This sounds tedious. It is. Automation helps enormously.

Technical monitoring. Where possible, monitor API responses and product behaviour for indicators of AI processing. New response patterns, changes in processing latency, or the appearance of AI-generated content in vendor outputs can signal undisclosed AI deployments.

Contractual triggers. Require vendors to notify you of AI deployments proactively (per Step 1). Back this up with periodic attestation requests — quarterly for Tier 1 vendors — asking whether any new AI systems have been deployed since the last assessment.

Industry intelligence. Track industry news and security research for reports of vendor AI deployments. Sometimes the security community discovers vendor AI use before the vendor discloses it.
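For the vendor communications tracking described above, even a plain keyword scan over vendors' changelog or release-note pages catches a surprising amount. A minimal sketch using only the Python standard library; the feed URLs are placeholders you would replace with your vendors' actual pages:

```python
import re
import urllib.request

# Placeholder feeds -- substitute your vendors' release-note or changelog URLs.
VENDOR_FEEDS = {
    "example-crm": "https://example-crm.invalid/changelog.xml",
    "example-hr": "https://example-hr.invalid/releases.atom",
}

AI_KEYWORDS = re.compile(
    r"\b(AI|artificial intelligence|machine learning|LLM|copilot|"
    r"assistant|automated decision)\b",
    re.IGNORECASE,
)

def scan_feed(vendor: str, url: str) -> None:
    """Flag release notes that mention AI-related terminology for human review."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            text = resp.read().decode("utf-8", errors="replace")
    except OSError as exc:
        print(f"{vendor}: fetch failed ({exc})")
        return
    hits = sorted({m.group(0).lower() for m in AI_KEYWORDS.finditer(text)})
    if hits:
        print(f"{vendor}: possible AI-related change ({', '.join(hits)}) -- review and re-tier")

for vendor, url in VENDOR_FEEDS.items():
    scan_feed(vendor, url)
```

A hit on this scan is not a finding in itself; it is a trigger to send the Step 2 questionnaire and re-run the Step 3 tier assignment.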

Step 5: Secure the Contractual Right to Audit AI Processing

The right to audit is standard in most vendor contracts. The right to audit AI processing specifically is not. You need both.

Your AI audit rights should include:

  • The right to request and receive documentation of all AI systems processing your data, including model cards, training data descriptions, and performance metrics.
  • The right to conduct or commission technical assessments of AI systems, including adversarial testing, bias evaluation, and data leakage analysis.
  • The right to review AI-related incident logs, including instances of hallucination, data exposure, or unexpected behaviour.
  • The right to receive evidence of AI model deletion upon contract termination — not just data deletion, but confirmation that your data has been removed from training datasets and model weights to the extent technically feasible.

For Tier 1 vendors, consider requiring an annual AI-specific audit right independent of your general audit provisions. AI systems change frequently enough that bundling AI audits with general security audits often means AI gets superficial coverage.

How We Approach This at dig8ital

We built the Vendor Risk Agent because we faced this exact problem with our own clients — and realised that managing third-party AI risk manually doesn’t scale.

The Vendor Risk Agent automates the heavy lifting across all five steps:

Automated questionnaire generation. Based on vendor tier and data classification, the agent generates tailored AI risk questionnaires, distributes them, tracks responses, and flags gaps or concerning answers.

Contract clause tracking. The agent monitors your vendor contracts for AI-related provisions, identifies gaps against your policy requirements, and generates remediation recommendations with specific language you can send to your legal team.

Continuous monitoring. The agent tracks vendor product releases, terms of service changes, and public AI announcements across your entire vendor portfolio. When a vendor announces a new AI feature that could affect your data, you get an alert — not six months later during the next assessment cycle.

NIS2 compliance mapping. Every vendor AI risk is mapped to specific NIS2 requirements, giving you audit-ready documentation that demonstrates compliance with Article 21(2)(d) supply chain security obligations.

The result isn’t just better risk visibility. It’s a programme that actually keeps pace with how fast vendors are deploying AI — which, right now, is very fast indeed.

What Happens If You Don’t Act

The window for getting ahead of this is closing. Vendors are deploying AI at an accelerating rate. Regulators are sharpening their focus on supply chain risk. And the first enforcement actions under NIS2 will inevitably target organisations that failed to assess obvious, well-publicised risks in their vendor relationships.

Third-party AI risk isn’t a future problem. It’s a current gap in most organisations’ risk programmes. The good news: the five steps above are implementable. You don’t need to boil the ocean. You need to start with your Tier 1 vendors, get AI clauses into your next contract renewals, and build monitoring from there.

Ready to get started? Download our Third-Party AI Risk Questionnaire template — it’s the same one we use with our clients, covering all the questions from Step 2 plus scoring guidance.

Or if you want to see how the Vendor Risk Agent handles this end-to-end, get in touch. We’ll walk you through a live assessment of your vendor portfolio. No slides. Just your actual vendors, assessed against the framework in this article.

Your vendors are using AI. The only question is whether you know about it — and whether you’re managing the risk before a regulator asks you to prove it.
