
AI’s increasing role in data leaks

The Baldwin Group | Updated: March 16, 2026 | 6 minute read

Data risk has always been dynamic, and the proliferation of artificial intelligence (AI) is accelerating both the speed and scale of digital exposure. Today, AI is introducing new and often less visible pathways for sensitive information to escape organizational control.

Unlike traditional breaches driven by external attackers, AI-related data leaks often stem from well-intentioned internal use. The result is a growing category of exposure that blends human error, automation, and opaque systems—challenging existing data governance models.

As AI-driven systems touch more sensitive data, organizations face heightened cyber exposure alongside new questions around insurance coverage, regulatory accountability, and executive oversight. Understanding how AI contributes to data leaks is now essential for organizations managing cyber risk, regulatory obligations, and reputational trust.

AI does not create risk in isolation. Instead, it amplifies existing vulnerabilities by accelerating how data is accessed, processed, and reused. AI reshapes data exposure via:

  • Scale and speed – AI systems process massive volumes of data quickly, so a single misconfiguration or misuse can expose far more information than traditional tools.
  • Opacity – Many AI models operate as “black boxes,” making it difficult to track exactly how data is used, stored, or inferred.
  • Blended environments – AI tools often sit outside traditional IT boundaries, especially when employees use third-party or consumer-grade platforms.

These dynamics make AI-related leaks harder to detect, attribute, and remediate, particularly when no malicious actor is involved.

Traditional data-loss prevention strategies assume relatively static systems and clear data boundaries. AI challenges those assumptions due to:

  • Difficulties tracing how data flows through models and outputs
  • Expanding privacy and AI-specific regulations increasing compliance exposure
  • Challenges attributing an incident to a breach, misuse, or a design flaw

From a risk perspective, this blurring complicates incident response, regulatory notification, and insurance recovery.

As AI becomes embedded in daily business operations, data leaks are increasingly linked to how these systems are used, trained, and integrated. The examples below highlight common scenarios where AI can unintentionally expose sensitive information, often without triggering traditional breach indicators.

One of the most prevalent AI-related data risks arises when employees enter sensitive information such as contracts, customer data, source code, or financial details into generative AI tools to summarize, analyze, or draft content. Even in the absence of a breach, this information may be retained, logged, or resurface in future outputs; without clear policies and controls in place, the result is unintended exposure.
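One common mitigation for this scenario is to screen text for sensitive patterns before it leaves the organization. The sketch below is illustrative only: the patterns are simplified assumptions, and a production deployment would rely on a vetted data-loss-prevention tool rather than hand-rolled regular expressions.

```python
import re

# Hypothetical patterns for illustration; real DLP tooling covers far more cases.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),   # card-like digit runs
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each sensitive pattern with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Screen a prompt before it is sent to an external generative AI service.
prompt = "Summarize: contact jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))
```

Gateways like this do not eliminate the exposure, but they narrow what can leak through everyday prompting and create an auditable control point.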

If organizations use internal data to train or fine-tune AI models, weak anonymization or governance may allow sensitive information to become embedded in model outputs. This exposure is particularly acute in industries handling health, financial, or personal data, where models may reproduce or enable reconstruction of protected information.

AI-powered systems automate customer interactions, underwriting, fraud detection, and claims processing, increasing the risk of over-sharing, improper access, or accidental disclosure due to errors in logic, permissions, or integrations.

Because these processes operate continuously, issues may persist unnoticed for extended periods.

AI capabilities embedded in vendor platforms such as CRM, HR, marketing, and analytics tools introduce new data-sharing relationships and uneven security standards. When a data leak originates within a partner’s AI system, contractual ambiguity can make responsibility and liability unclear.

While AI introduces new data-leak pathways, it is also becoming a powerful tool for detecting, preventing, and responding to cyber threats, including the very exposures it can create.

Security teams are increasingly using AI to:

  • Identify anomalies faster by analyzing network traffic, user behavior, and access patterns at scale
  • Detect insider risk and misuse by flagging unusual data access or AI tool usage
  • Improve incident response through automated triage, prioritization, and containment
  • Reduce alert fatigue by filtering noise and surfacing high-confidence threats

In the context of data leaks, AI-enabled monitoring can help organizations spot subtle indicators that traditional controls may miss.
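As a minimal sketch of the anomaly-spotting idea, the example below flags users whose daily record-access count deviates sharply from their own historical baseline using a simple z-score. The data and threshold are hypothetical; commercial monitoring platforms use far richer behavioral models.

```python
from statistics import mean, stdev

def flag_anomalies(history: dict[str, list[int]], today: dict[str, int],
                   threshold: float = 3.0) -> list[str]:
    """Flag users whose access count today is far above their own baseline.

    history: per-user daily access counts from a baseline period (illustrative data).
    today:   per-user access count for the current day.
    """
    flagged = []
    for user, counts in history.items():
        mu, sigma = mean(counts), stdev(counts)
        if sigma == 0:
            sigma = 1.0  # avoid division by zero for perfectly flat baselines
        if (today.get(user, 0) - mu) / sigma > threshold:
            flagged.append(user)
    return flagged

history = {"alice": [40, 42, 38, 41, 39], "bob": [10, 12, 11, 9, 13]}
today = {"alice": 44, "bob": 250}  # bob's spike could indicate bulk exfiltration
print(flag_anomalies(history, today))  # → ['bob']
```

Even this crude baseline illustrates the principle: the signal is not any single access, but a deviation from established behavior, which is exactly the kind of indicator rule-based controls tend to miss.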

In 2025, organizations that did not use AI or automation had an average breach cost of $5.52M, while those that used these technologies extensively had an average breach cost of $3.62M. Source: IBM

However, AI’s defensive value depends on how intentionally it is deployed. Models trained on incomplete or biased data can miss threats or generate false confidence. Automated tools without human oversight may act quickly, but not always correctly. And security teams still need visibility into how AI systems make decisions to validate outcomes.

The takeaway is not that AI offsets its own risks automatically. Rather, AI can strengthen cyber defense when paired with strong governance, human judgment, and clear accountability.

Organizations that integrate AI into security operations thoughtfully are better positioned to manage both the benefits and the exposures that come with it.

From an insurance perspective, AI-related data leaks complicate traditional cyber loss scenarios, creating challenges around coverage interpretation, attribution of fault, and policy alignment—particularly when cyber, technology E&O, and management liability policies intersect.

Directors and officers may face heightened scrutiny around duty of care and risk oversight, including whether AI risks were identified and assessed, data protection responsibilities were clearly assigned, and appropriate controls and escalation mechanisms were in place. In the event of a significant AI-related data leak, questions of disclosure and governance can extend beyond the IT function, potentially triggering management liability considerations alongside cyber claims.

Insurers and regulatory bodies are placing greater emphasis on how risk is governed before a loss occurs, not just the incident itself. Underwriters increasingly assess AI-enabled security controls as a positive factor when they are demonstrable, documented, and governed. When paired with employee training and clear AI-use policies, AI-driven monitoring, access controls, and incident response are viewed as indicators of mature cyber risk management.

AI risk doesn’t stop at a data breach. It can extend into D&O, employment practices, and fiduciary liability, often in ways organizations don’t anticipate. Explore how AI is reshaping management and cyber liability, and what you can do to stay ahead of it.

Regulators are paying closer attention to how AI systems handle personal and sensitive data, with privacy frameworks increasingly emphasizing purpose limitation, data minimization, and decision explainability.

Reducing AI-related data-leak exposure is therefore not only a cybersecurity priority—it is a loss prevention and insurability issue. As insurers evaluate how organizations deploy and govern AI, proactive, well-documented controls increasingly influence underwriting decisions, coverage terms, and pricing. Key priorities include:

  • Clear AI use policies that define acceptable use and can be demonstrated during underwriting or post-incident review
  • Data classification and segmentation to restrict sensitive information and limit exposure in AI training and workflows
  • Vendor and contract oversight to clarify data responsibility, security expectations, and breach obligations
  • Employee guidance and training focused on practical, real-world AI use
  • Insurance alignment to ensure cyber, privacy, and technology E&O coverage reflects actual AI deployment

Ideally, AI governance should be integrated with an organization’s broader governance practices as part of a comprehensive risk mitigation framework. Explore our cyber checklist to help establish an oversight structure that supports safe, responsible, and scalable AI use.

AI will continue to evolve, and so will the ways it interacts with sensitive information. Clear governance, cross-functional alignment, and a coordinated strategy are crucial to managing AI-related data-leak exposure.

As regulatory expectations rise and insurers place greater emphasis on demonstrable oversight, organizations that take a proactive, structured approach will be better positioned to protect data, manage liability, and support insurability. The Baldwin Group’s Cyber Center of Excellence brings together cybersecurity, data governance, and insurance expertise to help organizations assess AI-driven risk, strengthen controls, and align coverage with real-world exposure.

Connect with our team today to build a strategy that helps safeguard your organization and empower resilience.
