We asked privacy leaders about AI at the IAPP Global Summit. Here's what they said.

April 3, 2026 · 3 min read

Every year, the IAPP Global Summit is where the privacy world takes stock of what’s changed. This year, the shift was clear: AI is already operational across most organizations, but for compliance teams, the work of governing it is just getting started.

We ran a short AI readiness survey at the Transcend booth, capturing responses from privacy and compliance leaders that reinforce a pattern we’re seeing more broadly: AI deployment is accelerating, but the governance infrastructure underneath it is lagging.

AI is already in production across most organizations

70% of respondents said their organization had made some or significant AI progress, with use cases already in production. Only a small fraction were still in early exploration, or had made no meaningful progress at all.

This isn’t a pilot-stage audience. These are organizations that have already crossed the threshold from evaluating AI to operating with it. The compliance implications of that shift are significant.

But governance infrastructure hasn’t kept pace

The most basic form of governance is knowing what you have. Yet fewer than one in five respondents said their organization has a complete, actively maintained inventory of which AI systems can access customer data, even as most already have AI in production. Nearly two-thirds are still building that inventory.

That gap between production and visibility is one of the most common compliance risks organizations face today.

But the impact shows up even more clearly in day-to-day operations.

A third of respondents said they are still managing AI-related DSRs manually, case by case, and another 15% are not addressing them at all. Only about one in five have operationalized a defined process.

Consent shows a similar pattern. While 35% report having a framework for AI training data, only 15% have fully implemented and consistently enforced those policies in practice.

Taken together, the responses point to a consistent pattern: organizations are scaling AI faster than they’re building the systems required to govern it.

Privacy teams aren’t resistant to AI, but they want the right tools

The most interesting finding in the survey wasn’t about gaps. It was about appetite.

When we asked how respondents would feel about a data compliance platform with built-in agentic AI features like Transcend’s Agentic Assist, 81% said they were open to it. More than half of that group said they’d want it specifically with strong controls: guardrails, an on/off switch, and the ability to disable it at any time.

The responses offer a clear view of what “AI done right” looks like for compliance professionals. They want productivity gains. They want to move faster. They just want to hold the keys.

The shift toward agentic models in compliance

What emerges from these responses is not a privacy function that’s behind in AI, but one that’s been given a broader mandate without the infrastructure to support it. Teams are expected to govern systems they can’t fully inventory, operationalize DSRs without defined workflows, and enforce consent policies that aren’t consistently applied, all while the rest of the business continues to move faster.

In that context, the challenge is less about expertise and more about systems. Privacy teams don’t need more frameworks; they need ways to operationalize governance in practice, with automation, enforcement, and control built into how data is actually used.

Agentic models offer one path forward, not as another layer of abstraction, but as a way to embed governance directly into workflows. That direction also came through in conversations and demos on the floor, where teams responded less to the idea of AI itself and more to its ability to reduce manual work while maintaining clear guardrails.


Interested in learning how Transcend can drive more confident AI deployments? Book a demo today.

By Leah Walling

Senior Manager, ABM and Field Marketing
