4 Lessons You Need To Know from IAPP AI Governance Global 2024

June 7, 2024 • 2 min read


IAPP’s AI Governance Global (AIGG) 2024 in Brussels was another productive convening for privacy professionals grappling with AI governance. We heard from privacy executives from around the world, and four key lessons emerged across the industry. Here’s your executive summary, whether you missed the conference or want to compare notes on your own experience.

Four key lessons from IAPP AIGG:

  1. The shift from theoretical to practical is underway
  2. Privacy professionals are best suited to tackle AI governance challenges
  3. Training AI and data minimization are at odds
  4. AI governance needs next-generation data governance

The shift from theoretical to practical is underway

At the first IAPP AI Governance Global last year, discussions were largely theoretical, focused on philosophical debates about AI's long-term impact, ethical guidelines, and risks like bias and privacy harms. In its second year, the need for practical strategies has become clear.

Privacy professionals are eager for actionable approaches that advance AI while ensuring data integrity and security. With significant developments like the EU AI Act and new US regulations, there's growing demand for global collaboration and clear, practical guidelines that aren't overly burdensome. Keep reading for our practical framework for AI governance.

Privacy professionals are best suited to tackle AI governance challenges

Two related questions continued to crop up throughout the conference: Where do privacy and AI governance overlap? And how should privacy professionals address those pieces that don’t fall under a traditional privacy purview?

In terms of overlap, privacy and AI governance both require giving consumers ways to access and erase their data, offering opt-out mechanisms, maintaining compliant data retention policies, and ensuring robust data security. Many of the workflows and actions privacy professionals have established in the years since GDPR was passed can be applied directly to governing AI.

While the areas where privacy and AI governance diverge are material – copyright and IP, bias in AI outputs, and the potential for social manipulation – the experience privacy professionals have gained navigating new laws and operating across a wide spectrum of risk can be readily extended to address the unique compliance considerations of AI systems.

In fact, one standout moment was the “Practitioner's Guide to AI Governance: Insights from Legal, AI & Privacy Leaders” panel, featuring Ben Brook, Transcend Co-founder & CEO; Alex Rindels, Head of Legal at Jasper.ai; Yann Padova, Partner at Wilson Sonsini Goodrich & Rosati and IAPP Country Leader for France; and Alan Wilemon, Director at INQ Consulting.

With nearly 150 attendees, this session brought together a roundtable of advisors from the front lines of AI governance, offering on-the-ground insights from year one of the AI revolution. The panel covered:

  • Different approaches to AI governance—from practical policies to technical guardrails
  • Where AI governance overlaps with privacy, where it doesn’t, and how privacy professionals can bridge the gap with strong cross-functional teams
  • AI risks within privacy, as well as those that fall outside privacy’s purview
  • Regulators’ main focus and the resulting imminent risks for AI deployers

Learn more by reviewing the slides from the panel.

Training AI and data minimization are at odds

Training AI and operationalizing data minimization are inherently conflicting objectives. Businesses developing AI systems must balance the vast amounts of data needed for machine learning training and development against the need to adhere to data minimization, a foundational privacy principle.

This balancing act is crucial, as it directly impacts both the effectiveness of AI models and the privacy of individuals. Privacy leaders must help companies navigate the tension with clear eyes and a collaborative approach, acting as a conduit for AI development and business innovation while respecting privacy regulations and limiting data use to what is necessary.

AI governance needs next-generation data governance

As the EU AI Act gets its footing and regulators pursue enforcement against AI deployers under existing privacy laws, it’s become increasingly evident to privacy professionals that legacy privacy tooling simply can’t keep pace.

When introducing keynote speakers Nick Clegg, Meta’s President of Global Affairs, and Alexandra Reeve Givens, President & CEO of the Center for Democracy & Technology, Transcend CEO Ben Brook noted:

"The world-changing power of AI will raise the bar for all of us as governance professionals. Superficial box-checking and band-aid fixes will no longer cut it."


It’s now clearer than ever that, to rise to the challenge of AI governance, leading organizations need a next-generation privacy partner that manages data at the code level.

We typically see companies follow four crucial steps to establish effective AI governance:

  1. Deploy proactive risk management tools, like smart risk assessments, to identify potential AI hazards to public safety and privacy, as mandated by the EU AI Act.
  2. Transition from outdated “survey methods” to real-time data classification to establish a single, accurate source of truth that teams can use to track personal data use across the organization.
  3. Ensure accountability and ethical data handling by monitoring and enforcing policies, especially for large language models (LLMs), at the technical level.
  4. Implement solutions to manage and enforce consent preferences, allowing users to opt out of AI training or automated decision-making, as AI regulations increasingly emphasize user consent (see the sketch after this list).
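To make that last step concrete, here's a minimal, hypothetical sketch of gating records out of an AI training set unless the user has opted in. The `ConsentStore`, its methods, and the `ai_training` purpose label are illustrative assumptions for this post, not a real product API; in practice, a check like this would sit in the data pipeline ahead of any training job.

```python
# Hypothetical sketch: exclude a user's data from AI training unless they
# have affirmatively opted in to the "ai_training" purpose.
from dataclasses import dataclass


@dataclass
class UserRecord:
    user_id: str
    text: str


class ConsentStore:
    """Illustrative in-memory store of per-user consent preferences."""

    def __init__(self):
        self._prefs = {}  # user_id -> {purpose: allowed}

    def set_preference(self, user_id: str, purpose: str, allowed: bool) -> None:
        self._prefs.setdefault(user_id, {})[purpose] = allowed

    def is_allowed(self, user_id: str, purpose: str) -> bool:
        # Default to False: no recorded consent means the data stays out.
        return self._prefs.get(user_id, {}).get(purpose, False)


def filter_training_data(records, consents: ConsentStore):
    """Keep only records whose owners have opted in to AI training."""
    return [r for r in records if consents.is_allowed(r.user_id, "ai_training")]


if __name__ == "__main__":
    consents = ConsentStore()
    consents.set_preference("u1", "ai_training", True)
    consents.set_preference("u2", "ai_training", False)  # opted out

    data = [UserRecord("u1", "support ticket text"), UserRecord("u2", "chat log")]
    print([r.user_id for r in filter_training_data(data, consents)])  # ['u1']
```

The design choice worth noting is the default-deny behavior: absent an explicit opt-in, the record is excluded, which mirrors how consent-based AI regulations generally expect deployers to treat personal data.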

After speaking with hundreds of privacy and data governance professionals over the course of IAPP AIGG, we’re even more convinced of the value of this approach.

