By Phyllis Fang
June 7, 2024
The 2024 edition of the IAPP’s AI Governance Global (AIGG) in Brussels was another productive convening for privacy professionals grappling with AI governance. We heard from privacy executives from all over the world, and four key lessons emerged from the collective industry. Here’s your executive summary, whether you missed the conference or want to compare notes on your own experience.
Four key lessons from IAPP AIGG:
Last year, at the first IAPP AI Governance Global, discussions were largely theoretical, focusing on philosophical debates about AI's long-term impact, ethical guidelines, and risks like bias and privacy. In its second year, however, the need for practical strategies has become clear.
Privacy professionals are eager for actionable approaches to advance AI while ensuring data integrity and security. With significant developments like the EU AI Act and new US regulations, there's a growing demand for global collaboration and clear, practical guidelines that aren't overly burdensome. Keep reading for our practical framework for AI Governance.
Two related questions continued to crop up throughout the conference: Where do privacy and AI governance overlap? And how should privacy professionals address those pieces that don’t fall under a traditional privacy purview?
In terms of overlap, privacy and AI governance share the need to provide ways to access and erase consumer data, provide opportunities for opt-out, maintain compliant data retention policies, and ensure robust data security. Many of the workflows and actions privacy professionals have established in the years since GDPR was passed can be applied to governing AI.
While the areas where privacy and AI governance diverge are material – issues surrounding copyright and IP, bias in AI outputs, and the potential for social manipulation – the experience privacy professionals have gained navigating new laws and operating across a wide spectrum of risk can be readily extended to address the unique compliance considerations of AI systems.
In fact, one standout moment was the “Practitioner's Guide to AI Governance: Insights from Legal, AI & Privacy Leaders” panel—featuring Ben Brook, Transcend Co-founder & CEO, Alex Rindels, Head of Legal at Jasper.ai, Yann Padova, Partner, Wilson Sonsini Goodrich & Rosati, IAPP Country Leader for France, and Alan Wilemon, Director at INQ Consulting.
With nearly 150 attendees, this session brought together a roundtable of advisors from the front lines of AI governance, offering expert, on-the-ground insights from year one of the AI revolution. Learn more about the topics our experts covered by reviewing the slides from the panel.
Training AI and operationalizing data minimization are inherently conflicting objectives. Businesses developing AI systems face the challenge of balancing the need for vast amounts of data to support machine learning training and development with the necessity to adhere to this foundational privacy principle.
This balancing act is crucial because it directly impacts both the effectiveness of AI models and the privacy of individuals. Privacy leaders must help companies navigate this tension with clear eyes and a collaborative approach, acting as a conduit for AI development and business innovation while respecting privacy regulations and minimizing data usage to what is necessary.
As the EU AI Act gets its footing and regulators pursue enforcement against AI deployers under existing privacy laws, it’s become increasingly evident to privacy professionals that legacy privacy tooling simply can’t keep pace.
When introducing keynote speakers Nick Clegg, Meta’s President of Global Affairs, and Alexandra Reeve Givens, President & CEO of the Center for Democracy & Technology, Transcend’s CEO Ben Brook noted:
"The world-changing power of AI will raise the bar for all of us as governance professionals. Superficial box-checking and band-aid fixes will no longer cut it."
It’s now clearer than ever that, to rise to the challenge of AI governance, leading organizations need a next-generation privacy partner that manages data at the code level.
We typically see companies follow four crucial steps to establish effective AI governance.
After speaking with hundreds of privacy and data governance professionals over the course of IAPP AIGG, we’re even more convinced of the value of this approach.