4 AI Governance Recommendations for Privacy Professionals

By Morgan Sullivan

Senior Content Marketing Manager II

July 18, 2024 · 5 min read


Privacy professionals – with their years of experience in building effective data governance structures and compliance initiatives – are perfectly situated to lead the AI governance charge.

But even as there’s more clarity on who’s responsible for AI governance, we often still hear the question: How do we get started?

At Transcend, we believe AI governance requires a holistic approach—a perspective that’s been reinforced by our conversations with privacy and AI leaders, engagements at key strategic conferences like IAPP’s AI Governance Global, and our work with some of the world’s largest AI companies.

To govern AI effectively, you need to look beyond policies and protocols: engaging deeply across your entire data governance program and operating at the code layer wherever possible.

In IDC's June 2024 Market Share report, which explored recent shifts in the privacy software market against the backdrop of AI, analyst Ryan O’Leary put it simply:

“Glorified spreadsheets will no longer be acceptable forms of software.”

Expanding on this, Transcend CEO Ben Brook noted:

“It’s impossible to govern the future of AI with surveys and manual operations. We’re seeing companies transition their AI governance operations from manual processes to the code layer, helping to reduce risk, improve operational efficiency, power business innovation, and grow customer trust.”

With the EU AI Act finalized and Colorado having passed the first US state AI law, enterprises will need to act fast to be ready for the coming wave of AI enforcement. Here’s what we recommend.

1. Conduct smarter risk assessments

The EU AI Act requires providers (developers) of high-risk AI systems to conduct a Conformity Assessment and, in certain cases, requires deployers to conduct a Fundamental Rights Impact Assessment (FRIA).

A conformity assessment verifies that a high-risk AI system meets the requirements set out within the Act, while the FRIA is intended to preemptively identify the hazards AI systems may pose to fundamental rights, public safety, and consumer privacy.

For both types of assessments, proactive risk identification is key.

Using a next-generation privacy solution like Transcend Assessments can help. Our out-of-the-box AI Impact Assessment template and our newly released EU AI Act template simplify the process end-to-end—empowering your enterprise assessments program to seamlessly identify and mitigate AI risks.

2. Switch from “survey methods” to real-time data discovery and classification

As with most data governance regulations, developing a single source of data truth will make every aspect of EU AI Act compliance simpler.

An up-to-date data inventory enables your compliance team to first conduct an in-depth gap analysis against the law’s requirements, and then offers ongoing real-time insights into where and how personal data is used throughout your organization.

Switching from manual data mapping to real-time data classification and discovery is key to building that unified data inventory. Traditional methods of data mapping (i.e., surveying cross-functional partners to capture data systems, data types, and processing purposes in a spreadsheet) are time-intensive and error-prone.

Meanwhile, with a legacy privacy platform, you’re still often required to conduct extensive internal surveys of data owners, as well as internal enablement sessions to support proper survey completion. With both options, the data mapping process can take months to complete—and all too often, by the time you’re done, the inventory you just finished is already out-of-date.

Transcend’s suite of data discovery and classification solutions empowers you to automatically discover and classify personal data across your entire data ecosystem—no matter how sprawling or complex.
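Transcend’s discovery internals aren’t public, but as a rough sketch of the underlying idea, a code-level classifier scans live records for personal-data patterns and rolls the findings up into an inventory entry—no surveys required. (The pattern set and function names below are hypothetical and far simpler than a production classifier.)

```python
import re

# Hypothetical detection patterns; real classifiers cover many more categories.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def classify_record(record: dict) -> dict:
    """Map each field name to the personal-data categories detected in it."""
    findings = {}
    for field, value in record.items():
        categories = [name for name, rx in PATTERNS.items() if rx.search(str(value))]
        if categories:
            findings[field] = categories
    return findings

def build_inventory(system_name: str, records: list) -> dict:
    """Aggregate per-record findings into a live data inventory entry."""
    categories = set()
    for record in records:
        for cats in classify_record(record).values():
            categories.update(cats)
    return {"system": system_name, "personal_data": sorted(categories)}
```

Because the scan runs against the systems themselves rather than a survey response, the resulting inventory stays current as data flows change.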

3. Consider technical-level policy monitoring and enforcement

Not every company will need technical guardrails for their AI systems or deployments, but for those who do—these guardrails will prove invaluable for enforcing accountability and ensuring AI data is handled ethically throughout its lifecycle.

Many companies already have, or are working to create, policies around employee use and internal deployment of AI. Though this is a step in the right direction, policies are only as effective as their enforcement—and according to the research, enforcement is not yet where it needs to be. In the IDC June 2024 Market Share report, analyst Ryan O’Leary noted:

“54% [of] respondents [in the IDC June 2024 Privacy and Security Survey] currently have guidelines around AI, but only 54% are actually enforcing AI governance guidelines.”

Transcend Pathfinder is our solution for enterprises looking to put technical guardrails on AI use and deployment. An award-winning and patent-pending middleware layer, Pathfinder operates at the code level to govern all data going into and out of large language models (LLMs).

With strong technical guardrails in place – automatically enforcing company policies day-in and day-out – enterprises can adopt new AI technologies with confidence, knowing that proprietary, confidential, and/or sensitive data is safe.
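Pathfinder’s implementation isn’t public; purely as an illustration of what a code-level guardrail can look like, the sketch below wraps an LLM call in a middleware function that redacts sensitive identifiers on the way in and on the way out. (The redaction rules and function names are hypothetical.)

```python
import re

# Hypothetical redaction rules; a production guardrail enforces full company policy.
RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[REDACTED-SSN]"),
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED-EMAIL]"),
]

def redact(text: str) -> str:
    """Mask sensitive identifiers before data crosses the model boundary."""
    for pattern, replacement in RULES:
        text = pattern.sub(replacement, text)
    return text

def guarded_completion(prompt: str, llm_call) -> str:
    """Middleware: enforce policy on data flowing into and out of the model."""
    response = llm_call(redact(prompt))
    return redact(response)  # also scrub anything sensitive the model echoes back
```

Because enforcement happens in the request path itself, the policy applies automatically to every call, rather than relying on each employee to remember it.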

4. Manage consent for AI training and automated decision-making

AI regulators have emphasized the importance of user consent in AI, focusing on two main threads:

  • Consent for use of personal data in AI training
  • Consent for automated decision-making

Building on the existing EU Text and Data Mining Directive (TDM), the EU AI Act mandates that data used for machine learning must be legally obtained and that data owners have not opted out of its use. While the specific details of the opt-out mechanism are still being defined, the TDM is already in effect, and its directives are currently enforceable.

In the US, the AI CONSENT Act, currently introduced but not yet passed, mandates that online platforms must “obtain consumers’ express informed consent before using their personal data to train artificial intelligence (AI) models.” While this bill is still pending and not yet a legal requirement, it underscores a growing trend in Washington D.C. towards legislation that prioritizes consent protection.

Against this backdrop, forward-thinking enterprises will implement solutions to manage and enforce consent preferences for users who don’t want their data used for AI training or who want to opt out of automated decision-making. Unlike legacy consent solutions that rely on manual methods, Transcend Consent Management provides automated data flow regulation and granular classification—enabling comprehensive consent management at the code level. We already power consent management for some of the world’s largest AI companies.
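As a simplified illustration of what code-level consent enforcement means (the consent store and purpose names below are hypothetical, not Transcend’s actual API), a training pipeline can filter out records whose owners have opted out of AI training before any data reaches the model:

```python
from dataclasses import dataclass

@dataclass
class UserRecord:
    user_id: str
    text: str

# Hypothetical consent store mapping user_id -> purposes the user opted out of.
OPT_OUTS = {
    "u1": {"ai_training"},
    "u2": set(),
}

def consented(records, purpose="ai_training"):
    """Enforce opt-outs at the code layer: keep only records whose owners
    have not opted out of the given processing purpose."""
    return [r for r in records if purpose not in OPT_OUTS.get(r.user_id, set())]
```

Gating the pipeline this way means an opt-out takes effect the moment it is recorded, with no manual review step in between.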

About Transcend

Ensure responsible governance to fuel your enterprise’s AI innovation.

Transcend’s next-gen data and privacy governance platform helps companies of all sizes, from Fortune 500s to high-growth startups, better govern their data—improving compliance, cutting costs, enhancing business innovation, and increasing customer trust.

Get in touch with the Transcend team today to learn more about the new EU AI Act and supporting Assessment Template.


