Navigating AI Governance: Building Robust Frameworks for AI Deployment

AI governance at a glance

  • AI governance is the process of building technical guardrails around how an organization deploys and engages with artificial intelligence (AI) tools.

  • Applied at the code level, effective AI governance helps organizations observe, audit, manage, and limit the data going into and out of AI systems. 

  • While upcoming AI regulation, like the EU AI Act, will impose more obligations on AI creators, several existing laws and frameworks (such as the NIST AI Risk Management Framework) already offer organizations guidance on how to shape their AI governance programs.

  • Ultimately, an effective AI governance system will rely on strict data hygiene, which will provide your organization with better auditability, monitoring and alerting, policy enforcement, and insights into its AI deployments.


What is AI governance?

AI governance is the process of building technical guardrails around how an organization deploys and engages with artificial intelligence (AI) tools. Applied at the code level, effective AI governance helps organizations observe, audit, manage, and limit the data going into and out of AI systems. 

With these technical safeguards in place, enterprises can better address concerns like: 

  • Employees sharing confidential information with large language models (LLMs)

  • AI chatbots sending inappropriate responses to users

  • Sensitive data being accidentally sent to LLMs via third-party applications

And these concerns are not unfounded. According to one study, sensitive data accounts for 25% of the information employees share with ChatGPT.

To reiterate, AI governance should be considered at the code level. While managing and controlling AI happens at multiple levels—comprehensive regulation, risk frameworks, company culture, and more—those pieces, important as they are, only address AI use in the abstract.

To be effective, AI governance needs to be implemented in the form of technical safeguards—directly operating on and interfacing with your company’s AI deployments.

Adopt AI with confidence

In a large enterprise, there can be hundreds of applications, each potentially connected to multiple LLMs. Maintaining direct integrations can lead to a complex and unwieldy system that’s challenging to secure.

Pathfinder provides a single technical control point for the data going into large language models and the data coming out.

Explore Transcend Pathfinder

 Why is AI governance important?

As AI becomes integral to modern business, companies face two distinct challenges: maintaining auditability and control over their AI engagements, and managing the inherent risks. 

Without the appropriate safeguards, enterprises using AI can open themselves up to risk in areas like confidentiality, data privacy, consumer safety, brand trustworthiness, intellectual property, and more.

AI governance is important because it allows organizations to better manage these risks, while also enabling:

  • Greater visibility into, and monitoring of, AI deployments

  • Technical safeguards that help to future-proof compliance

  • Auditability through evidence-based documentation

  • Analysis of AI use across the organization

  • Policy propagation and enforcement

  • A single pane of glass that supports better communication and collaboration between cross-functional teams engaged in building, managing, and deploying AI tools

  • More flexibility as the AI landscape continues to shift

Imagine if, when the internet was invented, we knew there would be privacy and data protection regulations. Imagine how different data collection, storage, and processing would be treated across every industry. 

Databases would treat personal information as special classes. We’d have built entire systems differently—with privacy and data protection controls baked in from the start. But without that foresight, enterprises and tech giants are now having to take a retroactive approach. 

Now, we have the opportunity to do things differently, because we know AI will be regulated. AI governance is important because it’s the means through which we’ll seize that opportunity. 

5 pillars of successful AI governance

To evaluate the efficacy of your AI governance program, your organization will need to define metrics for success. As with most concepts in this guide, what this looks like in practice will depend on your organization’s size, resources, and AI deployments. 

In other words, AI governance success will look a little different for every organization. 

But there are a few key things to consider when deciding what KPIs to use to measure the success of your governance program. 

Data hygiene

Are you able to easily monitor and control the data flows between your organization’s third-party applications and LLMs? When it comes to data protection and privacy, one of the biggest compliance challenges is simply understanding where data is being collected, where it lives, how it’s being used, and how long it’s been retained. 

You can’t govern data you can’t see, so implementing AI governance that maintains a high level of data hygiene is critical. 
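As a starting point, even a lightweight inventory of which systems send what data to which models can make those flows visible. The sketch below is a minimal, hypothetical Python example; the field names (system, llm_provider, data_categories, retention_days) are illustrative rather than any standard schema.

```python
# Minimal sketch of a data-inventory record for one LLM integration.
# Field names are illustrative, not a standard schema.
from dataclasses import dataclass

@dataclass
class LLMDataFlow:
    system: str                  # the internal app sending data
    llm_provider: str            # the model or vendor receiving it
    data_categories: list[str]   # e.g. "contact info", "health", "financial"
    purpose: str                 # why the data is shared
    retention_days: int          # how long prompts and outputs are retained

inventory = [
    LLMDataFlow(
        system="support-chat",
        llm_provider="gpt-4",
        data_categories=["contact info"],
        purpose="summarize support tickets",
        retention_days=30,
    ),
]

# "You can't govern data you can't see": report what flows where.
for flow in inventory:
    print(f"{flow.system} -> {flow.llm_provider}: {', '.join(flow.data_categories)}")
```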

Auditability

As your business deploys AI more broadly, being able to audit and track AI interactions is key. Your governance structures should be able to provide logs and usage histories—allowing your organization to identify and understand how AI tools are used. 

Auditability is especially important during incident response, when evaluating algorithmic bias, and for achieving greater transparency and explainability for your AI models. 
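In practice, this can start with a structured log entry for every LLM call. The sketch below shows one hypothetical way to do it in Python; the logger name and fields are placeholders, and a real deployment would likely also capture model versions, policy decisions, and user identifiers.

```python
# Minimal sketch of an audit record written for every LLM interaction.
# Logger name and fields are illustrative placeholders.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_audit")

def log_llm_interaction(app: str, model: str, prompt: str, response: str) -> str:
    record_id = str(uuid.uuid4())
    audit_log.info(json.dumps({
        "id": record_id,
        "timestamp": time.time(),
        "app": app,                   # which internal system made the call
        "model": model,               # which LLM answered it
        "prompt_chars": len(prompt),  # sizes only; raw text may live elsewhere
        "response_chars": len(response),
    }))
    return record_id

log_llm_interaction("support-chat", "gpt-4", "Summarize ticket #123", "The customer reports...")
```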

Monitoring and alerting

Proactive monitoring and alerts for certain data types or keywords can help companies catch function creep in AI models, as well as keep sensitive information, such as health records or financial data, out of LLMs. An effective AI governance tool will provide these capabilities, empowering businesses to take immediate action when issues arise.
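As a rough illustration, a governance layer might scan outbound prompts for sensitive patterns and raise an alert on a match. The Python sketch below uses a couple of made-up regexes and keywords; production systems would typically rely on proper data classifiers rather than a short pattern list.

```python
# Minimal sketch of keyword- and pattern-based alerting on outbound prompts.
# Patterns are illustrative; real systems use richer data classification.
import re

SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "health_keyword": re.compile(r"\b(diagnosis|prescription|icd-10)\b", re.IGNORECASE),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

hits = check_prompt("Patient diagnosis: type 2 diabetes, SSN 123-45-6789")
if hits:
    print(f"ALERT: sensitive data detected ({', '.join(hits)}), review before sending")
```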

Policy enforcement

Policy enforcement involves setting and enforcing rules on data sent to LLMs. Policies might dictate that certain types of data, like credit card numbers, never leave the company's network perimeter. AI governance software should be able to enforce these policies, blocking or altering requests as necessary to ensure data privacy and security.
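A minimal version of such a rule might look like the Python sketch below, which blocks or redacts anything resembling a card number before a request leaves the network. The pattern and function names are hypothetical, not any particular product’s API.

```python
# Minimal sketch of policy enforcement: block or redact card numbers
# before a prompt reaches an LLM. Names and patterns are illustrative.
import re

CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def enforce_policy(prompt: str, mode: str = "redact") -> str:
    """Apply a 'no card numbers to LLMs' policy to an outbound prompt."""
    if not CARD_PATTERN.search(prompt):
        return prompt
    if mode == "block":
        raise ValueError("Policy violation: card number detected in LLM request")
    return CARD_PATTERN.sub("[REDACTED CARD]", prompt)

print(enforce_policy("Customer card 4111 1111 1111 1111 was declined"))
# Prints: Customer card [REDACTED CARD] was declined
```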

Insights

While not a strict requirement for AI governance, implementing controls that give your business macro-level visibility can be quite helpful—giving your teams the insight they need to better understand, attribute, and control AI-related costs.
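For example, if each LLM call is logged with a team and token count, rolling that usage up into per-team costs is straightforward. The Python sketch below uses made-up teams, token counts, and a placeholder per-1K-token rate.

```python
# Minimal sketch of cost attribution from per-call usage records.
# Teams, token counts, and the rate are made-up examples.
from collections import defaultdict

usage_records = [
    {"team": "support", "model": "gpt-4", "tokens": 12_000},
    {"team": "support", "model": "gpt-4", "tokens": 8_500},
    {"team": "marketing", "model": "gpt-4", "tokens": 3_000},
]
PRICE_PER_1K_TOKENS = 0.03  # illustrative rate, not a real price list

costs = defaultdict(float)
for record in usage_records:
    costs[record["team"]] += record["tokens"] / 1000 * PRICE_PER_1K_TOKENS

for team, cost in sorted(costs.items()):
    print(f"{team}: ${cost:.2f}")
```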

Transcend's approach to AI governance

As the data governance provider for some of the world’s largest companies, we know that businesses want to move fast to deploy AI tools—but need to feel confident and secure in their usage of foundation models, and in particular with large language models (LLMs).

That's why, to help businesses deploy AI with confidence, we're building AI governance software that routes all “talk” (data in and out) between a business and LLMs through a proxy, or “man-in-the-middle.”
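The Python sketch below illustrates the general proxy pattern only: a single choke point where outbound prompts are checked and every interaction is logged. It is not how Pathfinder itself is implemented, and call_llm() is a stand-in for whatever model API your organization uses.

```python
# Minimal sketch of the proxy ("man-in-the-middle") pattern: one choke point
# that inspects data going to an LLM and data coming back. Illustrative only;
# call_llm() is a placeholder, not a real API.
import re

CARD_PATTERN = re.compile(r"\b\d(?:[ -]?\d){12,15}\b")

def call_llm(prompt: str) -> str:
    return "placeholder model response"  # stand-in for a real model call

def proxied_llm_call(app: str, prompt: str) -> str:
    safe_prompt = CARD_PATTERN.sub("[REDACTED]", prompt)  # outbound policy check
    response = call_llm(safe_prompt)                      # forward to the model
    print(f"audit: {app} sent {len(safe_prompt)} chars, received {len(response)}")
    return response                                       # inbound filtering could be added here
```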

Learn more

Who’s responsible for AI governance?

AI governance is a new frontier, so there’s no easy answer to the question of who’s responsible for implementing these systems. That said, if we use privacy, data protection, and security as our models—it’s clear that implementing effective AI governance requires a cross-functional approach.

At the highest level, responsibility for AI governance falls on a company’s CEO and Board of Directors. Though the CEO likely won’t implement governance measures themselves, defining AI governance as a priority and then delegating within the organization does fall on them. 

In terms of more functional responsibilities, implementing AI governance will likely fall to a tiger team made up of some combination of the following roles. 

General, Privacy, or Security Counsel

This individual would be responsible for tracking relevant legislation, synthesizing its requirements, and sharing that information with relevant stakeholders throughout the organization. Basically, counsel will make sure the company stays abreast of any new legal requirements—and that those requirements reach the people implementing code-level governance. 

Chief Information Security Officer (CISO)

The CISO would inform the vision and strategy for ensuring that proprietary information and technology are protected throughout the organization’s AI deployments. 

Chief Privacy Officer (CPO)

Though there is some overlap between the duties of a CISO and a CPO, in the context of AI governance, a CPO would likely put their focus on the movement and use of personal and sensitive personal information within AI systems. 

Their involvement would become especially important if your organization deploys AI tools that interact with consumers directly, processes consumer data, or employs automated decision making and profiling.

Engineering Lead

How your organization implements AI governance will likely come down to two options: building the necessary technical safeguards in-house or deploying third-party AI governance software. With either option, you’ll likely need people-power from your engineering teams. 

Building your governance structures in-house will obviously require greater engineering resources. But even if you opt for off-the-shelf AI governance software, you’ll still likely need cooperation from your engineering teams to complete the deployment. 

Existing AI governance frameworks

There are two primary AI frameworks in use today: the NIST AI Risk Management Framework and the OECD Framework for Classifying AI Systems. Though there is some overlap between the two, each offers a different perspective and can be applied quite differently. 

Something to keep in mind is that both frameworks, though helpful for thinking about risk in the context of AI, apply most cleanly to predictive AI (vs. generative AI). More than that, as AI is evolving at record pace, these frameworks are likely to see revisions in the not-so-distant future. 

That said, both are valuable as a foundational step for understanding AI risks and classifying different types of AI systems. 

NIST AI Risk Management Framework

The NIST AI Risk Management Framework (RMF) is a voluntary, non-sector-specific tool meant to help AI creators “mitigate risk, unlock opportunity, and raise the trustworthiness of their AI systems from design through to deployment.”

General enough to be used across different industries, the framework’s overarching goal is to minimize AI risk and support the responsible building and deployment of AI systems. 

The framework has two main components: planning/understanding and actionable guidance. 

In the planning and understanding section, the framework helps organizations analyze the risks and benefits of their AI systems, offers suggestions on how to define trustworthy AI systems, and outlines several characteristics of a trustworthy system. 

According to the framework, a trustworthy AI system is: 

  • Valid and reliable

  • Safe, secure, and resilient

  • Accountable and transparent

  • Explainable and interpretable

  • Privacy enhanced

  • Fair, with harmful biases managed

The second part of the framework covers actionable guidance, which revolves around four main components: governing, mapping, measuring, and managing. 

Governing is at the center of these components, calling for a culture of risk management that is “cultivated and present.” 

Mapping refers to recognizing context and identifying risks, while measuring involves assessing, analyzing, and tracking those risks. Managing builds on both, focusing on the need to prioritize and address risks based on their potential impact.

OECD Framework for Classifying AI Systems

The OECD Framework for Classifying AI Systems offers guidance on how to characterize AI tools, while seeking to establish a common understanding of AI systems. This framework considers AI systems from five different perspectives. 

  • People and planet: How AI systems impact the environment, society, and individuals

  • Economic context: How AI impacts the job market, employee productivity, and market competition

  • Data & input: What data goes into AI systems and how that data is governed

  • AI model: Whether an AI system’s technical makeup allows for explainability, robustness, and transparency

  • Task & function: What the AI system does

This framework is meant to support discussions around regulation and policy, as well as to guide AI creators towards building tools responsibly, while assessing any relevant risks. 

AI governance regulation

One of the biggest questions about AI right now concerns what future regulation will look like. Unlike with the internet, it’s been clear from the get-go that AI/ML technology will be regulated—with lawmakers across the globe sending early signals that AI is on their radar. 

Below we’ll cover the EU AI Act, China’s piecemeal moves to control AI, as well as New York’s AI bias law. As of now, there’s not enough movement on a US federal law, but we’ll be sure to update this post if and when that changes.

EU AI Act

The EU AI Act is one of the first comprehensive AI laws proposed by a global regulator. Passed as a draft law in July 2023, the AI Act would require greater transparency from AI creators like OpenAI, Google, and Meta, while also significantly limiting the use of facial recognition software. 

The draft version of the EU AI Act outlines four different levels of AI risk—unacceptable risk, high risk, limited risk, and minimal or no risk—and sets significant penalties for those found to be non-compliant. Under the draft, violators could face fines of up to 6% of global annual turnover or up to 30 million euros. 

Some commentators have noted that, because the EU AI Act is a horizontal piece of legislation (applying broadly, but not deeply, across a wide range of AI applications), technical standards will be key to eventual compliance. This approach will rely heavily on specific regulators, courts, and developers to determine what compliance looks like in practical terms.

The draft law’s broad, yet flexible nature will help to alleviate some of the issues involved with maintaining regulatory pace against a quickly evolving industry. 

However, it also poses certain risks in that regional regulators often take different approaches to practical compliance measures, and implementing the necessary technical safeguards will rely heavily on developers within the tech space. 

China

China, so far, has taken a more vertical approach—passing regulations that target specific algorithms and applications. 

One of these regulations focuses on recommendation algorithms, requiring that creators ensure these systems emphasize positive information and limit discrimination or excessive workloads for employees. Another targets deep synthesis algorithms (generative AI), with specific focus on deepfakes and on consent for the use of an individual’s personal information (PI) within these systems. 

Though these specific regulations do take a more vertical approach, one piece of China’s recent AI laws—the algorithm registry—may have more horizontal implications. Under both of the laws mentioned above, developers are required to register their algorithms. This centralized database will allow regulators to assess security risks, training data sources, as well as details on how these models are built and maintained.  

Another common theme between China’s AI regulations is the focus on national unity. Generative AI creators are required to create products that support the state’s power, disseminate positive information about the state, and maintain national unity and security. 

New York AI bias law

New York City recently passed a new law meant to mitigate AI bias in the hiring and employment process. The law requires that businesses employ third parties to run bias audits of their AI systems, with specific focus on race and gender. 

The law also imposes new transparency requirements—requiring that businesses tell job seekers when an automated employment decision tool is part of their hiring process. 

Unless these requirements are met and documented, and the results of the audit are made public on the company’s website, New York’s new law prohibits the use of automated decision-making tools. 

Current legislation already impacts AI

Another thing to consider is that, though headlines paint AI regulation as being very up in the air, there are laws already in place that govern certain aspects of compliance. 

The FTC has made it known that it is already pursuing enforcement actions under existing statutes. More than that, most existing state privacy laws also include language around the use of automated profiling and decision making, which often relies on AI tools. 

While it’s important that companies stay in the know on AI regulation coming down the pipeline, risks around AI deployment are already accruing—so savvy companies won’t wait to get their governance programs well in hand. 


About Transcend

Transcend is the governance layer for enterprise data—helping companies automate and future-proof their privacy compliance and implement robust AI governance across an entire tech stack.

Transcend Pathfinder gives your company the technical guardrails to adopt new AI technologies with confidence, while Transcend Data Mapping goes beyond observability to power your privacy program with smart governance suggestions.

Ensure nothing is tracked without user consent using Transcend Consent, automate data subject request workflows with Privacy Requests, and mitigate risk with smarter privacy Assessments.

