AI governance is the process of building technical guardrails around how an organization deploys and engages with artificial intelligence (AI) tools. Applied at the code level, effective AI governance helps organizations observe, audit, manage, and limit the data going into and out of AI systems.
With these technical safeguards in place, enterprises can better address concerns around data privacy, confidentiality, consumer safety, and intellectual property.
And these concerns are not unfounded. According to one study, sensitive data accounts for 25% of the information employees share with ChatGPT.
To reiterate, AI governance should be considered at the code level. While managing and controlling AI happens at multiple levels (comprehensive regulation, risk frameworks, company culture, and more), these pieces, while important, only address AI use in the abstract.
To be effective, AI governance needs to be implemented in the form of technical safeguards that directly operate on and interface with your company's AI deployments.
As AI becomes integral to modern business, companies face two distinct challenges: maintaining auditability and control over their AI engagements, and managing the inherent risks.
Without the appropriate safeguards, enterprises using AI can open themselves up to risk in areas like confidentiality, data privacy, consumer safety, brand trustworthiness, intellectual property, and more.
AI governance is important because it allows organizations to better manage these risks while still capturing the benefits AI offers.
Imagine if, when the internet was invented, we knew there would be privacy and data protection regulations. Imagine how different data collection, storage, and processing would be treated across every industry.
Databases would treat personal information as a special class of data. We'd have built entire systems differently, with privacy and data protection controls baked in from the start. But without that foresight, enterprises and tech giants are now having to take a retroactive approach.
Now, we have the opportunity to do things differently, because we know AI will be regulated. AI governance is important because it's the means through which we'll seize that opportunity.
To evaluate the efficacy of your AI governance program, your organization will need to define metrics for success. As with most concepts in this guide, what this looks like in practice will depend on your organization's size, resources, and AI deployments.
In other words, AI governance success will look a little different for every organization.
But there are a few key things to consider when deciding what KPIs to use to measure the success of your governance program.
Are you able to easily monitor and control the data flows between your organization's third-party applications and LLMs? When it comes to data protection and privacy, one of the biggest compliance challenges is simply understanding where data is being collected, where it lives, how it's being used, and how long it's been retained.
You can't govern data you can't see, so implementing AI governance that maintains a high level of data hygiene is critical.
As your business deploys AI more broadly, being able to audit and track AI interactions is key. Your governance structures should be able to provide logs and usage histories, allowing your organization to identify and understand how AI tools are used.
Auditability is especially important during incident response, when evaluating algorithmic bias, and for achieving greater transparency and explainability for your AI models.
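As a rough illustration, audit logging can be as simple as wrapping every LLM call so the prompt, response, and requesting user are recorded. The sketch below is a minimal example; the call_llm function and the JSONL log file are placeholders for whatever client and log store your organization actually uses.

```python
import json
import uuid
from datetime import datetime, timezone

def audited_completion(call_llm, prompt: str, user_id: str, model: str):
    """Call the model and write an audit record for the interaction.

    call_llm stands in for your real LLM client (an SDK call or an
    internal gateway); this wrapper only adds the audit trail.
    """
    request_id = str(uuid.uuid4())
    response = call_llm(prompt=prompt, model=model)

    audit_record = {
        "request_id": request_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user_id": user_id,
        "model": model,
        "prompt": prompt,
        "response": response,
    }
    # For illustration only: a production system would write to an
    # append-only log store or SIEM with access controls, not a local file.
    with open("llm_audit_log.jsonl", "a") as log:
        log.write(json.dumps(audit_record) + "\n")

    return response
```

Records like these are what make it possible to reconstruct, after the fact, who sent what to which model and when.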
Proactive monitoring and alerts for certain data types or keywords can help companies catch function creep in AI models, as well as keep sensitive information, such as health records or financial data, out of LLMs. An effective AI governance tool will provide these capabilities, empowering businesses to take immediate action when issues arise.
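To make this concrete, here is a minimal sketch of keyword- and pattern-based detection with an alert hook. The patterns and the alert_if_sensitive function are illustrative assumptions; real deployments would typically lean on a dedicated classifier or DLP engine rather than a handful of regexes.

```python
import re

# Illustrative patterns only: a real deployment would use a proper
# classifier or DLP engine rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "health_keyword": re.compile(r"\b(diagnosis|prescription|medical record)\b", re.IGNORECASE),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive data types detected in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]

def alert_if_sensitive(prompt: str, user_id: str) -> None:
    """Raise an alert (here, just a print) when a prompt contains sensitive data."""
    hits = scan_prompt(prompt)
    if hits:
        # In practice, route this to your alerting pipeline (Slack, SIEM, PagerDuty).
        print(f"ALERT: user {user_id} attempted to send {', '.join(hits)} to an LLM")
```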
Policy enforcement involves setting and enforcing rules on data sent to LLMs. Policies might dictate that certain types of data, like credit card numbers, never leave the company's network perimeter. AI governance software should be able to enforce these policies, blocking or altering requests as necessary to ensure data privacy and security.
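A minimal sketch of that kind of policy check might look like the following, assuming a single illustrative rule that payment card numbers must never leave the network. The enforce_policy function and its block/redact modes are hypothetical; a real governance layer would sit at the proxy or gateway level and support many such rules.

```python
import re

# One illustrative rule: payment card numbers must never reach an external LLM.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def enforce_policy(prompt: str, mode: str = "redact") -> tuple[str, bool]:
    """Apply an outbound policy before a prompt is forwarded to an LLM.

    Returns the (possibly altered) prompt and whether it may be sent at all.
    """
    if not CARD_PATTERN.search(prompt):
        return prompt, True
    if mode == "block":
        # Reject the request outright.
        return prompt, False
    # Alter the request: strip the card number but let the rest through.
    return CARD_PATTERN.sub("[REDACTED CARD NUMBER]", prompt), True
```

Whether to block outright or redact and forward is a policy decision; the important part is that the check happens before any data crosses the network boundary.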
While not a strict requirement for AI governance, implementing controls that give your business macro-level visibility can be quite helpful, giving your teams what they need to better understand, attribute, and control AI-related costs.
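As one small example of what that visibility can look like, the sketch below rolls token usage up into a per-team cost estimate. The usage records, model names, and per-1,000-token prices are made-up placeholders; in practice the events would come from your provider's usage reports or your gateway's request logs.

```python
from collections import defaultdict

# Placeholder usage events; real data would come from provider usage
# reports or gateway logs.
usage_events = [
    {"team": "support", "model": "model-a", "tokens": 1200},
    {"team": "support", "model": "model-a", "tokens": 800},
    {"team": "marketing", "model": "model-b", "tokens": 5000},
]

# Assumed prices per 1,000 tokens, for illustration only.
PRICE_PER_1K_TOKENS = {"model-a": 0.06, "model-b": 0.002}

costs_by_team = defaultdict(float)
for event in usage_events:
    price = PRICE_PER_1K_TOKENS[event["model"]]
    costs_by_team[event["team"]] += event["tokens"] / 1000 * price

for team, cost in sorted(costs_by_team.items()):
    print(f"{team}: ${cost:.4f}")
```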
AI governance is a new frontier, so there's no easy answer to the question of who's responsible for implementing these systems. That said, if we use privacy, data protection, and security as our models, it's clear that implementing effective AI governance requires a cross-functional approach.
At the highest level, the responsibility for AI governance falls on a company's CEO and Board of Directors. Though the CEO likely won't implement governance measures themselves, defining AI governance as a priority and then delegating within the organization does fall on them.
In terms of more functional responsibilities, implementing AI governance will likely fall to a tiger team made up of some combination of the following roles.
This individual would be responsible for tracking relevant legislation, synthesizing the requirements within, and sharing that information with relevant stakeholders throughout the organization. Basically, counsel will make sure the company stays abreast of any new legal requirements, and that those requirements are known by those implementing code-level governance at your company.
The CISO would inform the vision and strategy on how to ensure that proprietary information and technology is protected throughout the organization's AI deployments.
Though there is some overlap between the duties of a CISO and a CPO, in the context of AI governance, a CPO would likely focus on the movement and use of personal and sensitive personal information within AI systems.
Their involvement would become especially important if your organization deploys AI tools that interact with consumers directly, processes consumer data, or employs automated decision making and profiling.
How your organization implements AI governance will likely come down to two options: building the necessary technical safeguards in-house or deploying AI governance software. With either option, you'll likely need people-power from your engineering teams.
Building your governance structures in-house will obviously require greater engineering resources. But even if you opt to implement AI governance software, you'll still likely need cooperation from your engineering teams in order to complete the deployment.
There are two primary AI frameworks in wide use today: the NIST AI Risk Management Framework and the OECD Framework for Classifying AI Systems. Though there is some overlap between the two, each offers a different perspective and can be applied quite differently.
Something to keep in mind is that both frameworks, though helpful for thinking about risk in the context of AI, apply most effectively to predictive AI (vs. generative AI). More than that, as AI is evolving at a record pace, these frameworks are likely to see revisions in the not-so-distant future.
That said, both are valuable as a foundational step for understanding AI risks and classifying different types of AI systems.
The NIST AI Risk Management Framework (RMF) is a voluntary, non-sector-specific tool meant to help AI creators "mitigate risk, unlock opportunity, and raise the trustworthiness of their AI systems from design through to deployment."
General enough to be used across different industries, the framework's overarching goal is to minimize AI risk and support the responsible building and deployment of AI systems.
The framework has two main components: planning/understanding and actionable guidance.
In the planning and understanding section, the framework helps organizations analyze the risks and benefits of their AI systems, offers suggestions on how to define trustworthy AI systems, and outlines several characteristics of a trustworthy system.
According to the framework, a trustworthy AI system is valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed.
The second part of the framework covers actionable guidance, which revolves around four main components: governing, mapping, measuring, and managing.
Governing sits at the center of these components, calling for a culture of risk management that is "cultivated and present."
Mapping refers to recognizing context and identifying risks, while measuring involves assessing, analyzing, and tracking those risks. Managing applies to both mapping and measuring, focusing on the need to prioritize and address risks based on their potential impact.
The OECD Framework for Classifying AI Systems offers guidance on how to characterize AI tools, while seeking to establish a common understanding of AI systems. This framework considers AI systems from five different perspectives.
This framework is meant to support discussions around regulation and policy, as well as to guide AI creators towards building tools responsibly while assessing any relevant risks.
One of the biggest questions about AI right now concerns what future regulation will look like. Unlike with the internet, it's been clear from the get-go that AI/ML technology will be regulated, with lawmakers across the globe sending early signals that AI is on their radar.
Below we'll cover the EU AI Act, China's piecemeal moves to control AI, and New York's AI bias law. As of now, there's not enough movement on a US federal law, but we'll be sure to update this post if and when that changes.
The EU AI Act is one of the first comprehensive AI laws proposed by a global regulator. Passed as a draft law in July 2023, the AI Act would require greater transparency from AI creators like OpenAI, Google, and Meta, while also significantly limiting the use of facial recognition software.
The draft version of the EU AI Act outlines four different levels of AI risk (unacceptable risk, high risk, limited risk, and minimal or no risk) and sets out significant penalties for those found to be non-compliant. Under the draft, violators could face fines of up to 6% of global annual turnover or up to 30 million euros.
Some commentators have noted that, as the EU AI Act is a horizontal piece of legislation (shallowly applying to a broad scope of AI applications), technical standards will be key to eventual compliance. This approach will rely heavily on specific regulators, courts, and developers to determine what compliance will look like in practical terms.
The draft law's broad yet flexible nature will help to alleviate some of the issues involved with maintaining regulatory pace against a quickly evolving industry.
However, it also poses certain risks, in that regional regulators often take different approaches to practical compliance measures, and implementing the necessary technical safeguards will rely heavily on developers within the tech space.
China, so far, has taken a more vertical approach, passing regulations that target specific algorithms and applications.
One of these regulations focuses on recommendation algorithms, requiring that creators ensure these systems emphasize positive information and limit discrimination or excessive workloads for employees. Another targets deep synthesis algorithms (generative AI), and puts specific focus on deepfakes and consent for the use of an individual's personal information within these systems.
Though these specific regulations do take a more vertical approach, one piece of China's recent AI laws, the algorithm registry, may have more horizontal implications. Under both of the laws mentioned above, developers are required to register their algorithms. This centralized database will allow regulators to assess security risks, training data sources, and details on how these models are built and maintained.
Another common theme across China's AI regulations is the focus on national unity. Generative AI creators are required to create products that support the state's power, disseminate positive information about the state, and maintain national unity and security.
New York City recently passed a new law meant to mitigate AI bias in the hiring and employment process. This law requires that businesses employ third parties to run bias audits of their AI systems, with specific focus on race and gender.
The law also imposes new requirements around transparency, requiring that businesses tell job seekers that an automated employment decision tool is part of their employment process.
Unless these requirements are met and documented, and the results of the audit made public on a company's website, New York's new law prohibits the use of automated decision-making tools.
Another thing to consider is that, though headlines paint AI regulation as being very up in the air, there are laws already in place that govern certain aspects of compliance.
The FTC has made it known that they are already pursuing enforcement actions under existing statutes. More than that, most existing state privacy laws also include language around the use of automated profiling and decision making, which often relies on AI tools.
While it's important for companies to stay in the know on AI regulation coming down the pipeline, risks around AI deployment are already accruing, so savvy companies won't wait to get their governance programs well in hand.
Transcend is the governance layer for enterprise data, helping companies automate and future-proof their privacy compliance and implement robust AI governance across an entire tech stack.
Transcend Pathfinder gives your company the technical guardrails to adopt new AI technologies with confidence, while Transcend Data Mapping goes beyond observability to power your privacy program with smart governance suggestions.
Ensure nothing is tracked without user consent using Transcend Consent, automate data subject request workflows with Privacy Requests, and mitigate risk with smarter privacy Assessments.