Transcend's approach to AI governance

July 17, 2023 · 2 min read


AI governance—a new frontier

As the data governance provider for some of the world’s largest companies, we’ve been fielding a lot of questions about how to implement effective AI governance systems. In other words, how to establish the right safeguards to responsibly use new generative AI platforms and tools.

We know that businesses want to move fast, but they need to feel confident and secure in their use of foundation models, in particular large language models (LLMs). We also know that using AI tools without appropriate safeguards can expose businesses to risks around confidentiality, data privacy, consumer safety, brand trustworthiness, intellectual property, and more.

In fact, a recent Workday survey found:

Nearly half (48%) of respondents cited security and privacy concerns as the main barriers to AI implementation.

Axios also recently reported on a survey finding that 70% of employees using ChatGPT haven’t told their bosses, and another survey found that sensitive data makes up 25% of the company data employees share with ChatGPT.

And we’ve heard these same concerns echoed by our customers as they look to build confidence in AI usage across their companies, or to get started with AI at all.

  • “How can I monitor that my teams aren’t entering confidential information into ChatGPT?”
  • “How will I know if an AI chat feature is sending inappropriate information to an end-user?”
  • “How can I ensure PCI or HIPAA-covered data is not being sent into third-party LLMs?”

At their core, these concerns are about the data going into AI models and the data coming out.

An elegant and effective AI governance solution

To mitigate these risks, we believe effective AI governance requires that all “talk” (data in and out) between a business and LLMs occur through a proxy, or “man-in-the-middle.” This approach offers auditability and the ability to set and propagate governance policies.

From an architectural point of view, we believe that an AI middleware layer is elegant, easy to scale, and effective.

It’s also one of those things that’s best to adopt early on. As more and more products and internal tools are built with direct integrations into third-party models, switching to a middleware architecture becomes an increasingly complex migration project.
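To make the middleware idea concrete, here is a minimal sketch of what such a proxy might look like, assuming a simple pattern-based policy check and an in-memory audit log. The function names, regex patterns, and log format are illustrative assumptions for this post, not Transcend's implementation.

```python
# Illustrative sketch of an LLM governance proxy (hypothetical names and patterns).
import json
import re
import time

# Example patterns for data that should never reach a third-party LLM.
SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

AUDIT_LOG = []  # in practice this would be durable, append-only storage


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def proxy_request(user: str, prompt: str, forward_to_llm) -> str:
    """Record an audit entry, then forward the prompt or block it per policy."""
    violations = check_prompt(prompt)
    AUDIT_LOG.append({
        "ts": time.time(),
        "user": user,
        "violations": violations,
        "allowed": not violations,
    })
    if violations:
        return f"Blocked by policy: prompt appears to contain {', '.join(violations)}."
    return forward_to_llm(prompt)  # e.g. a call out to a third-party LLM API


if __name__ == "__main__":
    fake_llm = lambda p: f"LLM response to: {p!r}"  # stand-in for the upstream model
    print(proxy_request("alice", "Summarize our Q3 roadmap", fake_llm))
    print(proxy_request("bob", "Customer SSN is 123-45-6789, draft an email", fake_llm))
    print(json.dumps(AUDIT_LOG, indent=2))
```

Because every request passes through one chokepoint, the same place that enforces policy also produces the audit trail, which is what makes this architecture easy to reason about and to scale.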

Since the very early days of Transcend, we’ve been focused on building infrastructure that guarantees data will flow according to policy. Having a written data usage policy isn’t enough—technical safeguards must also be put in place.

We’ve helped incredible organizations, from the fastest-growing startups to Fortune 100 companies, navigate governance for personal data deletion and access, consent management for web, mobile, and connected devices, data discovery, content classification, and more.

In fact, we’re already powering privacy for Jasper, the industry-leading generative AI content platform for businesses. With this as the backdrop, it’s only natural that we extend our commitment to best-in-class governance for our customers as they explore new AI technologies.

Laying out an AI governance roadmap

Over the next few weeks, we'll lay out our roadmap for responsible AI governance. You’ll see that our approach builds on the same foundations that underpin every Transcend product to date, including:

  • Technical safeguards: We believe the best way to manage AI risk is to layer technical controls at the data infrastructure level. 
  • Developer tools that give risk teams superpowers: Much like privacy, AI governance is ultimately a data problem. It’s only truly solvable by engineers, but since it’s non-core work, we build systems that engineering can set and forget. After an easy installation, risk teams are empowered to work autonomously, without going through engineering for every task.
  • Observability and reporting: We believe it’s essential that businesses have clear observability of what is happening with AI deployments across their business. Furthermore, we know that great governance products provide great reporting to risk leaders.
  • Flexibility: Rules and regulations are interpreted and applied differently by different companies. At Transcend, we believe that building in flexibility, while also providing off-the-shelf policy packs, is the best way to support a business’s unique needs at scale (see the sketch after this list).
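As a hypothetical illustration of that flexibility, an off-the-shelf policy pack could be expressed as plain data that a risk team customizes without engineering changes. The pack name, fields, and helper below are assumptions for this post, not Transcend's actual configuration format.

```python
# Hypothetical "policy pack" expressed as data, plus a per-company override helper.
HIPAA_STARTER_PACK = {
    "name": "hipaa-starter",
    "block_categories": ["medical_record_number", "diagnosis_code", "us_ssn"],
    "redact_instead_of_block": False,
    "notify": ["privacy-team@example.com"],
}


def customize(pack: dict, **overrides) -> dict:
    """Return a copy of a base policy pack with company-specific overrides applied."""
    return {**pack, **overrides}


# One company might prefer redaction over blocking for internal tools:
company_policy = customize(HIPAA_STARTER_PACK, redact_instead_of_block=True)
print(company_policy)
```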

We believe AI has the potential to radically change how work is done, across the board. We also know that to stay ahead in deeply competitive markets, companies need to move fast—integrating AI tools not only into their day-to-day workflows, but their broader market strategies.

Infrastructure that provides clear guardrails is the key to unlocking this speed, offering a critical foundation for responsible and sustainable AI use throughout an entire organization. 

We’re committed to helping businesses navigate the complexities of this new frontier, and that’s why we’re building a robust model for AI governance. If you’d like to stay up to date on Transcend’s AI governance roadmap, please sign up here.



