By Ben Brook
July 17, 2023 · 2 min read
As the data governance provider for some of the world’s largest companies, we’ve been fielding a lot of questions about how to implement effective AI governance systems. In other words, how to establish the right safeguards to responsibly use new generative AI platforms and tools.
We know that businesses want to move fast, but they need to feel confident and secure in their use of foundation models, particularly large language models (LLMs). We also know that using AI tools without the appropriate safeguards can expose businesses to risks around confidentiality, data privacy, consumer safety, brand trustworthiness, intellectual property, and more.
In fact, a recent Workday survey found that nearly half (48%) of respondents cited security and privacy concerns as the main barriers to AI implementation.
Axios also recently reported on a survey finding that 70% of employees using ChatGPT haven't told their bosses, while another survey found that sensitive data makes up 25% of the company information employees share with ChatGPT.
And we've heard these same concerns echoed by our customers as they look to build confidence in AI usage across their companies, or to get started with AI at all.
At their core, these concerns are about the data going into AI models, and the data coming out.
To mitigate these risks, we believe effective AI governance requires that all “talk” (data in and out) between a business and LLMs should occur through a proxy, or “man-in-the-middle.” This approach offers auditability and the ability to set and proliferate governance policies.
From an architectural point of view, we believe that an AI middleware layer is elegant, easy to scale, and effective.
It’s also one of those things that’s best to adopt early on. As more and more products and internal tools are built with direct integrations into third-party models, switching to a middleware architecture becomes an increasingly complex migration project.
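To make the idea concrete, here is a minimal sketch of what such a middleware layer could look like. This is an illustration, not Transcend's implementation: the upstream endpoint, the redaction policy, and the response shape are all hypothetical stand-ins.

```python
import json
import logging
import re
from datetime import datetime, timezone

import requests

# Hypothetical upstream LLM endpoint; swap in your provider's chat API.
LLM_API_URL = "https://api.example-llm.com/v1/chat"

AUDIT_LOG = logging.getLogger("ai-gateway-audit")
logging.basicConfig(level=logging.INFO)

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")


def apply_policy(prompt: str) -> str:
    """Example governance policy: redact email addresses before the
    prompt ever leaves the organization. A real policy engine would
    cover many more data categories."""
    return EMAIL_PATTERN.sub("[REDACTED_EMAIL]", prompt)


def proxy_chat(prompt: str, user_id: str) -> str:
    """Forward a prompt to the LLM through the governance proxy,
    recording an audit trail of who sent what, and when."""
    sanitized = apply_policy(prompt)

    AUDIT_LOG.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user_id,
        "prompt": sanitized,
    }))

    response = requests.post(LLM_API_URL, json={"prompt": sanitized}, timeout=30)
    response.raise_for_status()
    return response.json()["completion"]
```

The payoff of this pattern is that every internal tool calls the proxy instead of hitting the model directly, giving the organization a single place to audit traffic and roll out policy changes.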
Since the very early days of Transcend, we’ve been focused on building infrastructure that guarantees data will flow according to policy. Having a written data usage policy isn’t enough—technical safeguards must also be put in place.
We’ve helped incredible organizations, from the fastest-growing startups to Fortune 100 companies, navigate governance for personal data deletion and access, consent management for web, mobile, and connected devices, data discovery, content classification, and more.
In fact, we're already powering privacy for Jasper, the industry-leading generative AI content platform for businesses. With this as the backdrop, it's only natural that we extend our commitment to best-in-class governance for our customers as they explore using new AI technologies.
Over the next few weeks, we'll lay out our roadmap for responsible AI governance. And you'll see that our approach follows the same foundations that are key to every Transcend product to date.
We believe AI has the potential to radically change how work is done, across the board. We also know that to stay ahead in deeply competitive markets, companies need to move fast—integrating AI tools not only into their day-to-day workflows, but their broader market strategies.
Infrastructure that provides clear guardrails is the key to unlocking this speed, offering a critical foundation for responsible and sustainable AI use throughout an entire organization.
We’re committed to helping businesses navigate the complexities of this new frontier, and that’s why we’re building a robust model for AI governance. If you’d like to stay up to date on Transcend’s AI governance roadmap, please sign up here.