How to Minimize Data Mapping Resource Consumption Costs

By Brandon Chen

Senior Product Marketing Manager

August 22, 2024 • 3 min read


Whether you’re generating a ROPA report, conducting a data protection impact assessment (DPIA), or fulfilling data subject access requests (DSRs), nearly every privacy workflow depends on a clear, accurate understanding of where personal data lives across your data ecosystem.

With data visibility’s critical importance in mind, many privacy vendors offer data mapping features or tools. But not all data mapping solutions are created equal. Legacy solutions rely on “point-in-time” scans that can’t keep up with the steady pace at which new data systems and tools are added. Next-generation data mapping tools, like Transcend Silo Discovery, Structured Discovery, and Unstructured Discovery, instead use a combination of metadata and data sampling to automatically discover and classify data at scale.

The downsides of traditional data mapping approaches

Traditional privacy solutions, focused on a narrow scope of business systems, scan all of the data in a given column to determine whether it contains any personal data points.

For example, you might easily determine that a field named “social_security_number” holds personal data. In other cases, where a system uses unique language or undecipherable shorthand, the column or field name alone isn’t enough to determine whether personal data exists.

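To see why field names alone only get you so far, here’s a minimal, hypothetical sketch of name-based detection in Python. The hint list and column names are made up for illustration and aren’t part of any real product.

```python
# Hypothetical name-based detection: it flags obvious fields, but a cryptic
# column name tells you nothing about what the column actually holds.
PII_NAME_HINTS = ["ssn", "social_security", "email", "phone", "dob", "address"]

def looks_like_pii(column_name: str) -> bool:
    name = column_name.lower()
    return any(hint in name for hint in PII_NAME_HINTS)

print(looks_like_pii("social_security_number"))  # True: clear from the name alone
print(looks_like_pii("cust_attr_17"))            # False: yet it may still hold personal data
```
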
The approach of using deep scanning to understand what’s in a given column or field results in two main downsides:

  1. Time inefficiencies: When there’s a large volume of data to scan, queries and API calls take a long time to return results. This is particularly true for systems, whether SaaS or internal, that have months or years of history saved.

    Consider, for example, a system storing a record of users’ actions on your website. Even if a web page receives only 2,000 visitors a month, with an average of 3 links clicked per visit, that’s 6,000 new records a month from click tracking alone. A few years of history puts that single page into the hundreds of thousands of rows, so scanning a system focused purely on tracking user activity means querying all of them, which leads to longer processing times (see the back-of-the-envelope sketch after this list).
  2. Cost inefficiencies: Most cloud warehouse and database vendors, including those running on AWS, Azure, and GCP, use a consumption-based billing model, where you’re billed for “compute”, or, more colloquially, the time spent processing your data.

    So if you’re extracting data through a 3rd party vendor’s API, take Salesforce for example, you’ll quickly run through your allotted API limits when attempting to pull all available data, and raising those limits often comes at an added charge. Even worse, if you’re pulling data from an internal cloud database or data warehouse such as Snowflake, you’ll see direct billing increases.

    Naturally, the more data you’re working with, the longer the cloud server takes to process your request, and the higher the bill.
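
To put rough numbers on both points, here’s a back-of-the-envelope sketch built on the click-tracking example above. The visitor and click figures come from that example; the history length and page count are assumptions added purely for illustration.

```python
# Back-of-the-envelope sketch of how activity-tracking rows pile up.
# Visitor and click figures come from the example above; history length and
# page count are assumed for illustration.
VISITORS_PER_MONTH = 2_000
CLICKS_PER_VISITOR = 3
MONTHS_OF_HISTORY = 36   # assume three years of retained history
PAGES_TRACKED = 50       # assume 50 pages with click tracking

rows_per_page = VISITORS_PER_MONTH * CLICKS_PER_VISITOR * MONTHS_OF_HISTORY
total_rows = rows_per_page * PAGES_TRACKED

print(f"{rows_per_page:,} rows of history for a single page")   # 216,000
print(f"{total_rows:,} rows across all tracked pages")          # 10,800,000

# Under consumption-based billing, a full scan pays to process every one of
# those rows, every time it runs.
```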

How Transcend optimizes data mapping

Transcend automates the process of finding sensitive and personal data, saving teams time and money with our approach. In fact, in 2023, Transcend customers used Structured Discovery to find over 8.01 million datapoints across their business systems, contributing to over 1.33 million hours saved by automating privacy operations workflows. That’s a lot of data to classify!

Transcend Structured Discovery reduces the load on your systems and accelerates the automated discovery and classification of personal and sensitive data across your internal systems and 3rd party SaaS tools in two key ways.

  • Schema Discovery: If you’re working with clear, explicit field names, schema information alone can tell you where personal or sensitive data lives. Schema information can be extracted at a lower cost and with fewer resources, even compared to data sampling.
  • Datapoint Classification: Transcend automates sampling and classifying data from each column, using that additional context to make a judgment on whether personal or sensitive data is present. Data sampling takes a representative portion of your available data and uses it to confirm whether personal data exists within that field. Limiting the amount of data you pull also limits the time your systems spend processing it. (A rough sketch of both patterns follows below.)

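To make the two modes concrete, here’s a rough sketch of what schema-only discovery versus bounded data sampling can look like against a warehouse that exposes an information_schema (as Snowflake and Postgres do). It illustrates the general pattern only, not Transcend’s implementation; the cursor, table name, and sample size are all assumptions.

```python
def discover_schema(cursor, table: str):
    """Schema-level pass: read column names and types from metadata only.
    No table data is scanned, so the compute cost stays minimal."""
    cursor.execute(
        "SELECT column_name, data_type "
        "FROM information_schema.columns "
        "WHERE table_name = %s",
        (table,),
    )
    return cursor.fetchall()

def sample_column(cursor, table: str, column: str, n: int = 100):
    """Datapoint-level pass: pull a small, bounded sample of values so a
    classifier has real content to judge without scanning the full column.
    Table and column names should come from the schema pass above, since
    identifiers can't be passed as query parameters."""
    cursor.execute(f"SELECT {column} FROM {table} LIMIT {n}")
    return [row[0] for row in cursor.fetchall()]
```

The design point is that the first query touches only metadata, while the second caps the rows read at a fixed sample size, so neither cost grows with the table’s full history.
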
Users can further optimize compute efficiency by configuring when either option runs. We recommend following in the footsteps of most of our customers: start with Schema Discovery, then enable Datapoint Classification if the situation, such as an increased volume of unstructured data, calls for it.

One Transcend user, working at a delivery app with over 20M users, shares his experience:

“30 days after purchasing Transcend we had data mapping + classification running on our 1500+ systems (homegrown and SaaS tools), had a full ROPA ready to go, and had automated our DSR flows. That's insane. Now we're implementing them for Consent and Do Not Sell management and are fully on board with Transcend for the long run.”

By combining metadata from Schema Discovery with data sampling from Datapoint Classification, you can dramatically reduce the time spent finding personal data across your business systems and save money through reduced resource consumption on those systems. Those savings free your team up to deliver on other privacy projects that improve compliance, reduce risk, and support revenue-driving initiatives across other business units.

Reduce data mapping costs with Transcend

If you’re ready to learn how Transcend's next-gen privacy platform helps reduce risk, improve operational efficiency, and expand compliant data use, reach out to our team! We’re looking forward to meeting you.

