Pre-Labeled Datasets Are Key to Building High-Quality ML Models

How do you get the data you need to build the models while also respecting data privacy? The answer is pre-labeled data sets.
Dec 1st, 2022 7:00am
Image via Pixabay.

As artificial intelligence scales, data privacy becomes even more important in developing machine learning models. That said, organizations must also be able to aggregate data to create great AI. So the biggest question remains: How do you get the data you need to build the models while also respecting data privacy? 

Personally Identifiable Information and Data Labeling

Personally identifiable information (PII) is any data that can potentially be used to identify someone, such as their name and phone number or their email address and social security number. With so much of this information being collected every time you use the internet, companies must be able to secure it from bad actors while still gathering insights from the data to scale their AI models.

The key to striking the right balance between secure and scalable data is leveraging pre-labeled datasets. Data providers tend to have a wide selection of pre-labeled datasets that can be used to train ML models. They often use specific opt-in methodologies to collect data, which data specialists then clean and annotate, ensuring that it is compliant and scalable for immediate use.

Labeling datasets involves identifying raw data and adding one or more labels that specify its context for machine learning models. Labels allow data scientists to isolate different variables and identify the best predictors for training ML models. Training ML models is very much a “garbage in, garbage out” scenario: if you want a high-quality model, your training data must be high quality, diverse and accurate.
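To make this concrete, here is a minimal sketch of what a pre-labeled record might look like. The schema, field names and entity labels are hypothetical, not from any specific provider; the point is that each label ties a span of raw data to its context.

```python
# Hypothetical schema for a pre-labeled record: raw text paired with
# labels that give an ML model the context of each span.
labeled_record = {
    "text": "Contact Marie Dubois at marie@example.com",
    "labels": [
        {"span": (8, 20), "label": "PERSON_NAME"},
        {"span": (24, 41), "label": "EMAIL"},
    ],
}

def extract_entities(record):
    """Pull out the labeled substrings so a data scientist can
    inspect training examples or filter them by entity type."""
    return {
        lbl["label"]: record["text"][lbl["span"][0]:lbl["span"][1]]
        for lbl in record["labels"]
    }
```

With labels structured this way, isolating a single variable (say, every `EMAIL` span across a corpus) becomes a simple filter rather than a manual search.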

Leveraging Pre-Labeled Datasets

To build good ML models, data scientists need to collect a lot of data, and keep collecting it, to optimize their models. The more data you can feed your models, the better they will be. However, with this massive collection of data comes the need to protect privacy. Pre-labeled datasets offer data that is accurate, diverse and ready to be integrated, without the personal information attached to it.

Take MediaInterface, a software company that provides a language technology solution to primarily health-care-related institutions in Germany and other parts of Europe, as an example. The company used pre-labeled datasets to fill in major translation gaps in its service offerings. MediaInterface needed to create a robust French vocabulary, which meant acquiring data such as French names and places that are often referenced in patient health information.

The team worked with a data provider to bridge the data gaps while meeting the European General Data Protection Regulation’s strict data privacy requirements. Plugging in robust pre-labeled datasets helped to fill in the vocabulary gaps and, ultimately, remove the need to collect and clean new data.  

De-Identification Using Pre-Labeled Data

Labeled data can be used to train ML models to detect and subsequently remove PII from unlabeled datasets. Pre-labeled datasets can remove PII and put the data in context so that organizations can still leverage insights without infringing on privacy rights. For example, AI in health care requires data to be HIPAA compliant, which makes aggregating and annotating data an incredibly difficult and time-consuming process. Another strategy is data masking, which obscures the true meaning of the data by applying rules you provide.
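As a rough illustration of rule-based data masking, the sketch below replaces matched PII patterns with placeholder tokens. The rule set and placeholder names are invented for illustration; a production system would use a much richer set of rules or a trained model.

```python
import re

# Hypothetical rule set: each rule maps a regex for a PII pattern
# to a placeholder token, obscuring the value while keeping context.
MASKING_RULES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSN format
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
]

def mask_pii(text):
    """Apply each masking rule in order, replacing matches with a
    placeholder so downstream consumers never see the raw PII."""
    for pattern, token in MASKING_RULES:
        text = pattern.sub(token, text)
    return text
```

The masked text stays useful for training, because the placeholder preserves the fact that, say, an email address appeared at that position, without exposing whose it was.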

At Appen, we’ve found that the solution to efficiently cleaning PII from data is to train ML models to recognize and remove it. Companies working with personally identifiable information or protected health information can use machine learning in the data annotation process to meet data security requirements, save time and accelerate the ROI of their AI initiatives.

Pre-labeling data provides an initial “best estimate” hypothesis for PII before the team starts on the task. These datasets can then be used to train ML models to remove personally identifiable information prior to the start of the data annotation process. For example, pixel masks applied to autonomous vehicle (AV) images can automatically detect license plate numbers and street names with high accuracy, allowing data scientists to quickly blur or remove the information.

With increased data privacy regulation coming swiftly, it’s important that businesses are in compliance. By leveraging pre-labeled datasets, organizations can still have access to high-quality data and insights while respecting data privacy mandates. 
