Verify whether your data was synthesized correctly.

The QA report is available for synthetic data jobs that have already completed.
This guide takes about 15 minutes to read.
Navigating the QA report
The card at the top provides general information about the completed synthetic data job.
| Field | Description |
| --- | --- |
| Job type | Shows whether the job is of type Ad hoc or Catalog. |
| Job started | Shows when the job started. |
| Overall accuracy | Shows the overall accuracy of the generated synthetic data. |

The card below shows a QA report for each table in the synthetic dataset.

Accuracy tests
The accuracy percentage shows how accurately the synthetic data represents the original data.
| Metric | Description |
| --- | --- |
| Univariate | The overall accuracy of the table’s univariate distributions. |
| Bivariate | The overall accuracy of the table’s bivariate distributions. |
| Coherence | (Only for linked tables) The temporal coherence of time-series data between the original and synthetic data, as well as the preservation of the average sequence length, that is, the average number of linked-table records related to a single subject-table record. A minimal sketch of the sequence-length check follows this table. |
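To make the sequence-length part of the coherence check concrete, here is a minimal pandas sketch. It is not MOSTLY AI’s implementation; the `subject_id` key and the toy tables are assumptions for illustration only.

```python
import pandas as pd

def avg_sequence_length(linked: pd.DataFrame, key: str) -> float:
    """Average number of linked-table records per subject-table record."""
    return linked.groupby(key).size().mean()

# Hypothetical original and synthetic linked tables, keyed by a
# subject_id foreign key (an assumed column name).
original = pd.DataFrame({"subject_id": [1, 1, 1, 2, 2, 3]})
synthetic = pd.DataFrame({"subject_id": [1, 1, 2, 2, 2, 3]})

# If the two averages diverge, the synthesizer has distorted how many
# records a typical subject has. The temporal coherence of the values
# themselves is assessed separately in the report.
print(avg_sequence_length(original, "subject_id"))   # 2.0
print(avg_sequence_length(synthetic, "subject_id"))  # 2.0
```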
Privacy tests
MOSTLY AI offers empirical evidence that the privacy of the original data subjects has been preserved.
It performs three privacy tests to assert that the synthetic data is close, but not too close, to the original data, so that the privacy of your data subjects is preserved. A code sketch of all three checks follows the table:
| Test | Description |
| --- | --- |
| Nearest neighbor distance ratio (NNDR) | The ratio of the first and the fifth nearest-neighbor distances of synthetic data points, measured against the target dataset. It allows you to compare inliers and outliers in the population on an equal basis. Synthetic data points with an NNDR close to 0 are near target points in sparse data regions, i.e., outlier target data points. The test passes if the synthetic data points are not more similar to outliers in the target than the target data points themselves are. |
| Identical match share | A measure of exact matches between the synthetic and the original data. This metric counts the number of identical data points (copies) within the target data and compares it with the number of copies between the target dataset and the synthetic dataset. The test passes if the number of copies in the synthetic dataset is less than (or not significantly more than) the number within the target data itself. |
| Distance to closest record (DCR) | A measure of the distances between synthetic records and their closest original records. For each synthetic data point, this metric finds the closest data point in the target dataset and compares the resulting distribution of closest distances to the distribution observed within the target data. The test passes if, for the synthetic distribution of closest distances, low quantiles are not statistically below the target data quantiles. A threshold is defined for each quantile by a confidence interval generated via bootstrapping on the difference between the target and synthetic distributions. |

Figure 1. Example target data with multiple identical data points
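The sketch below illustrates all three checks with plain NumPy, pandas, and scikit-learn. It is not MOSTLY AI’s implementation: random Gaussian arrays stand in for the target and synthetic tables, the pass criteria are simplified to direct comparisons, and the real DCR test derives its thresholds via bootstrapping.

```python
import numpy as np
import pandas as pd
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
target = rng.normal(size=(1000, 4))     # stands in for the original data
synthetic = rng.normal(size=(1000, 4))  # stands in for the synthetic data

# Nearest neighbor distance ratio: distance to the 1st nearest target
# record divided by the distance to the 5th, for every synthetic point.
# Values near 0 flag synthetic points that hug isolated target records.
nn = NearestNeighbors(n_neighbors=5).fit(target)
dist, _ = nn.kneighbors(synthetic)
nndr = dist[:, 0] / dist[:, 4]

# Identical match share: fraction of exact copies within the target,
# versus the fraction of synthetic rows that are copies of target rows.
t_rows = pd.DataFrame(target).round(2).apply(tuple, axis=1)
s_rows = pd.DataFrame(synthetic).round(2).apply(tuple, axis=1)
ims_within = t_rows.duplicated().mean()
ims_cross = s_rows.isin(t_rows).mean()
print("IMS passes:", ims_cross <= ims_within)

# Distance to closest record: compare low quantiles of the synthetic
# DCR distribution with the ones observed within the target itself.
dcr_synthetic = dist[:, 0]
dist_t, _ = NearestNeighbors(n_neighbors=2).fit(target).kneighbors(target)
dcr_target = dist_t[:, 1]  # index 0 is each point's zero-distance self-match
for q in (0.01, 0.05, 0.10):
    print(q, np.quantile(dcr_synthetic, q) >= np.quantile(dcr_target, q))
```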
Dataset statistics
Learn how big the training and synthetic tables are.
*Context columns* refers to the number of columns in the referenced context table.
Model and Data QA report tabs
The Model QA report and the Data QA report tabs provide charts about the AI training model and the generated synthetic data, respectively.
| Section | Description |
| --- | --- |
| Correlations | This tab shows three correlation matrices. They provide an easy way to assess whether the synthetic dataset retained the correlation structure of the original dataset. Both the X- and Y-axis refer to the columns in your subject table, and each cell in the matrix correlates a variable pair: the more strongly two variables are correlated, the darker the cell. The third matrix shows the difference between the target and the synthetic data. |
| Univariate distributions | Univariate distributions describe the probability of a variable taking a particular value. This section of the QA report contains four types of plots: categorical, continuous, and datetime, plus a Sequence Length plot if you synthesized a linked table. For each variable, there is a distribution plot and a binned plot. These show the distributions of the original and the synthetic dataset in green and black, respectively. The percentage next to the title shows how accurately the original column is represented by the synthetic column. You may also see categories that are not present in the original dataset. |
| Bivariate distributions | Bivariate distributions help you understand the conditional relationship between the contents of two columns in your original dataset and how it changed in the synthetic dataset. A bivariate distribution might show, for instance, that the age group of forty years and older is most likely to be married, that anyone below thirty is most likely to have never been married, and that this is the same in the synthetic dataset. In a QA report for a linked table, you can also find plots that pair the linked table’s columns with the context table’s columns. As in the univariate plots, you may see categories that are not present in the original dataset. |
| Accuracy | The accuracy of synthetic data can be assessed by measuring statistical distances between the synthetic and the original data. The metric of choice for the statistical distance is the total variation distance (TVD), which is calculated for the discretized empirical distributions. Subtracting the TVD from 100% then yields the reported accuracy measure. These accuracies are calculated for all univariate and all bivariate distributions; the latter is done for all pair-wise column combinations within the target data, as well as between the context and the target. For sequential data, an additional coherence metric assesses the bivariate accuracy between the value of a column and its succeeding value. All of these individual statistics are then averaged to provide a single informative quantitative measure. The full list of calculated accuracies is provided as a separate downloadable file. A minimal sketch of the TVD calculation follows this table. |
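To illustrate the TVD-based accuracy, here is a minimal sketch. It assumes the columns are already discretized into categories (MOSTLY AI’s exact binning is not reproduced here), and the toy tables are invented for the example.

```python
import numpy as np
import pandas as pd
from itertools import combinations

def accuracy(target: pd.Series, synthetic: pd.Series) -> float:
    """1 - total variation distance of two discretized empirical distributions."""
    p = target.value_counts(normalize=True)
    q = synthetic.value_counts(normalize=True)
    cats = p.index.union(q.index)
    tvd = 0.5 * (p.reindex(cats, fill_value=0) - q.reindex(cats, fill_value=0)).abs().sum()
    return 1.0 - tvd

# Toy, pre-discretized tables; continuous columns would be binned first.
t = pd.DataFrame({"age": ["30-40"] * 6 + ["40-50"] * 4,
                  "married": ["yes"] * 5 + ["no"] * 5})
s = pd.DataFrame({"age": ["30-40"] * 5 + ["40-50"] * 5,
                  "married": ["yes"] * 6 + ["no"] * 4})

# Univariate accuracy per column, plus bivariate accuracy for every
# pair-wise column combination (each joint value pair acts as one symbol).
uni = [accuracy(t[c], s[c]) for c in t.columns]
bi = [accuracy(t[a] + "|" + t[b], s[a] + "|" + s[b])
      for a, b in combinations(t.columns, 2)]

# Averaging all individual accuracies yields the single reported figure.
print(np.mean(uni + bi))
```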
Remediating privacy and accuracy issues
Identifying the source of data quality or privacy issues can be very difficult.
Below is a list of common issues.
Accuracy
- Bad univariate fit
- High number of N/A values
- High number of rare category labels
- Wrong encoding type
- Incorrect sequence length
- Batch size set too high on a linked table
- High number of business rule violations
- Training goal set to Speed
Privacy
- A high number of `NaN` values can make the Identical match share test fail.
- Privacy tests can produce false positives because of sampling and the stochastic nature of the tests.
- If accuracy is good, it makes sense to repeat the synthesization and run the privacy tests again.
Spotting potential issues
Numerical encoding of categorical values
With MOSTLY AI’s Auto-detect encoding type setting, values such as ZIP codes can be indistinguishable from continuous values. This results in the generation of invalid ZIP codes and business rules that are difficult to learn.
The solution is to change the column’s encoding type to Categorical, as the sketch below illustrates.
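As a plain-pandas illustration (not a MOSTLY AI API call; the column name and values are invented), the sketch below shows how a ZIP code column loses information when treated numerically, and how to cast it to a categorical type before synthesis:

```python
import pandas as pd

# Hypothetical ZIP codes read as numbers: the leading zero of "02115"
# is lost, and the values look continuous, so a model may interpolate
# between them and generate ZIP codes that do not exist.
df = pd.DataFrame({"zip": [2115, 10001, 94103]})
print(df["zip"].dtype)  # int64

# Cast to a zero-padded string and then to a categorical dtype so the
# synthesizer treats ZIP codes as labels rather than quantities.
df["zip"] = df["zip"].astype(str).str.zfill(5).astype("category")
print(df["zip"].tolist())  # ['02115', '10001', '94103']
```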
