Seeded generation

With seeded generation, you can create a new synthetic dataset that is conditioned on a seed dataset you provide. This is also known as conditional generation.

The seed dataset acts as context for the generation of the new synthetic dataset. With a seed dataset, you can define as context the columns, values, and their distributions that you want to use to conditionally generate the rest of the columns in the synthetic dataset.

The new synthetic dataset will still be statistically representative, but within the context of the seed dataset. The privacy of the new synthetic dataset then depends on the privacy of the provided seed.

A synthetic dataset generated with seed is partially synthetic.

  • The columns in the seed dataset that you provide as context remain the same in the generated synthetic dataset.
  • The rest of the columns (non-seed) are generated synthetically and are conditioned on the values in the seed dataset.
  • If you use Personally Identifiable Information (PII) in the seed dataset, that information will be available "as-is" in the generated synthetic dataset. In such cases, treat the synthetic data with the same care you apply to personal data.
  • The seed dataset is deleted immediately after the synthetic dataset is generated.
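As a toy illustration of the points above (plain pandas, not the product API), a partially synthetic result keeps the seed columns verbatim while the remaining columns are filled in:

```python
import pandas as pd

# Toy illustration (not the product API): seed columns pass through
# unchanged, while non-seed columns are conditionally generated.
seed = pd.DataFrame({'sex': ['Female', 'Male', 'Female']})
partially_synthetic = seed.copy()
# placeholder values standing in for conditionally generated data
partially_synthetic['age'] = [34, 51, 29]

# the seed column is preserved as-is in the result
print(partially_synthetic['sex'].equals(seed['sex']))  # True
```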

Generate a synthetic dataset with seed

For seeded generation, start a new synthetic dataset and provide a seed dataset. To do so, you need the following:


  • A trained generator.
  • A seed dataset with the characteristics listed below.
    • The seed column names must match those in the table used for generator training.
    • Seed categorical columns must contain the same categories as in the dataset used for training. For example, Female and Male categories in a sex column must match exactly the ones used for training.
    • Bear in mind that conditionally-generated numeric and datetime values are kept within the range of the original data. For example, if the column age ranges between 18 and 80 in the original data, age 15 in a seed dataset will be clipped to 18 in the conditionally-generated data.
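The clipping behavior described above can be sketched with plain pandas (illustrative only; the column name and range bounds are assumptions):

```python
import pandas as pd

# Illustration of range clipping: assume age ranged 18-80 during training
train_min, train_max = 18, 80
seed_ages = pd.Series([15, 42, 95], name='age')
# out-of-range seed values are clipped to the training bounds
clipped = seed_ages.clip(lower=train_min, upper=train_max)
print(clipped.tolist())  # [18, 42, 80]
```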

If you use the web application, you can upload a seed dataset from the Sample size options after you start a new synthetic dataset.

  1. Start a new synthetic dataset.
  2. On the Configure Synthetic Dataset page, expand the subject table.
  3. Under Sample size, select seed.
  4. Click Upload seed file.
  5. (Optional) Adjust the rest of the generation settings.
  6. Click Start generation in the upper right.


The two examples below showcase what you can achieve with seeded generation.


The examples below are also available as a Jupyter Notebook. You can also find it in our Tutorials section.

  1. Perform multi-column dataset rebalancing. You create a seed dataset with two columns that have a new distribution. You then use the seed for conditional generation and see how the rebalancing affects the rest of the columns.
  2. Generate partially synthetic geo data. You use the original geographical coordinates from the training dataset to generate the remaining columns as synthetic data.

Rebalance datasets with seeded generation

In this example, you learn how to rebalance the US Census dataset with seeded generation, where the seed dataset contains an equal split of females and males and an income attribute that is uncorrelated with sex. The goal is to remove the gender income gap and see how the rest of the attributes in the original dataset change in the generated synthetic dataset.

The information below provides a sequence of steps with an explanation of each step. You can use the code snippets in a Jupyter Notebook and adjust them as you see fit.


To reproduce this example, you need the following Python packages.

To install in a Jupyter Notebook environment, use the following:

!pip install mostlyai pandas numpy matplotlib


  1. Import the Python packages needed for this example.
    from mostlyai import MostlyAI
    import pandas as pd
    import numpy as np
  2. Get an API key and instantiate the Python client. For details, see here.
    mostly = MostlyAI(api_key="INSERT_API_KEY")
  3. Train a generator on the US Adult dataset or get an existing one.
    • Train a new generator.
      df = pd.read_csv('')
      g = mostly.train(data=df, name="Tutorial Seeded Generation - US Census")
    • Get an existing generator trained on the US Adult dataset.
      g = mostly.generators.get("INSERT_GENERATOR_ID")
  4. Create a seed dataset.
    Create the seed dataset with a Pandas DataFrame. Use NumPy to randomly generate the seed values.
    For the sex attribute, you create a 50/50 split between Male and Female.
    For the income attribute, you keep the share of low- and high-income earners constant, but you randomize it across men and women, effectively removing the gender income gap.
    n = 48_842  # generate the same number of records as in the original dataset
    p_inc = (df.income == '>50K').mean()  # probability of the high-income category
    seed = pd.DataFrame({
        'sex': np.random.choice(['Male', 'Female'], n, p=[.5, .5]),  # 50/50 split between females and males
        'income': np.random.choice(['<=50K', '>50K'], n, p=[1-p_inc, p_inc]),  # income shares as in the original dataset
    })
  5. Generate a rebalanced dataset with the seed dataset as context.
    sd = mostly.generate(generator=g, seed=seed)
  6. Use Matplotlib to compare the age distribution of female records in the original vs. the rebalanced dataset.
    import matplotlib.pyplot as plt
    syn = sd.data()  # fetch the generated synthetic data as a DataFrame
    plt.xlim(10, 95)
    plt.title('Female Age Distribution')
    df[df.sex == 'Female'].age.plot.kde(color='black', bw_method=0.2)
    syn[syn.sex == 'Female'].age.plot.kde(color='#24db96', bw_method=0.2)
    plt.legend(['original', 'synthetic'])

With the gender income gap removed, the generated synthetic records for women are now significantly older, as the chart shows.

Seeded generation - Example 01 - Chart - Rebalance dataset

You can explore other shifts in the distributions that are generated as a consequence of the provided seed data.
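Before uploading a seed like this, you can sanity-check that the engineered distributions behave as intended. A quick sketch (the 24% high-income share is an illustrative assumption, not a value from the dataset):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 10_000
p_inc = 0.24  # assumed share of high-income earners, for illustration
seed = pd.DataFrame({
    'sex': rng.choice(['Male', 'Female'], n, p=[.5, .5]),
    'income': rng.choice(['<=50K', '>50K'], n, p=[1 - p_inc, p_inc]),
})
# the engineered 50/50 split holds up to sampling noise
share_female = (seed.sex == 'Female').mean()
# income is sampled independently of sex, so the income gap vanishes
gap = (seed[seed.sex == 'Female'].income.eq('>50K').mean()
       - seed[seed.sex == 'Male'].income.eq('>50K').mean())
print(round(share_female, 2), round(abs(gap), 2))
```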

Generate partially synthetic geographical data

For this use case, you use the 2019 AirBnB listings for Manhattan. The dataset consists of 48,895 records and 10 mixed-type columns, two of which represent the latitude and longitude of the listing. You use this dataset to generate synthetic attributes for all the actual locations contained in the original.


To reproduce this example, you need the following Python packages.

To install in a Jupyter Notebook environment, use the following:

!pip install mostlyai pandas matplotlib


  1. Import the Python packages needed for this example.
    from mostlyai import MostlyAI
    import pandas as pd
    import matplotlib.pyplot as plt
  2. Read the original dataset into a Pandas DataFrame object.
    # fetch original data
    df_orig = pd.read_csv('')
  3. Pre-process the original data.
    MOSTLY AI expects latitude and longitude in a single table column, so you need to concatenate the two into one column.
    In this example, you do not create artificial seed data. Instead, you use real data: the concatenated LAT_LONG column and the neighbourhood column as the seed DataFrame.
    df = df_orig.copy()
    # concatenate latitude and longitude to "LAT, LONG" format
    df['LAT_LONG'] = df['latitude'].astype(str) + ', ' + df['longitude'].astype(str)
    df = df.drop(columns=['latitude', 'longitude'])
    # define the list of columns to condition on
    seed_cols = ['neighbourhood', 'LAT_LONG']
    # create dataframe that will be used as seed
    df_seed = df[seed_cols]
  4. Train a generator with the pre-processed AirBnB dataset.
    # Train a generator on the pre-processed AirBnB data
    config = {
        'name': 'Conditional Generation Tutorial AirBnB',
        'tables': [{
            'name': 'AirBnB',
            'data': df,
            'modelConfiguration': {'maxTrainingTime': 2},
            'columns': [
                {'name': 'neighbourhood_group', 'included': True, 'modelEncodingType': 'CATEGORICAL'},
                {'name': 'neighbourhood', 'included': True, 'modelEncodingType': 'CATEGORICAL'},
                {'name': 'room_type', 'included': True, 'modelEncodingType': 'CATEGORICAL'},
                {'name': 'price', 'included': True, 'modelEncodingType': 'NUMERIC_AUTO'},
                {'name': 'minimum_nights', 'included': True, 'modelEncodingType': 'NUMERIC_AUTO'},
                {'name': 'number_of_reviews', 'included': True, 'modelEncodingType': 'NUMERIC_AUTO'},
                {'name': 'last_review', 'included': True, 'modelEncodingType': 'DATETIME'},
                {'name': 'reviews_per_month', 'included': True, 'modelEncodingType': 'NUMERIC_AUTO'},
                {'name': 'availability_365', 'included': True, 'modelEncodingType': 'NUMERIC_AUTO'},
                {'name': 'LAT_LONG', 'included': True, 'modelEncodingType': 'LAT_LONG'},
            ],
        }],
    }
    g_airbnb = mostly.train(config=config)
  5. Generate a partially synthetic dataset with the seed.
    # generate a synthetic dataset with a seed
    sd = mostly.generate(generator=g_airbnb, seed=df_seed)
    # start using it
    syn_partial = sd.data()
    print(f"Created synthetic data with {syn_partial.shape[0]:,} records and {syn_partial.shape[1]:,} attributes")
  6. Compare the partial synthetic data to the original data.
    %%capture --no-display
    import numpy as np
    def plot_manhattan(df, title):
        # split the "LAT, LONG" strings back into numeric coordinates
        coords = df['LAT_LONG'].str.split(', ', expand=True).astype(float)
        ax = df.assign(lat=coords[0], long=coords[1]).plot.scatter(
            x='long', y='lat', s=1,
            c=np.log(df.price.clip(lower=50, upper=2_000)))
        ax.set_title(title)
    plot_manhattan(df, 'Original Data')  # use df, which contains the LAT_LONG column
    plot_manhattan(syn_partial, 'Partially Synthetic Data')
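The LAT_LONG pre-processing from step 3 is easy to exercise in isolation. A small round-trip check (the coordinates are made up for illustration):

```python
import pandas as pd

listings = pd.DataFrame({'latitude': [40.7128, 40.7306],
                         'longitude': [-74.006, -73.9352]})
# concatenate into the single "LAT, LONG" string column the model expects
listings['LAT_LONG'] = listings['latitude'].astype(str) + ', ' + listings['longitude'].astype(str)
# ...and split it back into numeric coordinates, e.g. for plotting
coords = listings['LAT_LONG'].str.split(', ', expand=True).astype(float)
print(listings['LAT_LONG'].tolist())  # ['40.7128, -74.006', '40.7306, -73.9352']
```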

What's next

With seeded generation, you can probe the generative model with a specific context, whether that is hypothetical (example 1) or real (example 2), and gain corresponding insights for specific scenarios.

In addition, you can:

  • use a different set of fixed columns for the US Census dataset
  • generate a very large number of records for a fixed value set, for example, 1 million records of 48-year-old female professors
  • generate a fully synthetic version of the AirBnB Manhattan dataset
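The fixed-value-set idea in the second bullet amounts to a seed of identical context rows. A minimal sketch (the occupation category name is an assumption about how the US Census dataset encodes it):

```python
import pandas as pd

# build a fixed-value seed: many identical context rows
n = 1_000_000
seed = pd.DataFrame({
    'age': [48] * n,
    'sex': ['Female'] * n,
    'occupation': ['Prof-specialty'] * n,  # assumed category name in the US Census data
})
print(len(seed))  # 1000000
```

Passing this seed to a trained generator then yields 1 million conditionally generated records that all share the fixed context values.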