
Target Data Models (TDMs)

Target data models define the structure of the output data in all Ingestro pipelines. They ensure that the mapped and transformed data follows the exact schema required by the destination system.

You can create, read (all or by ID), update, and delete target data models through the Target Data Model endpoints of the Ingestro Pipeline API.

Within a target data model, each column can be configured with a UI-facing name (label) and a technical name (key). You can also define validation rules, such as when a column must be mapped (mappingValidation) and when a value is considered invalid and should display an error (dataValidations). In addition, you can specify the column type (columnType), which determines both the applicable validations and the data type in the final output. Beyond standard types like string, float, and int, you can also define date or timestamp columns, as well as category or boolean columns. This ensures that the pipeline’s output has the correct structure and that all values are provided in the expected format.

For example, if you want the output to contain objects with the keys customer_name, domain_name, region, deal_size, address, and done, you would define the target data model as follows:

Example

[
{
label: "Customer Name",
key: "customer_name",
columnType: "string"
},
{
label: "Domain Name",
key: "domain_name",
columnType: "url"
},
{
label: "Region",
key: "region",
columnType: "string"
},
{
label: "Deal Size",
key: "deal_size",
columnType: "float"
},
{
label: "Address",
key: "address",
columnType: "string"
},
{
label: "Done",
key: "done",
columnType: "boolean"
}
]

How to create a TDM with the Ingestro Pipeline API

The Ingestro Pipeline API provides a convenient way to create a target data model (TDM) for data imports. First, authenticate using the Authentication API, then use the Target Data Model API to create your TDMs, as sketched below.
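
A minimal sketch of this flow in JavaScript, assuming a hypothetical base URL, endpoint path, request body shape, and bearer-token authentication (consult the Authentication API and Target Data Model API references for the actual routes and payloads):

// Hypothetical endpoint path and body shape -- check the API reference for the real ones.
const BASE_URL = "https://api.ingestro.example"; // placeholder base URL
const accessToken = "<token obtained via the Authentication API>";

const columns = [
  { label: "Customer Name", key: "customer_name", columnType: "string" },
  { label: "Deal Size", key: "deal_size", columnType: "float" },
];

// Create the TDM (illustrative request; adjust path and payload to the API reference)
const response = await fetch(`${BASE_URL}/target-data-models`, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${accessToken}`,
  },
  body: JSON.stringify({ columns }),
});
const createdTdm = await response.json();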

After defining the general structure of your target data model, we recommend adding validation rules to ensure that user-submitted data matches the required schema and that values follow the expected formatting rules.

The column class includes several configurable properties. The following section lists all properties, including their default values, data types, and descriptions.

label (required)

Type: string
Description: label is one of the two required properties of a column object. Its value is displayed in the UI to the user who goes through the importing workflow. Together with the key value, it is used for column matching.

key (required)

Type: string
Description: The value of key defines how your application refers to a column. It is not displayed to your users, but, like the label, it is used for column matching.

alternativeMatches

Type: [ string, ... ]
Description: Ingestro combines machine learning with a logic-based component to provide intelligent and advanced mapping functionality. To force a more precise match, the alternativeMatches property adds an extra matching layer: every item in the alternativeMatches array is considered when calculating the similarity between imported and target model columns. We recommend adding abbreviations and synonyms here, as in the sketch below.
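
For example (the synonyms below are illustrative), a customer name column could list common header variations and abbreviations:

[
  {
    label: "Customer Name",
    key: "customer_name",
    columnType: "string",
    // illustrative abbreviations and synonyms; adjust to the headers your users typically import
    alternativeMatches: ["client name", "account name", "cust name"],
  },
];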

mappingValidation

Type: MappingValidation
Description: With the mapping validation option, you can define the conditions under which a column is required to be mapped. You can also add custom error messages to guide users on which columns need to be mapped and when. The mappingValidation object can contain two keys:
  • logic
  • errorMessage

logic

Type: string
Description: This option enables you to create complex conditional validation rules using logical operators and the mapped() helper function. See the validation examples below for details.

errorMessage

Type: string
Description: This option lets you add a fully customized error message to a validation rule, which appears at the column level. It replaces the default message.

dataValidations

Type: [ DataValidation, ... ]
Description: With the data validations option, you can define the preferred format for the values in a column, as well as whether the values are required or have to be unique. You can also add customized error messages to guide your users on how to correct their input. Each object in the dataValidations array can contain two keys:
  • logic
  • errorMessage

logic

Type: string
Description: With this option, you can write complex, conditional validation rules with logical operators and our helper functions (unique, regex, includes, isEmpty, valueAt, contains). This lets you ensure the highest possible data quality (see the validation examples below for more details).

errorMessage

Type: string
Description: With this option, you can add a fully customized error message to each validation rule. This notification text overrides the pre-built error message.

advancedValidations

Type: [ AdvancedValidation, ... ]
Description: With the advanced validations option, you can specify the endpoints for validating imported data. You have full control over which columns are sent for validation and which ones trigger the validation process. You can also add customized error messages to guide your users on how to correct their input. Each object in the advancedValidations array can contain the following keys:
  • url
  • method
  • headers
  • authentication
  • payloadConfiguration
  • triggerConfiguration

url

Type: string
Description: With this option, you can define the endpoint that the data is sent to for validation.

method

Type: string
Description: Define the REST API method that should be used when sending the data to the defined endpoint. You can set it to "POST", "PUT", or "PATCH".

headers

Type: object
Description: With this option, you can define the REST API headers of the request sent to the defined endpoint.

authentication

Type: object
Description: With this option, you can add the authentication endpoint that is used to retrieve the required authentication information sent with the request to the defined endpoint. The object contains the following fields: refresh_url, method, and headers.

payloadConfiguration

Type: object
Description: With this option, you can control which columns are sent to your endpoint and how the data is batched. Use the columns field to specify which columns to include in the request payload. If you leave this field empty, all columns are sent to the endpoint. Additionally, you can define the batchSize (default 25,000 rows), which determines how many rows are sent in a single request. If the total number of rows exceeds the defined batch size, the system automatically splits the data and sends multiple requests to your endpoint.

triggerConfiguration

Type: object
Description: With this option, you can define which columns trigger sending the data to the defined endpoint. Add column names to the columns list to define these trigger columns. When a change occurs in any of these columns, the data is sent to your endpoint. If you leave this list empty, the system sends data for every change in any row or column.
Note:

If a request to a configured endpoint fails, the pipeline execution fails. If this happens during setup or while fixing an execution, the UI is blocked until the error is resolved.


advancedCleanings

Type: [ AdvancedCleaning, ... ]
Description: With the advanced cleanings option, you can specify the endpoints for cleaning imported data. You have full control over which columns are sent for cleaning and which ones trigger the cleaning process. Your users are then informed which values were changed through those cleanings. Each object in the advancedCleanings array can contain the following keys:
  • url
  • method
  • headers
  • authentication
  • payloadConfiguration
  • triggerConfiguration

url

Type: string
Description: With this option, you can define the endpoint that the data is sent to for cleaning.

method

Type: string
Description: Define the REST API method that should be used when sending the data to the defined endpoint. You can set it to "POST", "PUT", or "PATCH".

headers

Type: object
Description: With this option, you can define the REST API headers of the request sent to the defined endpoint.

authentication

Type: object
Description: With this option, you can add the authentication endpoint that is used to retrieve the required authentication information sent with the request to the defined endpoint. The object contains the following fields: refresh_url, method, and headers.

payloadConfiguration

Type: object
Description: With this option, you can control which columns are sent to your endpoint and how the data is batched. Use the columns field to specify which columns to include in the request payload. If you leave this field empty, all columns are sent to the endpoint. Additionally, you can define the batchSize (default 25,000 rows), which determines how many rows are sent in a single request. If the total number of rows exceeds the defined batch size, the system automatically splits the data and sends multiple requests to your endpoint.

triggerConfiguration

Type: object
Description: With this option, you can define which columns trigger sending the data to the defined endpoint. Add column names to the columns list to define these trigger columns. When a change occurs in any of these columns, the data is sent to your endpoint. If you leave this list empty, the system sends data for every change in any row or column.
Note:

If a request to a configured endpoint fails, the pipeline execution fails. If this happens during setup or while fixing an execution, the UI is blocked until the error is resolved.

Validation examples

Mapping Validation

Mapping validations help to ensure data integrity by letting you define when a column must be mapped.

You can define conditions using logical operators like AND (&&) and OR (||) to create complex validation rules with nested conditions. To create a validation rule, write an expression representing the valid case. If the logic returns false, the defined error message will be displayed.

Here are some examples of mapping validations with different levels of complexity and how to add them to your column definitions:

Column must be mapped

[
{
key: "customer_name",
label: "Customer Name",
columnType: "string",
mappingValidation: {
logic: "mapped('customer_name')",
errorMessage: "Customer Name needs to be mapped.",
},
},
];

At least one of two columns must be mapped

[
{
key: "company_name",
label: "Company Name",
columnType: "string",
mappingValidation: {
logic: "mapped('company_id')",
errorMessage: "Either Company Name or Company ID must be mapped.",
},
},
{
key: "company_id",
label: "Company ID",
columnType: "string",
mappingValidation: {
logic: "mapped('company_name') ",
errorMessage: "Either Company Name or Company ID must be mapped.",
},
},
];

Column must be mapped when another column is mapped, and another is unmapped

[
{
key: "employee_id",
label: "Employee ID",
columnType: "string",
mappingValidation: {
logic: "mapped('employee_id') || !mapped('department') || mapped('manager_id')",
errorMessage: "Employee ID needs to be mapped when Department is mapped and Manager ID is not mapped.",
},
},
{
key: "department",
label: "Department",
columnType: "string",
},
{
key: "manager_id",
label: "Manager ID",
columnType: "string",
},
];

Either-or mapping requirement with multiple columns

[
{
key: "first_name",
label: "First Name",
columnType: "string",
mappingValidation: {
logic: "mapped('first_name') || mapped('full_name')",
errorMessage: "Either First Name & Last Name together, or Full Name must be mapped.",
},
},
{
key: "last_name",
label: "Last Name",
columnType: "string",
mappingValidation: {
logic: "mapped('last_name') || mapped('full_name')",
errorMessage: "Either First Name & Last Name together, or Full Name must be mapped.",
},
},
{
key: "full_name",
label: "Full Name",
columnType: "string",
mappingValidation: {
logic: "mapped('first_name') || mapped('last_name') || mapped('full_name')",
errorMessage: "Either First Name & Last Name together, or Full Name must be mapped.",
},
},
];

One of multiple columns required

[
{
key: "company_id",
label: "Company ID",
columnType: "string",
mappingValidation: {
logic: "mapped('company_id') || mapped('customer_id') || mapped('employee_id')",
errorMessage: "Either Company ID, Customer ID, or Employee ID must be mapped.",
},
},
{
key: "customer_id",
label: "Customer ID",
columnType: "string",
mappingValidation: {
logic: "mapped('company_id') || mapped('customer_id') || mapped('employee_id')",
errorMessage: "Either Company ID, Customer ID, or Employee ID must be mapped.",
},
},
{
key: "employee_id",
label: "Employee ID",
columnType: "string",
mappingValidation: {
logic: "mapped('company_id') || mapped('customer_id') || mapped('employee_id')",
errorMessage: "Either Company ID, Customer ID, or Employee ID must be mapped.",
},
},
];

Data Validations

Data validations are a powerful tool for ensuring data integrity within your columns. They allow you to specify criteria that a value must meet to be considered valid, and you can add multiple validations to each column.

You can define conditions using logical and comparison operators to create complex validation rules with nested conditions. To create a validation rule, write an expression representing the valid case. If the logic returns false, the defined error message will be displayed.

Available operators:

  • Logical: AND (&&), OR (||)
  • Comparison: EQUAL (==), NOT EQUAL (!=), GREATER THAN (>), LESS THAN (<), GREATER THAN OR EQUAL (>=), LESS THAN OR EQUAL (<=)
info

Note: === and !== are not supported.

Referencing column values:

  • Use row.<column_key> to access the value of a column in the current row (e.g., row.customer_name, row.deal_size)

Helper functions:

We also provide helper functions that you can use in conjunction with your validation expressions to create more sophisticated rules:

  • unique(['<column_key>', ...]) - Ensures uniqueness across one or multiple columns
  • regex('<column_key>', { expression: '' }) - Validates values against a regular expression pattern
  • includes(['<value>', ...], '<column_key>') - Checks if a column value is in a list of allowed values
  • isEmpty('<column_key>') - Checks if a column value is empty
  • valueAt('<column_key>', <row_index>) - Accesses the value of a column at a specific row index
  • contains('<column_key>', <search_string>) - Checks if a column value contains a specific substring
info

Empty cells are treated as null. Therefore, we recommend using isEmpty() for empty checks instead of != "".

Here are some examples of data validations with different levels of complexity and how to add them to your column definitions:

Column's values are required / not allowed to be empty

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty('column_a')",
errorMessage: "This value is required.",
},
],
},
];

Column's values must be unique

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "unique(['column_a'])",
errorMessage: "This value must be unique.",
},
],
},
];

Column's value is required if another column has a value

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty('column_a') || isEmpty('column_b')",
errorMessage: "This value is required when Column B has a value.",
},
],
},
{
key: "column_b",
label: "Column B",
columnType: "string",
},
];

Column's value is required if another column is empty

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty('column_a') || !isEmpty('column_b')",
errorMessage: "This value is required when Column B is empty.",
},
],
},
{
key: "column_b",
label: "Column B",
},
];

Column's value must be a number between X and Y

[
{
key: "column_a",
label: "Column A",
columnType: "int",
dataValidations: [
{
logic: "!isEmpty('column_a') && (row.column_a >= 10 && row.column_a <= 100)",
errorMessage: "This value must be a number between 10 and 100.",
},
],
},
];

Column's value has a character limit of X

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "regex('column_a', { expression: '^.{0,50}$' })",
errorMessage: "This value must not exceed 50 characters.",
},
],
},
];

One of multiple columns must have a value / is not allowed to be empty

[
{
key: "customer_id",
label: "Customer ID",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty('customer_id') || !isEmpty('company_id') || !isEmpty('employee_id')",
errorMessage: "At least one of Customer ID, Company ID, or Employee ID must be provided."
}
]
},
{
key: "company_id",
label: "Company ID",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty('customer_id') || !isEmpty('company_id') || !isEmpty('employee_id')",
errorMessage: "At least one of Customer ID, Company ID, or Employee ID must be provided."
}
]
},
{
key: "employee_id",
label: "Employee ID",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty('customer_id') || !isEmpty('company_id') || !isEmpty('employee_id')",
errorMessage: "At least one of Customer ID, Company ID, or Employee ID must be provided."
}
]
}
]

All entries across multiple columns must be unique

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "unique(['column_a', 'column_b', 'column_c'])",
errorMessage: "The combination of Column A, Column B, and Column C must be unique."
}
]
},
{
key: "column_b",
label: "Column B",
columnType: "string",
dataValidations: [
{
logic: "unique(['column_a', 'column_b', 'column_c'])",
errorMessage: "The combination of Column A, Column B, and Column C must be unique."
}
]
},
{
key: "column_c",
label: "Column C",
columnType: "string",
dataValidations: [
{
logic: "unique(['column_a', 'column_b', 'column_c'])",
errorMessage: "The combination of Column A, Column B, and Column C must be unique."
}
]
}
]

Column's value is required if another column's value is a specific value from a list

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "!(includes(['Germany', 'Canada'], 'column_b') || !isEmpty('column_a'))",
errorMessage: "Column A is required when Column B is Germany or Canada.",
},
],
},
{
key: "column_b",
label: "Column B",
columnType: "string",
},
];

Column's value is required when the value of the row before is empty

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "!isEmpty(valueAt('column_a', index - 1)) || !isEmpty('column_a')",
errorMessage: "Column A is required when the value of the row before is empty.",
},
],
},
];

Column's value is required if another column contains a specific substring

[
{
key: "column_a",
label: "Column A",
columnType: "string",
dataValidations: [
{
logic: "!contains('column_b', 'test') || !isEmpty('column_a')",
errorMessage: "Column A is required when Column B contains 'test'.",
},
],
},
{
key: "column_b",
label: "Column B",
columnType: "string",
},
];

Advanced Validations

Advanced validations are a powerful way to ensure data integrity within your system. They allow you to define specific endpoints for validating imported data, giving you control over which columns are sent for validation and which ones trigger the process. To set up an advanced validation rule, simply specify the endpoint, method, and headers. If your endpoint returns an error message for a value, that message is displayed to the user.

Example

advancedValidations: [
  {
    url: "https://your-endpoint.com",
    method: "POST",
    headers: {},
    authentication: {
      refresh_url: "https://your-authentication-endpoint.com",
      method: "POST",
      headers: {},
    },
    payloadConfiguration: {
      columns: ['company_name', 'company_id'],
      batchSize: 10000,
    },
    triggerConfiguration: {
      columns: ['company_id', 'company_name'],
    },
  },
]

Payload

[
  {
    data: {
      company_id: "13132",
      company_name: "company A",
    },
    index: 1,
  },
  {
    data: {
      company_id: "879632",
      company_name: "company B",
    },
    index: 2,
  },
]
info

Note: The index field in the payload starts at 1 (not 0) and represents the row number in the dataset.

Response

[
  {
    errors: {
      company_id: "Key should be unique",
      company_name: null,
    },
    index: 1,
  },
  {
    errors: {
      company_id: "Key should be unique",
      company_name: null,
    },
    index: 2,
  },
]

Advanced Cleanings

Advanced cleanings are a powerful way to ensure data integrity within your system. They allow you to define specific endpoints for cleaning imported data, giving you control over which columns are sent for cleaning and which ones trigger the process. To set up an advanced cleaning, simply specify the endpoint, method, and headers.

Example

advancedCleanings: [
  {
    url: "https://your-endpoint.com",
    method: "POST",
    headers: {},
    authentication: {
      refresh_url: "https://your-authentication-endpoint.com",
      method: "POST",
      headers: {},
    },
    payloadConfiguration: {
      columns: ['company_name', 'company_id'],
      batchSize: 10000,
    },
    triggerConfiguration: {
      columns: ['company_id', 'company_name'],
    },
  },
]

Payload

[
  {
    data: {
      company_id: "13132",
      company_name: "company A",
    },
    index: 1,
  },
  {
    data: {
      company_id: "879632",
      company_name: "company B",
    },
    index: 2,
  },
]
info

Note: The index field in the payload starts at 1 (not 0) and represents the row number in the dataset.

Response

[
{
data: {
company_id: "12345",
company_name: null,
},
index: 1,
action_type: "UPDATE",
},
{
data: {},
index: 2,
action_type: "DELETE",
},
{
data: {
company_id: "5678",
company_name: "new company",
},
index: 3,
action_type: "CREATE",
},
];

The action_type field for each row enables powerful data manipulation capabilities:

  • UPDATE: Modify existing rows with new values
  • DELETE: Remove unwanted rows from the dataset
  • CREATE: Add new rows to the end of the dataset

columnType

Type"int", "float", "string", and more
Optional
DescriptionThis option allows you to define the type of the column. You can either choose if the column should contain values which are an int, a float, a string or many more.
info

You can find a full list of column types with pre-built data validation rules in our column types documentation.


optionConfiguration

Type: object
Optional, but required if columnType is "category", "boolean", "country_code_alpha_2", "country_code_alpha_3", or "currency_code"
Description: The optionConfiguration property enables intelligent option mapping for dropdown-based column types. It defines the available options, controls how imported values are automatically mapped to these options, and configures the behavior of the dropdown interface. You can define whether the dropdown options are static or fetched from an endpoint every time before the import.

The optionConfiguration object supports the following column types:

  • category - Custom dropdown options you define
  • boolean - Yes/No options (automatically configured)
  • country_code_alpha_2 - Two-letter country codes (e.g., "US", "DE")
  • country_code_alpha_3 - Three-letter country codes (e.g., "USA", "DEU")
  • currency_code - Currency codes (e.g., "USD", "EUR")

How Option Mapping Works

When users import data, the system automatically maps their input values to your predefined options using our matching algorithm. You can modify the layers and threshold that the matching algorithm should use. When all layers are enabled, they are executed in the following order:

  1. Exact matching - Direct matches between input and option labels
  2. Smart matching - AI-powered matching that understands context and synonyms using state-of-the-art LLMs
  3. Fuzzy matching - Similarity-based matching for typos and variations

options

Type: [ Option, ... ]
Required if: columnType is "category" and optionSource is "STATIC"
Description: An array of available options for the dropdown. Each option defines what users can select and how imported values map to it. For boolean, country_code_alpha_2, country_code_alpha_3, and currency_code column types, options are automatically populated by the system. For category columns, you must define the options yourself.

Each option object contains the following properties:

label

Type: string
Required
Description: The display name shown to users in the dropdown interface during the data transformation and review steps. This is what users see when mapping their imported values to your options.

value

Type: string, number, or boolean
Required
Description: The actual value stored in the final output data. This is what your application receives, not the label. For example, a boolean option might display "Yes" to the user but return true to your target system.
info

Note: value is not used for matching imported values to the options. Only the label and each entry in alternativeMatches are used for the mapping. If you want the mapping module to consider the value as well, add it to alternativeMatches, as in the sketch below.
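
For example (the option values are illustrative), an option whose value differs from its label can repeat the value in alternativeMatches so that imported cells containing the raw value still map to it:

options: [
  {
    label: "Germany",
    value: "DE",
    type: "STRING",
    // repeating the value lets imported cells containing "DE" match this option as well
    alternativeMatches: ["DE", "Deutschland"],
  },
],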

type

Type"STRING" | "NUMBER" | "BOOLEAN"
Required
DescriptionThe data type of the value. Use "STRING" for text values, "NUMBER" for numeric values, and "BOOLEAN" for true/false values. This must match the data type of the value property.

alternativeMatches

Type: [ string, ... ]
Optional
Description: An array of alternative text values that should automatically map to this option during import. This is particularly useful for handling variations, abbreviations, and common synonyms. The matching system uses these alternatives to improve mapping accuracy.

description

Type: string
Optional
Description: An optional description providing additional context about the option. This can help users understand when to use this option.

mappingConfiguration

Type: object
Optional
Description: Controls how the system automatically maps imported values to your predefined options. You can configure which matching strategies to use and how strict the matching should be.

layers

Type[ "EXACT" | "SMART" | "FUZZY"]
Optional
Default["EXACT", "SMART", "FUZZY"]
DescriptionDefines the mapping layers that are applied to find matches between imported value and defined options.

Available layers:

  • EXACT - Matches only when the input exactly matches an option label or alternative match
    • Returns 1 for exact matches
    • 0.9999 for exact matches that differ only in upper/lower case
    • 0.9998 for exact matches that differ by at least one special character or space
    • 0.9997 for exact matches that differ in upper/lower case and by at least one special character or space
  • SMART - Uses AI to understand context, synonyms, and semantic meaning for intelligent matching
  • FUZZY - Matches based on text similarity, helpful for typos and slight variations
info

We recommend using all three layers ["EXACT", "SMART", "FUZZY"] for the best user experience. This provides accurate matching while being forgiving of variations and typos.

threshold

Type: number (greater than 0.0 and up to 1.0)
Optional
Default: 0.6
Description: This threshold defines the minimum confidence required for a match to count as valid. Lower values make the matching more flexible, while higher values apply stricter matching. The default is 60%.

multiSelect

Type: boolean
Optional
Default: false
Description: When true, allows users to select multiple options for a single cell. When false, only one option can be selected per cell. boolean columns do not support multiple selections.
info

Output format:

  • When multiSelect: true, the output is an array of values: ["value1", "value2"]
  • When multiSelect: false, the output is a single value: "value1" or true

optionSource

Type"STATIC" | "DYNAMIC"
Optional
Default"STATIC"
DescriptionDefines where the options come from. Use "STATIC" for predefined options that don't change, or "DYNAMIC" to fetch options from an external API endpoint.
  • STATIC - Options are defined in the options array and remain constant
  • DYNAMIC - Options are fetched from an external API using the configuration in dynamicOptionFetch

dynamicOptionFetch

Type: object
Required when: optionSource is "DYNAMIC"
Description: Configuration for fetching options from an external API. This allows you to keep dropdown options synchronized with your system's current data. Options are fetched when the TDM is loaded and can be refreshed as needed.

url

Type: string
Required
Description: The API endpoint URL to fetch options from.

method

Type"GET" | "POST" | "PUT" | "PATCH"
Required
DescriptionThe HTTP method to use when fetching options.

headers

Type: object
Optional
Description: HTTP headers to include in the request. Use this for API keys, content types, or other required headers.

authentication

Type: object
Optional
Description: Configuration for authenticating with the options API. This is used when your endpoint requires authentication tokens that need to be refreshed.

The authentication object contains:

  • refresh_url - The endpoint to retrieve authentication tokens
  • method - The HTTP method for the authentication request
  • headers - Headers to include in the authentication request

Examples

Category column with custom options

[
{
key: "product_category",
label: "Product Category",
columnType: "category",
optionConfiguration: {
options: [
{
label: "Electronics",
value: "electronics",
type: "STRING",
alternativeMatches: ["electronic", "tech", "technology"],
description: "Electronic devices and accessories",
},
{
label: "Clothing",
value: "clothing",
type: "STRING",
alternativeMatches: ["clothes", "apparel", "fashion"],
description: "Clothing and fashion items",
},
{
label: "Home & Garden",
value: "home_garden",
type: "STRING",
alternativeMatches: ["home", "garden", "household"],
description: "Home improvement and garden supplies",
},
],
mappingConfiguration: {
layers: ["EXACT", "SMART", "FUZZY"],
threshold: 0.6,
},
multiSelect: false,
optionSource: "STATIC",
},
},
];

Category column with multiple selection enabled

[
{
key: "skills",
label: "Skills",
columnType: "category",
optionConfiguration: {
options: [
{
label: "JavaScript",
value: "javascript",
type: "STRING",
alternativeMatches: ["js", "node", "nodejs"],
},
{
label: "Python",
value: "python",
type: "STRING",
alternativeMatches: ["py"],
},
{
label: "React",
value: "react",
type: "STRING",
alternativeMatches: ["reactjs", "react.js"],
},
],
mappingConfiguration: {
layers: ["EXACT", "SMART", "FUZZY"],
threshold: 0.6,
},
multiSelect: true,
optionSource: "STATIC",
},
},
];
info

When multiSelect: true, the output will be an array of values like ["javascript", "react"] instead of a single value.

Category column with dynamic options from API

[
{
key: "department",
label: "Department",
columnType: "category",
optionConfiguration: {
options: [],
mappingConfiguration: {
layers: ["EXACT", "SMART", "FUZZY"],
threshold: 0.6,
},
multiSelect: false,
optionSource: "DYNAMIC",
dynamicOptionFetch: {
url: "https://api.yourcompany.com/departments",
method: "GET",
headers: {
"Content-Type": "application/json",
"X-API-Key": "your-api-key",
},
authentication: {
refresh_url: "https://api.yourcompany.com/auth/token",
method: "POST",
headers: {
"Content-Type": "application/json",
},
},
},
},
},
];
info

When using optionSource: "DYNAMIC", the options array will be populated automatically from your API. Your API should return an array of option objects with label, value, type, and optionally alternativeMatches and description, as sketched below. The fetch is executed prior to every import. If the fetch fails, the execution fails.
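
A sketch of the kind of response such an endpoint might return (the department options are illustrative):

[
  {
    label: "Engineering",
    value: "engineering",
    type: "STRING",
    alternativeMatches: ["eng", "development"],
    description: "Software and hardware engineering teams",
  },
  {
    label: "Sales",
    value: "sales",
    type: "STRING",
  },
]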

Country code column (automatically configured)

[
{
key: "country",
label: "Country",
columnType: "country_code_alpha_2",
optionConfiguration: {
mappingConfiguration: {
layers: ["EXACT", "SMART", "FUZZY"],
threshold: 0.6,
},
multiSelect: false,
optionSource: "STATIC",
},
},
];
info

For country_code_alpha_2, country_code_alpha_3, and currency_code column types, the options are automatically populated by the system with all available codes. You don't need to define them manually.

Boolean column with option mapping

[
{
key: "is_active",
label: "Is Active",
columnType: "boolean",
optionConfiguration: {
mappingConfiguration: {
layers: ["EXACT", "SMART", "FUZZY"],
threshold: 0.6,
},
multiSelect: false,
optionSource: "STATIC",
},
},
];
info

For boolean columns, the options are automatically configured by the system. You don't need to define them manually unless you want to customize the alternative matches.


dateTimeFormat

Type: string
Optional, but required if columnType is "date"
Description: With this key, you can support all your preferred date and timestamp formats, such as MM/DD/YYYY, DD.MM.YYYY, YYYY-MM-DD, etc.

Use the following date variables to create your desired format:

Type | Syntax | Output
Month | M | 1, 2, 3, ..., 12
Month | Mo | 1st, 2nd, 3rd, ..., 12th
Month | MM | 01, 02, 03, ..., 12
Month | MMM | Jan, Feb, Mar, ..., Dec
Month | MMMM | January, February, March, ..., December
Day | D | 1, 2, 3, ..., 31
Day | Do | 1st, 2nd, 3rd, ..., 31st
Day | DD | 01, 02, 03, ..., 31
Day | DDD | 1, 2, 3, ..., 365
Day | DDDD | 001, 002, ..., 365
Year | Y | 1970, 1971, 1972, ..., +10000
Year | YY | 70, 71, 72, ..., 27
Year | YYYY | 1970, 1971, 1972, ..., 2027
Year | YYYYYY | -001970, -001971, -001972, ..., +001907
Hour | H | 0, 1, 2, ..., 23
Hour | HH | 00, 01, 02, ..., 23
Hour | h | 1, 2, 3, ..., 12
Hour | hh | 01, 02, 03, ..., 12
Hour | k | 1, 2, 3, ..., 24
Hour | kk | 01, 02, 03, ..., 24
Minute | m | 0, 1, 2, ..., 59
Minute | mm | 00, 01, 02, ..., 59
Second | s | 0, 1, 2, ..., 59
Second | ss | 00, 01, 02, ..., 59
Time zone | Z | -07:00, -06:00, -05:00, ..., +07:00
Time zone | ZZ | -0700, -0600, -0500, ..., +0700
Unix timestamp | X | 855599530642
AM/PM | A | AM, PM
AM/PM | a | am, pm
Quarter | Q | 1, 2, 3, 4
Quarter | Qo | 1st, 2nd, 3rd, 4th

info

This table is based on the original table from the open source library Moment.js. You can find the original table and its documentation on the Moment.js website. Please note that the table has been adjusted; you can use all variables given in the original table apart from the Day of Week ones.


In the following, you can find an example of how to implement a date column with the format MM/DD/YYYY and a timestamp column with the format YYYY-MM-DDTHH:mm:ss:

Example

[
{
label: "Date",
key: "date",
columnType: "date",
dateTimeFormat: "MM/DD/YYYY",
},
{
label: "Timestamp",
key: "timestamp",
columnType: "date",
dateTimeFormat: "YYYY-MM-DDTHH:mm:ss",
},
]

numberFormat

Type"eu" | "us"
Optional but required ifcolumnType: "int", "float", "currency_eur", "currency_usd", "percentage"
Default"eu"
DescriptionIt affects how the numbers will be displayed at the review step. If the value is "eu", then a comma will used as a decimal delimiter, and dots will be used as thousands delimiters. If the value is "us", then a dot will used as a decimal delimiter, and commas will be used as thousands delimiters.
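
For example (illustrative column definition), a float column that should be displayed with US-style formatting at the review step:

[
  {
    label: "Deal Size",
    key: "deal_size",
    columnType: "float",
    // "us": dot as decimal delimiter, commas as thousands delimiters (e.g. 1,234.56)
    numberFormat: "us",
  },
]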