This article describes how to import CSV files as a data source, allowing offline data to become enriched and actionable within the Customer Data Hub. To be successfully imported, the CSV files must be formatted correctly. For information on the file format, see Preparing a CSV File for Import.

In this article:


  • Tealium AudienceStream CDP

  • Tealium EventStream API Hub

How It Works

The File Import feature provides the ability to import a CSV file to supplement online visitor profiles with valuable offline data. Using this feature, you can import a CSV file using a file transfer service, such as an Amazon S3 Bucket. The Tealium server connects to the file service, reads the file, and ingests the data. Each imported row is processed as an event. After the data is ingested, it can then be enriched, stitched to existing visitor profiles, and sent to other vendors.

There are two steps in setting up a file import as a data source:

  1. Set up a file import data source
    Setting up a file import data source requires that you prepare a CSV file, configure column mappings, and assign a file transfer service.
  2. Upload files to your file transfer service
    After you set up your data source, upload your CSV files to your file service. File uploading is done outside of Tealium.

CSV Column Mapping Configuration

The column mapping configuration determines the event attributes that correspond to each column in the CSV file. The column names are often different from the attribute names in the Customer Data Hub, so this mapping ensures that the data is imported properly. For example, a CSV file might have a column named postalCode, but the matching event attribute is named customer_zip, so a column mapping is needed to associate the two.
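
The mapping described above can be pictured as a simple rename step applied to each row. The following sketch uses hypothetical column and attribute names (postalCode, customer_zip, and so on) and is not the actual import implementation:

```python
import csv
import io

# Hypothetical mapping from CSV column names to event attribute names.
COLUMN_MAPPING = {
    "postalCode": "customer_zip",
    "email": "customer_email",
}

def rows_to_events(csv_text):
    """Yield one event dict per CSV row, renaming columns per the mapping."""
    reader = csv.DictReader(io.StringIO(csv_text))
    for row in reader:
        yield {COLUMN_MAPPING[col]: value
               for col, value in row.items()
               if col in COLUMN_MAPPING}

sample = "postalCode,email\n92121,jane@example.com\n"
events = list(rows_to_events(sample))
# events[0] == {"customer_zip": "92121", "customer_email": "jane@example.com"}
```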

Mappings can be configured based on an existing event specification or as a custom event.

Event Spec Mapping

When a file import uses an event specification mapping, the event attributes are pre-selected and you specify the CSV column name that corresponds to each attribute. Each row is processed as an event of the selected specification, for example tealium_event = "purchase".

Custom Mapping

When a file import uses a custom event mapping, you specify the event attribute that corresponds to each CSV column. Each row is processed as an event with the following event identifier:

tealium_event = "imported"

File Transfer Service

The file transfer service is a secure location where you upload the files to be imported. The following file transfer services are supported:

  • Amazon S3 (Tealium bucket or your own bucket)

    Tealium uses VPC endpoints to access Tealium S3 buckets directly through the AWS network. Use IAM credentials to allow Tealium to access your own bucket via our VPC Internet Gateway.

  • Microsoft Azure File/Blob Storage
    Supports the following authentication methods:
    • Password
    • Upload Private Key File
    • Generate Key Pair

If you use your own file transfer service, be sure to have the connection details ready before proceeding.

File Import Process

After a file transfer service is assigned to a file import data source, you upload files to the service. The system then uses the following order of operations:

  1. Check for New Files
    The system checks the file transfer service for new files every 10 minutes.
  2. Copy New Files
    When a new file is detected, it is copied from the file transfer location and processed in the Customer Data Hub.
  3. Match Filename Prefix to File Import Data Source
    The prefix of the filename is used to identify which file import data source to use when importing the data in the file.
  4. Process Files
    The header line is read to identify the attributes being ingested. From there, the following processing is performed:
    • Visitor Lookup
      The visitor ID is used for a lookup of the visitor record in AudienceStream. If an existing visitor record is not found, a new one is created.

      Grouping rows with the same visitor ID increases the speed of the import.

    • Attribute Enrichment
      The visitor record is enriched according to the attributes imported and the existing enrichments in your account.
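
Because grouping rows with the same visitor ID speeds up the import, you can sort your file by the visitor ID column before uploading. A minimal sketch (the id_column name, here "email", depends on your own mapping):

```python
import csv
import io

def group_rows_by_visitor(csv_text, id_column):
    """Return the CSV text with rows sorted so that rows sharing a
    visitor ID are adjacent. id_column is whatever your mapping uses."""
    reader = csv.DictReader(io.StringIO(csv_text))
    rows = sorted(reader, key=lambda row: row[id_column])
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=reader.fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return out.getvalue()

sample = ("email,order_total\n"
          "sam@example.com,120.00\n"
          "jane@example.com,49.99\n"
          "sam@example.com,15.00\n")
grouped = group_rows_by_visitor(sample, "email")
# Rows for sam@example.com are now adjacent.
```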

Setting Up a File Import Data Source

Before you begin, create a sample CSV file (less than 1,000 rows) to use during the setup process. This sample file is used to automatically detect the column mappings.
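
One way to produce such a sample file is with Python's csv module, which writes UTF-8 without a BOM by default. Column names and values here are illustrative only:

```python
import csv

# Write a small sample file (well under the 1,000-row limit).
# Opening with encoding="utf-8" writes no BOM; "utf-8-sig" would add one.
with open("sample_import.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["email", "postalCode", "order_total"])  # header row
    writer.writerow(["jane@example.com", "92121", "49.99"])
    writer.writerow(["sam@example.com", "10001", "120.00"])

# Confirm the file does not start with a UTF-8 BOM (\xef\xbb\xbf).
with open("sample_import.csv", "rb") as f:
    assert f.read(3) != b"\xef\xbb\xbf"
```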

Navigate to Sources > Data Sources and click + Add Data Source. Complete the following steps to set up a file import data source:

  1. Add a File Import Data Source
  2. Upload a Sample File (Optional)
  3. Configure Column Mappings
  4. Configure a File Service
  5. Summary

1. Add a File Import Data Source

Complete the following steps to add a file import data source:

  1. In the sidebar, select Sources > Data Sources.
  2. Click + Add Data Source.
  3. Under Categories, click File Import and select the File Import platform.
  4. In the Name field, enter a unique name related to the file type and click Continue.

2. Upload a Sample File (Optional)

After you have added a file import data source, you upload the sample CSV file. During this upload, column names in the file are automatically detected and pre-configured on the column mappings screen. This step is optional and can be skipped by clicking Continue to go directly to the column mappings screen.

Complete the following steps to upload a sample file:

  1. Select a file.
    • Click [Choose a file...], select the file, and click Open, or
    • Drag and Drop a file directly onto the upload area.
      The Sample File preview screen displays the file contents in table format.
  2. Scroll through the sample file to verify that the columns and data look correct.
    The detected column names are listed in CSV format below the table and are pre-populated on the column mapping screen.
  3. If there are problems with the sample file, click Remove and try another file.
  4. Click Continue.

3. Configure Column Mappings

The Mapping screen is used to indicate the type of data being imported and how to map the CSV columns to event attributes.

Event Specification

Each row of the CSV file is processed as an event. Use the drop-down menu to select the specification that matches the data in the file, or select Custom. This selection determines the value of the tealium_event attribute during the import process. If Custom is selected, tealium_event is set to imported; otherwise, it is set to the corresponding specification name, such as purchase.

Changing the selected specification resets the column mapping table.

Column Mapping Table

The column mapping table contains the following columns:

  • Column Label from CSV
    The column names from the CSV (pre-populated from the sample file).
  • Date Formatter
    A date formatter for columns that contain date/time values.
  • Event Attribute
    The event attribute to receive the data from that column.
  • Sample 1 – Sample 3
    Values from the sample file.

Complete the following steps to map each column to an event attribute:

  1. Enter or select a column name from the file (if not pre-selected).
  2. Click the drop-down list in each Event Attribute column and select the event attribute to map to that column. Each CSV column heading must be unique.
  3. Columns with date/time values must have a matching date format setting. Click the checkbox in the date column next to any column for which you need to customize the timestamp format. The default value is yyyy-MM-dd.
    An interactive menu displays that allows you to customize the timestamp format for that column.
  4. Repeat the column mapping and date formatting steps for each column.
    If a column does not need to be mapped, click the red X to remove that column from the list.
  5. (Optional) Click the Enable Visitor ID Mapping in AudienceStream checkbox.
  6. Click Continue to advance to the Service Configuration tab.
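
The default date pattern yyyy-MM-dd is a Java-style format string; in Python it corresponds to %Y-%m-%d. Checking your date columns locally before upload can catch format mismatches early. A minimal sketch:

```python
from datetime import datetime

def matches_date_format(value, fmt="%Y-%m-%d"):
    """Return True if value parses under the given strptime format."""
    try:
        datetime.strptime(value, fmt)
        return True
    except ValueError:
        return False

assert matches_date_format("2024-03-15")
assert not matches_date_format("03/15/2024")
```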

4. Configure a File Transfer Service

The file transfer service is a secure location to which you upload your files for Tealium to retrieve them. If you are using your own file transfer service, ensure that you have the connection details ready before proceeding.

Complete the following steps to add and configure your file transfer service:

  1. Select a service from the Choose File Service drop-down list.
  2. (Optional) If you do not have a file service set up or want to add a new service, click + Add New File Service. If you have an existing configuration, select the configuration and skip to step 3.
    To set up a new file service configuration, complete the following steps:
    1. In the Configure File Service screen, enter a name in the File Service Name field.
    2. From the Service drop-down list, select a supported file service from the list.
    3. For services other than My SFTP Connection, enter the credentials necessary to access the selected service. If you selected Tealium S3 Bucket, the necessary credentials are generated for you automatically.
    4. If you selected My SFTP Connection, choose an Authentication Method.
      In addition to Password authentication, the Upload Private Key File and Generate Key Pair methods are supported.
      • If you selected Password, enter the password and click Test File Source Connection.
      • If you selected Upload Private Key File, choose a file from the list and click Test File Source Connection.
      • If you selected Generate Key Pair, click Generate and click Test File Source Connection.
    5. Select a Region.
    6. Click Save to save this service for future use.
  3. In the Service Configuration screen, enter the File Name Prefix, such as ordercompleted. The prefix of the filename is used to identify which file import data source to use when importing the data in the file. In this example, the prefix is used to create a CSV file titled ordercompleted_VERSION.csv.
  4. Use the slider to Enable or Disable Service Processing for File Import.
    • When set to ON, the file service is checked and files are processed every 10 minutes.
    • Set to OFF if you are not ready to start processing files. You can enable the service later.
  5. Click Continue to advance to the Summary tab.

5. Summary

In this final step, you view the summary, make any needed corrections, and then save and publish your profile.

  1. View the Event Attribute Mappings.
  2. To make changes, click Previous twice to return to the Mapping tab to update.
  3. Click Finish to exit the configuration dialog.
    Your new data source now displays in the list of data sources.
  4. Click Save/Publish to save and publish your changes.

Uploading a File to a File Service

To upload a file to a file service, a third-party application is required to initiate the upload. Though you may use any client for this purpose, Tealium recommends Cyberduck because it's free and supports FTP and Amazon S3.

When multiple files are uploaded at the same time using SFTP or S3, the files are processed in alphabetical order based on the file name.

Using Cyberduck to Upload a File via FTP or Amazon S3

  1. Launch Cyberduck.
  2. Create a new connection and give it a title.
  3. Select the file transfer service used in your File Transfer Service configuration.
  4. Enter the credentials (username and password) for your service.
    • My FTP Connection and My SFTP Connection
    • Tealium S3 Bucket: The credentials are pre-populated as an Access Key (username) and Secret Key (password).
  5. Provide other details, such as the server, path, and port, required by your service. To learn more about accessing third-party S3 buckets using Cyberduck, see the Amazon S3 article in the Cyberduck documentation.
  6. Save the connection.

You are now ready to upload files by dragging and dropping your CSV files into Cyberduck. If this is a new S3 bucket (an empty bucket), see Uploading a File to an Empty S3 Bucket for instructions on uploading your first file.

When using SFTP, the file must be located in the root folder. Files cannot be located in the sub-folders for an SFTP connection.

Using the Amazon Command Line Interface to Upload and Manage Files

To install the Amazon Command Line Interface (CLI), see Installing, updating, and uninstalling the AWS CLI in the Amazon documentation. For information on configuring AWS CLI, see Configuration Basics.

When you call aws configure, you are prompted for your Access Key ID and Secret Access Key (you can leave the Default region name and Default output format blank).

After you configure the CLI, you can make queries using the s3api method, as shown in the following CLI examples:

List all Objects in the Root Folder

aws s3 ls s3://<bucket>/

Copy a File into a Folder

aws s3 cp local_file.csv s3://<bucket>/<folder>/

Remove a File from a Folder

aws s3 rm s3://<bucket>/<folder>/<file>

File Upload Errors

When a file process failure occurs, the file is ignored. The system does not attempt to process the file again. The most common reasons for failure are:

  • The CSV file is improperly formatted and is therefore not a valid CSV file
  • The CSV file is not in UTF-8 format, or includes a BOM (files must be UTF-8 encoded with no BOM)
  • Column names used in the file definition do not exist in the file
  • A column name is used more than once in the File Service configuration
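
Most of these failure causes can be checked locally before you upload. The following sketch covers the BOM, duplicate-column, and missing-column checks; the file name and column names are assumptions, and a full CSV-validity check would need more than this:

```python
import csv

def validate_csv(path, mapped_columns):
    """Check a file against the common failure causes listed above.
    Returns a list of problems; an empty list means the checks passed.
    mapped_columns should match your column mapping configuration."""
    problems = []
    with open(path, "rb") as f:
        if f.read(3) == b"\xef\xbb\xbf":
            problems.append("file starts with a UTF-8 BOM")
    with open(path, newline="", encoding="utf-8") as f:
        try:
            header = next(csv.reader(f))
        except StopIteration:
            problems.append("file is empty")
            return problems
    if len(header) != len(set(header)):
        problems.append("duplicate column names in the header")
    missing = sorted(set(mapped_columns) - set(header))
    if missing:
        problems.append("mapped columns missing from the file: %s"
                        % ", ".join(missing))
    return problems

# Example: a well-formed file passes all checks.
with open("check_me.csv", "w", newline="", encoding="utf-8") as f:
    f.write("email,postalCode\njane@example.com,92121\n")
print(validate_csv("check_me.csv", ["email", "postalCode"]))  # []
```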

Uploading a File to an Empty S3 Bucket

When an S3 bucket is first created, it's empty. If you try to access an empty S3 bucket, the following message may be displayed:

Failure to read attributes of ACCOUNT-PROFILE

Before uploading any CSV files, use the following aws s3api command to upload a file into the empty bucket:

aws s3api put-object --bucket <bucket> --key <key> --body <body>
  • The bucket value is the region domain in the format
  • The key value specifies the filename you want to assign to the file, including the file prefix. 
  • The body value specifies the file location on the local system.

For example:

aws s3api put-object --bucket \
--key bulk-downloader/ACCOUNT-PROFILE/test_fileimp_01.csv \
--body ./test_fileimp_01.csv

For more information, see AWS Command Line Interface (CLI): How to Connect to Your S3 Bucket and Other Common Commands.

Check Import Status

After the file import data source is set up and has begun importing files, you can view the import activity from the Status tab by expanding the data source in the Data Sources Dashboard. This tab shows a rolling month report, with details about how many rows were processed, with and without errors, in graphical format. A tabular view displays directly underneath the graph.

You can also see events imported from files on the Live Events screen by navigating to Live Events and selecting the File Import data source you want to view from the drop-down list.

If an error occurs, hovering over the red area of a bar in the graphical display shows a breakdown of the errors. Tooltips show common errors with a link to a detailed report of all errors, hosted in S3. Hover over any bar in the display to view details about the number of rows processed.

If AudienceStream fails to copy a file using the file transfer service, it will retry in 10 minutes.