This article describes how to import CSV files as a data source, allowing offline data to become enriched and actionable within the Customer Data Hub. To be successfully imported, the CSV files must be formatted correctly. For information on the file format, see Preparing a CSV File for Import.
The File Import feature provides the ability to import a CSV file to supplement online visitor profiles with valuable offline data. Using this feature, you can import a CSV file using a file transfer service, such as an Amazon S3 bucket. The Tealium server connects to the file service, reads the file, and ingests the data. Each imported row is processed as an event. After the data is ingested, it can then be enriched, stitched to existing visitor profiles, and sent to other vendors.
There are two steps in setting up a file import as a data source: configuring the column mapping and assigning a file transfer service.
The column mapping configuration determines the event attributes that correspond to each column in the CSV file. The column names are often different from the attribute names in the Customer Data Hub, so this mapping ensures that the data is imported properly. For example, a CSV file might have a column named postalCode, but the matching event attribute is named customer_zip, so a column mapping is needed to associate the two.
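To illustrate, consider a file with the following hypothetical layout (the column and attribute names here are examples, not requirements):

```csv
email,postalCode
jane@example.com,92101
```

The postalCode column would be mapped to the customer_zip event attribute, and the email column might be mapped to an attribute such as customer_email.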
Mappings can be configured based on an existing event specification or as a custom event.
When a file import uses an event specification mapping, the event attributes are pre-selected and you specify the CSV column name that corresponds to each attribute. Each row is processed as an event of the selected specification, for example tealium_event = "purchase".
When a file import uses a custom event mapping, you specify the event attribute that corresponds to each CSV column. Each row is processed as an event with the following event identifier:
tealium_event = "imported"
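With a custom mapping like the one described earlier (postalCode mapped to customer_zip), each row would be ingested as an event resembling the following sketch; the attribute names other than tealium_event are illustrative:

```json
{
  "tealium_event": "imported",
  "customer_email": "jane@example.com",
  "customer_zip": "92101"
}
```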
The file transfer service is a secure location where you upload the files to be imported. The following file transfer services are supported:
Tealium uses VPC endpoints to access Tealium S3 buckets directly through the AWS network. Use IAM credentials to allow Tealium to access your own bucket via our VPC Internet Gateway.
If you use your own file transfer service, be sure to have the connection details ready before proceeding.
After a file transfer service is assigned to a file import data source, you upload files to the service. The system then uses the following order of operations:
Grouping rows with the same visitor ID increases the speed of the import.
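One way to group rows before uploading is to sort the data rows on the visitor ID column while keeping the header line in place. This is a sketch using standard tools; the file names are hypothetical and it assumes the visitor ID is in the first column:

```shell
# Stand-in import file: visitor ID (email) in column 1, rows unsorted
cat > import.csv <<'EOF'
email,postalCode
b@example.com,92101
a@example.com,10001
b@example.com,92102
EOF

# Keep the header, then group the data rows by the ID column (column 1)
head -n 1 import.csv > import_sorted.csv
tail -n +2 import.csv | sort -t, -k1,1 >> import_sorted.csv
cat import_sorted.csv
```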
Before you begin, create a sample CSV file (fewer than 1,000 rows) to use during the setup process. This sample file is used to automatically detect the column mappings.
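If your export is large, a quick way to produce a sample under the row limit is to keep just the header and the first rows. A sketch with hypothetical file names (the first command only fabricates a stand-in export for illustration):

```shell
# Fabricate a stand-in export file: header + 5,000 data rows
{ echo "email,postalCode,order_total"; seq 5000 | sed 's/.*/user&@example.com,92101,10.00/'; } > full_export.csv

# Keep the header plus the first 999 data rows (under the 1,000-row limit)
head -n 1000 full_export.csv > sample_export.csv
wc -l sample_export.csv
```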
Navigate to Sources > Data Sources and click + Add Data Source. Complete the following steps to add a file import data source:
After you have added a file import data source, you upload the sample CSV file. During this upload, column names in the file are automatically detected and pre-configured on the column mappings screen. This step is optional and can be skipped by clicking Continue to go directly to the column mappings screen.
Complete the following steps to upload a sample file:
The Mapping screen is used to indicate the type of data being imported and how to map the CSV columns to event attributes.
Each row of the CSV file is processed as an event. Use the drop-down menu to select the specification that matches the data in the file; otherwise, select Custom. This selection determines the value of the tealium_event attribute during the import process. If Custom is selected, tealium_event is set to imported; otherwise, it is set to the corresponding specification name, such as purchase.
Changing the selected specification resets the column mapping table.
The column mapping table contains the following columns:
Complete the following steps to map each column to an event attribute:
yyyy-MM-dd.

The file transfer service is a secure location to which you upload your files for Tealium to retrieve. If you are using your own file transfer service, ensure that you have the connection details ready before proceeding.
Complete the following steps to add and configure your file transfer service:
ordercompleted_VERSION.csv.

In this final step, you view the summary, make any needed corrections, and then save and publish your profile.
To upload a file to a file service, a third-party application is required to initiate the upload. Though you may use any client for this purpose, Tealium recommends Cyberduck because it's free and supports FTP and Amazon S3.
When multiple files are uploaded at the same time using SFTP or S3, the files are processed in alphabetical order based on the file name.
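Because processing order is alphabetical, a sortable date or version suffix keeps batches in chronological order. A quick illustration with hypothetical file names:

```shell
# Alphabetical order matches chronological order when the
# suffix is a sortable date (yyyy-MM-dd)
printf '%s\n' \
  ordercompleted_2024-01-02.csv \
  ordercompleted_2024-01-01.csv \
  ordercompleted_2024-01-03.csv | sort
# → ordercompleted_2024-01-01.csv
#   ordercompleted_2024-01-02.csv
#   ordercompleted_2024-01-03.csv
```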
You are now ready to upload files by dragging and dropping your CSV files into Cyberduck. If this is a new S3 bucket (an empty bucket), see Uploading a File to an Empty S3 Bucket for instructions on uploading your first file.
When using SFTP, the file must be located in the root folder. Files cannot be located in the sub-folders for an SFTP connection.
To install the Amazon Command Line Interface (CLI), see Installing, updating, and uninstalling the AWS CLI in the Amazon documentation. For information on configuring AWS CLI, see Configuration Basics.
When you call aws configure, you are prompted for your Access Key ID and Secret Access Key (you can leave Region Name and Output Format blank).
After you configure the CLI, you can manage files using aws s3 commands, as shown in the following examples:
# List files in the import location
aws s3 ls s3://collect-REGION.tealium.com/bulk-downloader/ACCOUNT-PROFILE/
# Upload a local CSV file
aws s3 cp local_file.csv s3://collect-REGION.tealium.com/bulk-downloader/ACCOUNT-PROFILE/
# Delete a previously uploaded file
aws s3 rm s3://collect-REGION.tealium.com/bulk-downloader/ACCOUNT-PROFILE/local_file.csv
When a file process failure occurs, the file is ignored. The system does not attempt to process the file again. The most common reasons for failure are:
When an S3 bucket is first created, it's empty. If you try to access an empty S3 bucket, the following message may be displayed:
Failure to read attributes of ACCOUNT-PROFILE
Before uploading any CSV files, use the following aws s3api
command to upload a file into the empty bucket:
aws s3api put-object --bucket <bucket> --key <key> --body <body>
- The bucket value is the region domain in the format collect-REGION.tealium.com.
- The key value specifies the filename you want to assign to the file, including the file prefix.
- The body value specifies the file location on the local system.

For example:
aws s3api put-object --bucket collect-us-east-1.tealium.com \
--key bulk-downloader/ACCOUNT-PROFILE/test_fileimp_01.csv \
--body ./test_fileimp_01.csv
For more information, see AWS Command Line Interface (CLI): How to Connect to Your S3 Bucket and Other Common Commands.
After the file import data source is set up and has begun importing files, you can view the import activity from the Status tab by expanding the data source in the Data Sources Dashboard. This tab shows a rolling month report, with details about how many rows were processed, with and without errors, in graphical format. A tabular view displays directly underneath the graph.
You can also see events imported from files on the Live Events screen by navigating to Live Events and selecting the File Import data source you want to view from the drop-down list.
If an error occurs, hover over the red area of a bar in the graphical display to view a breakdown of the errors. Tooltips show common errors with a link to a detailed report of all errors hosted in S3. Hover over any bar in the display to view details about the number of rows processed.
If AudienceStream fails to copy a file using the file transfer service, it will retry in 10 minutes.