The use of omnichannel imports requires that your account be enabled for AudienceStream. The following topics will help you prepare your files for a successful import.
The best data to import helps create a fuller, richer visitor profile of your customers. To get the most out of an omnichannel import, each row of data should include a visitor identifier. This ensures that the corresponding visitor within the Universal Data Hub is enriched.
When multiple files are uploaded at the same time using SFTP or S3, the files are processed in the order of the upload timestamp.
The following data sets are recommended for use with omnichannel imports:
To ensure a successful import, it is critical to understand the expected file format. Omnichannel supports CSV (comma-separated values) files, where the first line of the file must be a header line that names the columns. Each subsequent line represents an event or a visitor record and must contain at least one visitor identifier attribute.
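As a minimal sketch of the expected format (the column names here are hypothetical), a valid import file can be created from the shell like this:

```shell
# Create a minimal, hypothetical import file: a header line naming the
# columns, followed by one record per line, each carrying a visitor
# identifier (email_address in this sketch).
cat > example-import.csv <<'EOF'
email_address,order_id,order_total
jane@example.com,1001,49.99
john@example.com,1002,12.50
EOF

wc -l < example-import.csv   # 3 lines: 1 header + 2 records
```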
Column names may not contain "#", "^", or whitespace characters.
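A quick way to catch disallowed characters before uploading is to grep the header line. This sketch deliberately uses a header containing a space (the file and column names are made up):

```shell
# "order total" contains whitespace, which violates the column-name rules.
printf 'email_address,order_id,order total\n' > bad-header.csv

# Flag any header column that contains "#", "^", or whitespace.
if head -n 1 bad-header.csv | grep -q '[#^[:space:]]'; then
  echo "invalid column name found"
fi
```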
Files can be compressed into zip files to minimize upload time. The system automatically detects and handles zip files.
Your CSV files must be named using the following format:
This format consists of the following two (2) parts:
| Part | Description |
|---|---|
| Prefix | A unique identifier for groups of files that share the same CSV column names. |
| Suffix | A unique identifier for a file within a prefix, usually a timestamp and (optionally) a version number. |
The prefix of one set of files should not begin with the prefix of another set of files. For example, maintaining the prefixes store-transactions and store-transactions-returns will cause unexpected results, because store-transactions-returns also begins with the prefix "store-transactions".
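As a sketch, assuming a prefix of store-transactions with date-stamp suffixes (the file names here are hypothetical):

```shell
# One prefix, unique date suffixes: these files group together safely.
touch store-transactions-20240101.csv store-transactions-20240102.csv
ls store-transactions-*.csv

# By contrast, a file named "store-transactions-returns-20240101.csv"
# would collide, because its name also begins with "store-transactions".
```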
You can import up to 1,000,000 rows of data per day. The maximum number of rows per file is 100,000. Therefore, you can import the maximum amount of data by creating 10 files each with 100,000 rows for a total of 1,000,000 rows.
Smaller files, no bigger than 50MB, provide optimal import performance.
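Both limits can be checked from the shell before uploading. This sketch counts data rows (excluding the header) and bytes for a small made-up sample file:

```shell
# Build a small sample file, then check it against the per-file limits:
# at most 100,000 rows and roughly 50 MB per file.
printf 'order_id,total\n1,9.99\n2,4.50\n' > sample.csv

rows=$(($(wc -l < sample.csv) - 1))   # data rows, header excluded
bytes=$(wc -c < sample.csv)

if [ "$rows" -le 100000 ] && [ "$bytes" -le $((50 * 1024 * 1024)) ]; then
  echo "sample.csv is within limits ($rows rows, $bytes bytes)"
fi
```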
If a CSV file is larger than 100,000 rows, it must be split into smaller files, each containing the header line from the original. In this example, a file named master_purchases.csv has 325,000 rows of purchase data and is split into smaller files, each with at most 100,000 rows.
The split command creates the smaller files with a lexically ordered single-character suffix using the characters a-z.
```shell
# Skip the header row so it isn't duplicated in the first chunk.
$ tail -n +2 master_purchases.csv | split -l 100000 -a 1 - purchases-
$ ls
master_purchases.csv  purchases-a  purchases-b  purchases-c  purchases-d
```
In this example, the resulting four (4) new files contain the following number of rows to total 325,000:
| File Number | File Name | Number of Rows |
|---|---|---|
| 1 | purchases-a | 100,000 |
| 2 | purchases-b | 100,000 |
| 3 | purchases-c | 100,000 |
| 4 | purchases-d | 25,000 |
| Total Number of Rows | | 325,000 |
The following head and cat commands add the header line from the original file to each of the smaller files.
```shell
$ for file in purchases-*
> do
>   head -n 1 master_purchases.csv > tmp
>   cat "$file" >> tmp
>   mv -f tmp "$file"
> done
```
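The whole split-and-reheader workflow can be rehearsed at a small scale. This self-contained sketch uses a 10-row file and 4-row chunks (the file and column names are made up):

```shell
# Generate a small master file: a header plus 10 data rows.
printf 'order_id,total\n' > master.csv
i=1
while [ "$i" -le 10 ]; do printf '%s,9.99\n' "$i" >> master.csv; i=$((i + 1)); done

# Split the data rows (header excluded) into chunks of 4 rows each,
# then prepend the original header line to every chunk.
tail -n +2 master.csv | split -l 4 -a 1 - part-
for file in part-*; do
  head -n 1 master.csv > tmp
  cat "$file" >> tmp
  mv -f tmp "$file"
done

head -n 1 part-c   # every chunk now starts with the header line
```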
The data within your files should follow these guidelines to ensure a successful import:
| Valid Values | Invalid Values |
Once your files are ready, proceed with configuring a file import or an omnichannel import.