01-13-2017 08:07 PM
@aniket_mane, Hey Aniket, I'm glad you continued this thread, as my own project work has looped me back into this offline data load again as well.
I just heard back from my account manager asking me to throttle my flow to no more than 100 requests/second, so roughly 1M requests every 3 hours. Not sure how that works for you, or if your account manager will come back with something different - but there you have it.
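In case it's useful, here's the kind of throttle I'm planning to put in front of my loader. It's only a sketch in Python: `send_event` is a stand-in for whatever actually posts a single row (a vdata call, an omnichannel row, etc.), and the 100/second figure is just the number my account manager gave me.

```python
import time

MAX_PER_SEC = 100  # the cap my account manager asked me to stay under

def send_throttled(events, send_event):
    """Send events while staying under MAX_PER_SEC requests per second.

    send_event is a placeholder for whatever actually posts a single
    row (vdata call, omnichannel row, etc.).
    """
    window_start = time.time()
    sent_in_window = 0
    for event in events:
        if sent_in_window >= MAX_PER_SEC:
            # sleep out whatever is left of the current one-second window
            elapsed = time.time() - window_start
            if elapsed < 1.0:
                time.sleep(1.0 - elapsed)
            window_start = time.time()
            sent_in_window = 0
        send_event(event)
        sent_in_window += 1
```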
As a possibly related note: I was trying to upload somewhat large omnichannel files (3M+ rows) and they were only getting partially processed, but with no errors recorded. I have a suspicion it was due to their processing limits.
@dan_george Any chance you guys are looking into scaling up your processing? I'm not sure how many of your customers are intending to start loading omnichannel data soon - but you've got 2 participants on this thread alone who are looking to push your current limits.
Last - Aniket - remember that each server call to AudienceStream generally counts toward your contracted server calls/month. Or at least they do for me, so optimizing when and how data is sent may be useful to avoid overages, etc.
01-14-2017 04:37 PM - edited 01-14-2017 04:41 PM
These are questions better handled by your deployment manager or account manager. They can escalate following our internal processes.
This is a rhetorical question though: what is the need to import millions of rows of data with Omnichannel? I say this because typically only the initial import requires that many rows of data; moving forward, only the delta of updates should be uploaded. Your digital strategist can help guide you on the right method to ensure you're only uploading NEW updates, which will also help with volume. I don't doubt that SHC may have a large volume of data, just something to think about.
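To make the delta idea a bit more concrete, here's a rough sketch of one way to trim a daily extract down to changed rows before uploading. The file names and the `visitor_id` key column are only placeholders; your digital strategist can help map this to your actual extracts.

```python
import csv

def load_rows(path, key="visitor_id"):
    """Index a CSV extract by its visitor key column."""
    with open(path, newline="") as f:
        return {row[key]: row for row in csv.DictReader(f)}

# 'yesterday.csv' / 'today.csv' are placeholder names for two daily extracts
previous = load_rows("yesterday.csv")
current = load_rows("today.csv")

# keep only rows that are new or whose attributes actually changed
delta = [row for vid, row in current.items() if previous.get(vid) != row]
print(f"{len(delta)} of {len(current)} rows actually need to be uploaded")
```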
Lastly, the vdata import being referenced leverages the same backbone that our Tealium Collect tag uses, so it's pretty stout. It's more easily scalable than our Omnichannel process, so that's something to keep in mind.
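As a rough illustration of what a single vdata enrichment call looks like (the account, profile, and attribute values below are placeholders, and you should confirm the exact endpoint and tealium_* parameter names against the vdata / Data Layer Enrichment documentation for your account):

```python
import requests  # assumes the 'requests' package is installed

# Account, profile, and attribute values are placeholders. Verify the exact
# endpoint and parameter names against your account's documentation.
params = {
    "tealium_account": "your_account",
    "tealium_profile": "main",
    "tealium_vid": "customer-12345",       # the visitor ID to enrich
    "offline_purchase_total": "149.99",    # example offline attribute
}
resp = requests.get("https://collect.tealiumiq.com/vdata/i.gif", params=params)
resp.raise_for_status()
```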
Cheers,
-Dan
01-16-2017 07:26 AM
@dan_george Thx for the response. And I sort of agree :) We get significant data signals for easily millions of people/day - and while some signals are more significant than others, it's still going to be somewhat of a balancing game to determine what data gets sent, and how often. Depending on how strongly we rely on AS for marketing activation with various vendors, we'll want as real-time as we can get, which means in some cases sending a particular visitor's data (or updated data) multiple times a day.
As an example: once for a scoring change from batch scoring based on yesterday's activity, then another for some kind of offline interaction in the morning, then another because of a non-online purchase in the afternoon.
I don't think the current throttle of 100 events/sec will be limiting in any way for us in the short term. But I just wanted to make sure you guys are able to scale relatively quickly if all of a sudden you had 5, 10, 20, or 40 more clients with significant volume loading millions of events/day (based on the principle that for every instance of something you hear about, there are likely 8 or 10 instances you don't).
01-17-2017 10:12 AM - last edited on 01-17-2017 04:36 PM by kathleen_jo
Hello @Michael_Kim_shc
Thank you for letting me know. Our account manager told us about the contracted server calls/month, and we were planning to send 2 million every 3 hours via vdata enrichment. Also, thank you for letting us know about the 100 req/sec limit. Currently we are working on an algorithm to have a lever or logic to limit the number of events that we send every 3 hours, so that we don't go over our contracted server calls and also don't blindly send everything to AS.
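Very rough sketch of the kind of lever we're thinking about, in case it helps anyone else. Each event is assumed to carry a priority score we assign upstream, and the per-window budget number below is made up; we'd tune it against our contracted server calls.

```python
WINDOW_BUDGET = 250000  # made-up cap per 3-hour window; tune against your contract

def select_events(candidate_events, budget=WINDOW_BUDGET):
    """Pick the highest-value events for this window instead of sending blindly.

    Each event is assumed to be a dict carrying a 'priority' score assigned
    upstream (e.g. purchase > score change > minor signal).
    """
    ranked = sorted(candidate_events, key=lambda e: e["priority"], reverse=True)
    return ranked[:budget]
```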
Will keep you posted on how it goes from our side.
Thanks,
Aniket