I made a Ruby script that uses browser automation to pull in our utag.*.js files, which we can then commit to our local source control. It does this by creating a small HTML snippet with the loader and then, from there, making requests to the correct utag files. It expects a load rule that allows this script to access all tags; if that load rule is not present, the utag.*.js files may or may not be pulled in, depending on the constraining load rules. When run as a job on a regular basis, it can be used to detect diffs and check in only when there has been a publish.
I would like to open source this on GitHub if it would be useful to others. Is it okay to do so? Any advice?
Hi Sriram, this sounds like a useful tool. I won't comment on open sourcing it, but I do have one possible suggestion for enhancement: you can configure Tealium to create a ZIP file of all the active tags. That would save you having to work out which numbered files to capture.
You can activate this from the Publish Settings (on the Save / Publish popup dialog).
Then you could download the zip file, unzip it locally, and check all the files into git if anything has changed.
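That download-unzip-commit loop could be scripted in Ruby along these lines. This is only a sketch: it assumes the ZIP has already been fetched to a known path, shells out to `unzip` and `git`, and uses `git status --porcelain` to decide whether anything changed. The function names are mine, not from Sriram's script.

```ruby
require "open3"

# Decide whether a commit is needed from `git status --porcelain` output.
# Porcelain output is empty when the working tree is clean.
def changes_present?(porcelain_output)
  !porcelain_output.strip.empty?
end

# Hypothetical workflow: unzip the downloaded archive over the checkout,
# then commit only if git reports modified or new files.
def check_in_if_changed(zip_path, repo_dir)
  system("unzip", "-o", zip_path, "-d", repo_dir) or raise "unzip failed"
  out, _status = Open3.capture2("git", "-C", repo_dir, "status", "--porcelain")
  return unless changes_present?(out)

  system("git", "-C", repo_dir, "add", "-A")
  system("git", "-C", repo_dir, "commit", "-m", "Sync published Tealium tags")
end
```

Run as a cron job, this would produce one commit per publish and no commits otherwise.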
The script is agnostic of the individual utag.*.js files. It creates an HTML page containing only the utag.js path for each configured environment and does a GET. From the response it extracts the utag.*.js source paths and does a GET on each of them.
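For anyone curious, the fetch-and-extract step can be sketched roughly like this. The account/profile path and the regex are illustrative, not the exact ones my script uses, and this plain-HTTP version skips the browser-automation part:

```ruby
require "net/http"
require "uri"

# Fetch the published loader for one environment. ACCOUNT and PROFILE
# are placeholders for the real Tealium account/profile values.
def fetch_utag(env)
  uri = URI("https://tags.tiqcdn.com/utag/ACCOUNT/PROFILE/#{env}/utag.js")
  Net::HTTP.get(uri)
end

# Pull the numbered tag template paths (utag.1.js, utag.2.js, ...) out of
# the loader source with a simple pattern match.
def extract_tag_paths(utag_source)
  utag_source.scan(%r{[\w./-]*utag\.\d+\.js}).uniq
end
```

Each extracted path then gets its own GET, mirroring what the loader itself would do in a browser.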
Based on a config file, it tries to guess which tag each utag.*.js file is for and renames it accordingly, e.g. utag_SiteCatalyst.js. The guess is based on the tag's CDN URL. If it can't make a guess, it just names the files utag_0.js, utag_1.js, etc. I suppose when that happens we lose the original numbering.
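The renaming step might look something like this. The CDN-pattern map is a made-up example of the kind of config the script reads; the real patterns are site-specific:

```ruby
# Map of substrings found in a tag's CDN URL to a friendly tag name.
# These two entries are illustrative examples only.
TAG_PATTERNS = {
  "omtrdc"           => "SiteCatalyst",
  "google-analytics" => "GoogleAnalytics",
}.freeze

# Guess a friendly filename from the tag's CDN URL; fall back to a
# numeric name when no pattern matches (losing the original numbering).
def friendly_name(cdn_url, index)
  TAG_PATTERNS.each do |pattern, name|
    return "utag_#{name}.js" if cdn_url.include?(pattern)
  end
  "utag_#{index}.js"
end
```

So a tag served from an omtrdc.net URL would be checked in as utag_SiteCatalyst.js, while an unrecognized CDN falls back to utag_0.js, utag_1.js, and so on.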
Basically it behaves just like our site does, in a stripped-down manner.
We have instrumented our load rules so the request can run against all tags.
Hope I am not missing anything.