As a member of the Data Science and Analytics team at Publicis Sapient, my work centers on digital transformation. As strategic partners to our clients, we work to transform and accelerate digital businesses worldwide. Many tools and platforms aid in that work, a key one being Tealium. I've been a certified Tealium practitioner for the past three years, creating end-to-end use cases for a diverse group of clients across telco, pharmaceutical, and retail. Having had the opportunity to ideate on many types of use cases, it stands to reason that I'd be a big proponent of having a solid process for standing up new audiences.
When you first start building attributes and audiences in AudienceStream, you'll find that the process is pretty straightforward. That said, as you continue to build out new audiences and have more team members working in the account, you may quickly find you have some unintended conflicts in audience assignment.
It's common for teams to run into issues with unintended overlap or conflicting logic that can impact the success of a use case. This post will focus on how to create a solid test plan to ensure a successful launch of your AudienceStream solution.
One method my team uses to maintain an AudienceStream account is to develop QA test plans prior to going live with any new audience. These plans allow us to test our "join audience" logic and "leave audience" logic, and to see how a new use case might overlap or compete with another audience in ways we did not anticipate.
Draw It Out
Even before creating your QA test plan, one step that I've found particularly important to any use case is to whiteboard your audience as a process flow diagram. A diagram breaks down the logic of a use case as a series of decision points and outcomes. Having a visual representation of your logic helps to validate that your use case is fundamentally sound.
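To make this concrete, here is a minimal sketch of how whiteboarded decision points might read once you walk them back through code. This is purely illustrative (Python rather than anything AudienceStream-specific), and the attribute names and thresholds (`cart_value`, `has_purchased`, the 100 threshold) are invented for the example:

```python
# Hypothetical decision points for an "abandoned high-value cart" audience.
# Attribute names and thresholds are invented for illustration only.

def joins_audience(visitor: dict) -> bool:
    """Walk the same decision points you'd draw on the whiteboard."""
    if visitor.get("cart_value", 0) < 100:       # Decision 1: is the cart worth targeting?
        return False
    if visitor.get("has_purchased", False):      # Decision 2: has the visitor already converted?
        return False
    return True                                  # Outcome: assign the audience

def leaves_audience(visitor: dict) -> bool:
    """The leave logic should mirror the diagram, not just negate the join logic."""
    return visitor.get("has_purchased", False) or visitor.get("cart_value", 0) == 0

# Example: this visitor should join the audience but not yet leave it.
print(joins_audience({"cart_value": 150}), leaves_audience({"cart_value": 150}))
```

If you can't express a branch of the diagram this cleanly, that's usually a sign the use case itself needs more thought before you build anything in the platform.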
Create Your Test Plan
Your QA test plan should be tightly scripted; that is, the instructions should be written in such a way that a user without prior knowledge of the use case can follow them and successfully replicate the intended result.
Ideally, you will want to write a series of tests to validate the following key scenarios:
- Visitors who meet the criteria are assigned the audience (the "join audience" logic fires as intended).
- Visitors who should drop out of the audience actually do (the "leave audience" logic fires as intended).
- The new audience does not unintentionally overlap or conflict with audiences you already have in production.
What Your Test Plan Should Include
In my experience, I've found that documenting these test plans in a spreadsheet helps to track every last detail and keep everyone coordinated. And it needs to be accessible to your entire team!
I've also found that watertight QA plans should include the following elements for each scenario: the scripted steps to execute, the expected outcome, the actual result observed, a pass/fail status, and space for tester notes.
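Purely as an illustration, here is how a couple of rows of that spreadsheet might look if captured as structured data. The test IDs, scenarios, steps, and expected results below are hypothetical and would be replaced with the specifics of your own use case:

```python
# Hypothetical test plan rows; the columns mirror the elements described above.
test_plan = [
    {
        "test_id": "TC-01",
        "scenario": "Join audience on qualifying visit",
        "steps": "Visit the site, add an item over the threshold to the cart, leave without purchasing",
        "expected": "Visitor is assigned the audience (and badge) in Trace",
        "actual": "",   # filled in during the test run
        "status": "",   # Pass / Fail
        "notes": "",
        "tester": "",
    },
    {
        "test_id": "TC-02",
        "scenario": "Leave audience after purchase",
        "steps": "Complete a purchase as the same visitor",
        "expected": "Visitor is removed from the audience",
        "actual": "",
        "status": "",
        "notes": "",
        "tester": "",
    },
]
```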
What To Do When It All Goes Wrong
As you make your way through validating each scenario, don't be surprised if there are some unintended effects that cause a test to fail. In the event something goes wrong, figure out which step isn't working and write clear notes about your observations.
With my clients, this is where collaborative troubleshooting kicks into high gear. The first thing we do is confirm that other testers on the team are able to replicate the behavior. Replication is key as it will confirm that there is something wrong with the audience or attribute logic and not an issue isolated to how one person might be conducting the test.
Coordinate your team's efforts by running a Trace in AudienceStream. Using Trace to troubleshoot is great because you're able to inspect each event received and watch in real time what is happening (or not happening) with your attribute and audience assignments. For example, perhaps there's a typo in a badge rule that is causing the badge not to be assigned; this is sometimes easier to spot in Trace than on the badge screen. If you need to make a quick edit to a badge rule, save and publish, then run a new Trace to confirm that the issue is resolved.
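If your testing includes scripted or server-side events rather than clicking through the site in a browser, you can attach the trace ID to each test event so it shows up in the Trace panel. The sketch below is a rough illustration that assumes Tealium's Collect HTTP endpoint and the `tealium_trace_id` attribute; the account, profile, visitor ID, trace ID, and event payload are all placeholders to be replaced with your own values:

```python
# Minimal sketch: send a test event into AudienceStream with a trace ID attached,
# assuming Tealium's Collect HTTP endpoint and the tealium_trace_id attribute.
# All values below are placeholders.
import requests

event = {
    "tealium_account": "your-account",
    "tealium_profile": "main",
    "tealium_visitor_id": "qa-test-visitor-001",
    "tealium_trace_id": "12345",     # the ID shown when you start a Trace in the UI
    "tealium_event": "cart_add",     # hypothetical event for the use case under test
    "cart_value": 120,
}

resp = requests.post("https://collect.tealiumiq.com/event", json=event, timeout=10)
resp.raise_for_status()
print("Event accepted; check the Trace panel for attribute and audience updates.")
```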
Doing this work upfront will save you time; you are much better served resolving bugs prior to launch than scrambling to fix a faulty audience that is already in production. It also helps to make sure that everyone, from stakeholders to the implementation team, is on the same page about how the audience fulfills the business need for the client.
I'm interested to hear whether or not QA test planning is a tactic that you use—share your comments below!