This article provides detailed information about the model scoring techniques and formulas used to assign scores and ratings to deployed models in the Tealium Predict ML product.
The F1 Score is the metric typically used to evaluate the quality of the type of Machine Learning model used by Tealium Predict. The F1 Score strikes a balance between two metrics: Precision and Recall.
To calculate the F1 Score, Precision and Recall values are input into the following formula:
F1 Score = 2 * ( (Precision * Recall) / (Precision + Recall) )
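As a minimal sketch (not Tealium code), the following Python snippet applies this formula to a pair of assumed Precision and Recall values:

```python
def f1_score(precision: float, recall: float) -> float:
    """F1 Score: the harmonic mean of Precision and Recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * ((precision * recall) / (precision + recall))

# Example with assumed values: 0.80 Precision and 0.60 Recall
print(f1_score(0.80, 0.60))  # 0.6857..., pulled toward the lower of the two values
```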
As an example, assume you have two colors of apples, red and green, and your model seeks to predict which apples are red. If your model has high Precision, the model is usually correct when it predicts that an apple is red. In other words, if the model creates a list of apples that are supposedly red, high Precision means that this list is mostly accurate and that the apples on the list are actually red.
Using the same example, if your model has high Recall (Sensitivity), the model is able to identify most of the red apples. A model with high Recall does a good job of creating a thorough list of the red apples.
Using the same example of red and green apples, the following list describes the expected results based on high or low Precision or Recall:
- High Precision: most of the apples the model labels as red are actually red.
- Low Precision: many of the apples the model labels as red are actually green.
- High Recall: the model identifies most of the red apples.
- Low Recall: the model misses many of the red apples.
The ideal model clearly needs both high Precision and high Recall. The concept of potential trade-offs (between the volume of apples predicted to be red and the accuracy of those predictions) is a recurring theme that impacts how machine learning models are used in the real world.
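To make the trade-off concrete, the following sketch computes Precision and Recall for a small, made-up batch of apples (the apple data and counts are illustrative assumptions, not product output):

```python
# Actual colors and the model's predictions for ten hypothetical apples.
actual    = ["red", "red", "red", "red", "red", "red", "red", "red", "green", "green"]
predicted = ["red", "red", "red", "red", "red", "green", "green", "green", "red", "green"]

true_positives  = sum(1 for a, p in zip(actual, predicted) if a == "red" and p == "red")
false_positives = sum(1 for a, p in zip(actual, predicted) if a == "green" and p == "red")
false_negatives = sum(1 for a, p in zip(actual, predicted) if a == "red" and p == "green")

precision = true_positives / (true_positives + false_positives)  # how accurate the "red" list is
recall    = true_positives / (true_positives + false_negatives)  # how complete the "red" list is

print(f"Precision: {precision:.2f}")  # 5 / (5 + 1) = 0.83
print(f"Recall:    {recall:.2f}")     # 5 / (5 + 3) = 0.63
```

In this example, the model's list of red apples is mostly accurate (high Precision) but misses several red apples (lower Recall), which is exactly the kind of trade-off described above.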
A Confusion Matrix is a key tool used to evaluate a trained model. During the training and testing process that runs automatically when you create or retrain a model, Tealium Predict attempts to "classify" the visitors in the Training Date Range into two groups: true and false. These two groups reflect whether a visitor actually performed the behavior signaled by the Target Attribute of your model, such as making a purchase or signing up for your email list.
The Confusion Matrix allows you to easily view the accuracy of these predictions by comparing the true or false predicted value with the true or false actual value. There are four possible scenarios, as described in the quadrant descriptions below.
This comparison is made possible by the fact that the model trains on historical data (the Training Date Range). Once your model is deployed, the scenario changes. If your deployed model makes a prediction for a particular visitor today and the prediction timeframe is "in the next 10 days", results are not available for up to 10 days to determine whether the value returns as true or false.
The following list describes the four quadrants of the Confusion Matrix:
- True Positive: the model predicted true and the visitor actually performed the behavior.
- False Positive: the model predicted true, but the visitor did not perform the behavior.
- False Negative: the model predicted false, but the visitor actually performed the behavior.
- True Negative: the model predicted false and the visitor did not perform the behavior.
You can use the values of the quadrants to calculate the two constituent parts of F1 Score (Recall and Precision).
The following list describes how the values are calculated:
- Precision = True Positives / (True Positives + False Positives)
- Recall = True Positives / (True Positives + False Negatives)
The Confusion Matrix uses a threshold value of 0.5 to differentiate between predicted positive and predicted negative values.
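The following sketch shows how the four quadrant counts, and the metrics derived from them, could be computed from a set of predicted probabilities and actual outcomes using the same 0.5 threshold. The data is assumed for illustration and this is not the Tealium Predict implementation:

```python
THRESHOLD = 0.5

# Hypothetical test-subset results: (predicted probability, actual outcome)
results = [(0.91, True), (0.75, True), (0.62, False), (0.48, True),
           (0.30, False), (0.22, False), (0.85, True), (0.10, False)]

tp = sum(1 for prob, actual in results if prob >= THRESHOLD and actual)      # True Positives
fp = sum(1 for prob, actual in results if prob >= THRESHOLD and not actual)  # False Positives
fn = sum(1 for prob, actual in results if prob < THRESHOLD and actual)       # False Negatives
tn = sum(1 for prob, actual in results if prob < THRESHOLD and not actual)   # True Negatives

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * ((precision * recall) / (precision + recall))

print(f"TP={tp} FP={fp} FN={fn} TN={tn}")
print(f"Precision={precision:.2f} Recall={recall:.2f} F1={f1:.2f}")
```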
The following list provides a descriptive reference for the elements used in the Tealium Predict modeling formulas referenced in the Confusion Matrix section and other portions of this article:
In Tealium Predict, ROC/AUC (the area under the ROC curve) is a performance measurement reported for a trained model in the Model Explorer. The ROC curve is based on the True Positive Rate, calculated as the number of true positives divided by the sum of the number of true positives and the number of false negatives. The True Positive Rate describes how well a model predicts the positive class when the actual outcome is positive, and is also referred to as Sensitivity.
The Receiver Operating Characteristic (ROC) curve and the area under this curve (referred to as the AUC, for Area Under the Curve) are common tools in the machine learning community for evaluating the performance of a classification model.
The ROC curve shows the trade-offs between different thresholds and consists of a plot of True Positive Rate (y-axis) against the False Positive Rate (x-axis), as follows:
Ideally, the model can fully distinguish between the True and False classes and always predicts the correct answer.
The following example depicts a "perfect model" for Probability Distribution and the ROC curve:
For an extreme contrast, the following example depicts the Probability Distribution and ROC curve in a scenario where your model always predicts the wrong answer, labeling True as False and False as True.
The following example depicts a poor scenario, which is defined as a model that is incapable of distinguishing between True and False classes. In this scenario, the Probability Distribution displays two large curves directly on top of each other.
A realistic model has an AUC between 0.5 and 1.0 (0.5 < AUC < 1.0). Its ROC curve bows toward the upper-left corner of the plot: smaller values on the x-axis indicate fewer false positives and more true negatives, while larger values on the y-axis indicate more true positives and fewer false negatives.
When predicting a binary outcome, each prediction of the positive class is either correct (a true positive) or incorrect (a false positive).
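The following sketch traces an ROC curve by sweeping the classification threshold over a set of assumed predictions, computing the True Positive Rate and False Positive Rate at each step, and then approximating the AUC with the trapezoidal rule (the data and the approximation method are illustrative assumptions):

```python
# Hypothetical test-subset results: (predicted probability, actual outcome)
results = [(0.95, True), (0.90, True), (0.80, False), (0.70, True),
           (0.60, True), (0.50, False), (0.40, True), (0.30, False),
           (0.20, False), (0.10, False)]

positives = sum(1 for _, actual in results if actual)
negatives = len(results) - positives

# Sweep thresholds from above the highest probability down to 0.0.
points = []
for threshold in [1.01] + sorted({prob for prob, _ in results}, reverse=True) + [0.0]:
    tp = sum(1 for prob, actual in results if prob >= threshold and actual)
    fp = sum(1 for prob, actual in results if prob >= threshold and not actual)
    points.append((fp / negatives, tp / positives))  # (False Positive Rate, True Positive Rate)

# Trapezoidal-rule approximation of the Area Under the Curve.
auc = sum((x2 - x1) * (y1 + y2) / 2 for (x1, y1), (x2, y2) in zip(points, points[1:]))

print("ROC points (FPR, TPR):", points)
print(f"AUC = {auc:.2f}")
```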
You can go to the Training Details panel for any version of any trained model to view a probability distribution of the predictions made by the model during training.
The two colored curves of this chart represent the distributions of true and false predictions that the model made during training. To make this comparison possible, a portion of the training dataset is set aside as a test subset. Because the model trains on historical data and you know whether each visitor actually performed the target behavior, the model can be tested by comparing its predictions for those historical visitors against the actual outcomes.
The probability distribution compares predictions against actual values for the visitors in the test subset. Visitors who were part of the True class (did perform the behavior) are displayed as part of the teal-colored curve and visitors who were part of the False class are part of the orange-colored curve.
The following list describes the characteristics of an ideal probability distribution:
- The True (teal-colored) curve is concentrated near a prediction probability of 1.0.
- The False (orange-colored) curve is concentrated near a prediction probability of 0.0.
- The two curves are clearly separated, with little or no overlap between them.
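The following sketch shows how such a distribution could be assembled from test-subset predictions by splitting predicted probabilities into the True and False classes and bucketing them into histogram bins (the data and bin width are assumptions for illustration):

```python
from collections import Counter

# Hypothetical test-subset results: (predicted probability, actual outcome)
results = [(0.95, True), (0.88, True), (0.81, True), (0.74, True), (0.35, True),
           (0.65, False), (0.28, False), (0.22, False), (0.15, False), (0.05, False)]

def histogram(probabilities, bin_width=0.2):
    """Count predictions per probability bin: 0.0-0.2, 0.2-0.4, and so on."""
    num_bins = int(1 / bin_width)
    bins = Counter(min(int(p / bin_width), num_bins - 1) for p in probabilities)
    return {f"{i * bin_width:.1f}-{(i + 1) * bin_width:.1f}": bins.get(i, 0)
            for i in range(num_bins)}

true_curve = histogram([p for p, actual in results if actual])       # teal-colored curve
false_curve = histogram([p for p, actual in results if not actual])  # orange-colored curve

print("True class: ", true_curve)   # ideally concentrated near 1.0
print("False class:", false_curve)  # ideally concentrated near 0.0
```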