Sensitivity and Specificity
Imagine your business is about to launch a new product. In anticipation of this, the marketing department works with a team of business analysts to identify which customers might be good candidates for a promotion. A model is built, and it identifies relevant prospects based on various rules and conditions. No such model is ever 100% accurate, so it is useful to measure its ‘accuracy’ using two widely employed metrics. Before we do, we need to introduce some terminology:
- A True Positive (TP) is a test or prediction that says something is true, and turns out to be correct. So in our example, if our model says someone will buy a new product, and they do, this is a true positive. The model gave a positive prediction, and it turned out it was true.
- A False Positive (FP) is a test or prediction that says something is true, but it turns out not to be the case. In our example, this would be the case of saying that someone will buy a product, but they don’t.
- A True Negative (TN) is a test or prediction that says a proposition is false, and it does turn out to be false. If our model says a customer will not buy the new product, and they don’t, this would be a true negative.
- A False Negative (FN) is a test or prediction that says something is false, but it turns out to be true. So if our model says someone will not buy a product, but they do, this is a false negative.
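The four outcomes above can be tallied mechanically. Here is a minimal sketch in Python; the function name `confusion_counts` and the boolean encoding (True = ‘buys the product’) are my own assumptions, not part of any particular toolkit.

```python
def confusion_counts(predicted, actual):
    """Count TP, FP, TN, FN from parallel lists of booleans
    (True means 'will buy' / 'did buy')."""
    tp = fp = tn = fn = 0
    for p, a in zip(predicted, actual):
        if p and a:            # predicted buy, did buy
            tp += 1
        elif p and not a:      # predicted buy, didn't buy
            fp += 1
        elif not p and not a:  # predicted no buy, didn't buy
            tn += 1
        else:                  # predicted no buy, did buy
            fn += 1
    return tp, fp, tn, fn

# Four customers, one of each outcome:
print(confusion_counts([True, True, False, False],
                       [True, False, False, True]))  # (1, 1, 1, 1)
```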
So back to our two measures of test or model accuracy. The first is called Sensitivity, and it measures the ratio of true positives to all actual positives. Remember that some of the model’s negative predictions will be wrong (the model says certain customers won’t buy, but they do); those false negatives are actual positives the model missed. The formula for this is:
Sensitivity = TP / (TP + FN)
It tells us how accurately our test or model identifies positives.
The second measure is Specificity, and it plays the complementary role: it tells us how accurately our test or model identifies negatives. The formula for this is:
Specificity = TN / (TN + FP)
In other words, it is the fraction of all actual negatives (true negatives plus false positives) that the model correctly identified as negative.
We can put some flesh on this by using some numbers in our example. Say we have 1000 customers that we want to target with our new product. Our predictive model says that 600 will buy the product – and so it is also saying 400 will not. We launch our marketing campaign and the following facts emerge:
Out of the 600 predicted to buy, only 400 actually buy.
Out of the 400 predicted not to buy, 50 do buy.
True Positives = 400
True Negatives = 350
False Positives = 200
False Negatives = 50
Plugging the numbers in we get:
Sensitivity = 400 / (400 + 50) = 0.89
Specificity = 350 / (350 + 200) = 0.64
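The arithmetic above is easy to check with a few lines of Python, using the counts from the campaign:

```python
# Counts from the worked example
tp, tn, fp, fn = 400, 350, 200, 50

sensitivity = tp / (tp + fn)  # 400 / 450
specificity = tn / (tn + fp)  # 350 / 550

print(round(sensitivity, 2))  # 0.89
print(round(specificity, 2))  # 0.64
```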
So our model is better at identifying the positives than the negatives: it found 89% of the people who would buy, but only 64% of those who would not. These measures are widely used in medicine to characterize the effectiveness of diagnosis and treatment of conditions. They also lead to other useful constructs used in data mining and statistics.
Statisticians and others who wrap these concepts up in formulas and arcane jargon will cringe at my loose syntax here – too bad.