Definitions:
- Precision: Precision measures the accuracy of the positive predictions made by the model. It is the ratio of correctly predicted positive instances to the total number of instances predicted as positive:
  \text{Precision} = \frac{\text{True Positives (TP)}}{\text{True Positives (TP)} + \text{False Positives (FP)}}
  In simpler terms: out of all the instances the model predicted as positive (or relevant), how many were actually positive (or relevant)?
- Recall: Recall (also known as sensitivity or true positive rate) measures the ability of the model to identify all relevant (positive) instances. It is the ratio of correctly predicted positive instances to the actual total number of positive instances:
  \text{Recall} = \frac{\text{True Positives (TP)}}{\text{True Positives (TP)} + \text{False Negatives (FN)}}
  In simpler terms: out of all actual positive instances, how many were correctly predicted as positive?
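The two definitions above can be sketched as a small helper that counts TP, FP, and FN directly from binary labels. This is a minimal illustration, not a reference implementation; it assumes labels are 0/1 with 1 marking the positive class, and it returns 0.0 when a denominator is zero:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # true positives
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # false negatives
    precision = tp / (tp + fp) if (tp + fp) else 0.0  # TP / (TP + FP)
    recall = tp / (tp + fn) if (tp + fn) else 0.0     # TP / (TP + FN)
    return precision, recall

# Example: 3 actual positives; the model predicts positive 3 times, 2 correctly.
p, r = precision_recall([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

Here TP = 2, FP = 1, FN = 1, so both precision and recall come out to 2/3.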
Key Difference:
- Precision focuses on the quality of positive predictions: "When the model says something is positive, how often is it right?"
- Recall focuses on the quantity of positive instances correctly identified: "Out of all the actual positives, how many did the model find?"
Example Scenario:
- Precision would measure the proportion of emails identified as spam that are actually spam.
- Recall would measure the proportion of actual spam emails that were correctly identified by the model.
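To make the spam scenario concrete, here is a worked example with made-up counts (the numbers are purely illustrative, not from any real dataset):

```python
# Hypothetical spam-filter counts (illustrative only):
tp = 80   # spam emails correctly flagged as spam
fp = 20   # legitimate emails wrongly flagged as spam
fn = 40   # spam emails that slipped through as legitimate

precision = tp / (tp + fp)  # of 100 emails flagged as spam, 80 really were: 0.80
recall = tp / (tp + fn)     # of 120 actual spam emails, 80 were caught: ~0.67
```

A filter like this is fairly trustworthy when it flags something (high precision) but still lets a third of the spam through (lower recall), which is exactly the trade-off the two metrics separate.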