Short Definition
Recall measures how many actual positive cases a model correctly identifies.
Definition
Recall is an evaluation metric that quantifies a model’s ability to find all relevant positive instances. It is defined as the ratio of true positive predictions to all actual positive cases (true positives and false negatives).
Recall answers the question: Of all the real positive cases, how many did the model detect?
Why It Matters
High recall is critical in scenarios where missing positive cases is costly or dangerous, such as medical screening, fault detection, or security monitoring.
A model with high recall minimizes false negatives, even if it produces more false positives.
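One common way this trade-off appears in practice is through the decision threshold: lowering it flags more cases as positive, catching more true positives at the cost of more false positives. A minimal sketch with made-up scores (the variable names and values here are illustrative, not from the original):

```python
# Hypothetical model scores: predicted probability of the positive class
scores = [0.9, 0.6, 0.4, 0.2]
y_true = [1,   1,   1,   0]

def recall_at(threshold):
    """Recall over y_true when predicting positive at or above threshold."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(t == p == 1 for t, p in zip(y_true, preds))
    return tp / sum(y_true)

print(recall_at(0.5))  # detects 2 of 3 positives -> ~0.667
print(recall_at(0.3))  # detects all 3 positives -> 1.0
```

Dropping the threshold from 0.5 to 0.3 raises recall to 1.0, but in a larger sample it would also admit more false positives.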
How It Works (Conceptually)
- Consider all samples that truly belong to the positive class
- Measure how many of them were predicted as positive
- Ignore true negatives entirely; false positives affect precision, not recall
Recall focuses on coverage, not prediction purity.
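The counting steps above can be sketched as a small function (illustrative names, assuming binary labels encoded as 0/1):

```python
def recall(y_true, y_pred):
    """Recall = TP / (TP + FN): of the actual positives, how many were found."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    # True negatives never enter the calculation
    return tp / (tp + fn) if (tp + fn) else 0.0

# 4 actual positives; the model detects 3 of them
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
print(recall(y_true, y_pred))  # 0.75
```

Note that the false positive at index 4 changes nothing: only samples that are truly positive are counted.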
Mathematical Definition
Recall = True Positives / (True Positives + False Negatives)
Minimal Python Example
# Undefined (ZeroDivisionError) when there are no actual positives
recall = true_positives / (true_positives + false_negatives)
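If scikit-learn is available (an assumption, not stated above), its `recall_score` function computes the same quantity directly from label arrays:

```python
from sklearn.metrics import recall_score

# Same data as the manual calculation: 4 actual positives, 3 detected
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0]
print(recall_score(y_true, y_pred))  # 0.75
```

For multi-class problems, `recall_score` takes an `average` parameter to control per-class aggregation.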
Common Pitfalls
- Interpreting recall without considering precision
- Maximizing recall at the cost of excessive false positives
- Ignoring class imbalance effects
- Confusing recall with accuracy, which also rewards correct negative predictions
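The last two pitfalls can be illustrated together with a made-up imbalanced dataset (2 positives among 10 samples; all numbers here are hypothetical):

```python
# Hypothetical imbalanced data: 2 positives in 10 samples
y_true = [1, 1] + [0] * 8

# Model A always predicts the majority (negative) class
pred_a = [0] * 10
accuracy_a = sum(t == p for t, p in zip(y_true, pred_a)) / len(y_true)
recall_a = sum(t == p == 1 for t, p in zip(y_true, pred_a)) / sum(y_true)
print(accuracy_a, recall_a)  # 0.8 0.0 -> high accuracy, zero recall

# Model B always predicts the positive class
pred_b = [1] * 10
recall_b = sum(t == p == 1 for t, p in zip(y_true, pred_b)) / sum(y_true)
precision_b = sum(t == p == 1 for t, p in zip(y_true, pred_b)) / sum(pred_b)
print(recall_b, precision_b)  # 1.0 0.2 -> perfect recall, poor precision
```

Model A looks strong on accuracy while missing every positive case; Model B trivially maximizes recall while flooding the output with false positives. This is why recall is read alongside precision.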