Short Definition
Precision measures the fraction of predicted positive cases that are actually correct.
Definition
Precision is an evaluation metric that quantifies the accuracy of positive predictions made by a model. It is defined as the ratio of true positive predictions to all positive predictions (both true positives and false positives).
In other words, precision answers the question: When the model predicts “positive,” how often is it right?
Why It Matters
High precision is critical in situations where false positives are costly or harmful. Examples include spam filtering (where a false positive means a legitimate email is sent to the spam folder), medical diagnosis, fraud detection, and alerting systems where frequent false alarms erode trust.
A model with high precision makes few false positive predictions, even if it misses some true positive cases.
How It Works (Conceptually)
- The model produces predictions
- Predictions classified as positive are examined
- Precision evaluates the correctness of those positive predictions only
- It ignores true negatives entirely
Precision focuses on prediction reliability, not coverage.
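The steps above can be sketched in a few lines of Python. The labels and predictions below are purely illustrative (1 = positive, 0 = negative):

```python
# Hypothetical ground-truth labels and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 1, 1, 0]

# Examine only the positive predictions; true negatives are never counted.
true_positives = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
false_positives = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))

precision = true_positives / (true_positives + false_positives)
print(precision)  # 3 correct out of 5 positive predictions -> 0.6
```

Note that the missed positive at index 3 (a false negative) does not affect the result: precision measures reliability of the positive predictions, not how many positives were found.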
Mathematical Definition
Precision is defined as:
Precision = True Positives / (True Positives + False Positives)
Minimal Python Example
precision = true_positives / (true_positives + false_positives)
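One edge case worth handling: when the model makes no positive predictions, the denominator is zero. A common convention (used, for example, by scikit-learn's `precision_score` with `zero_division=0`) is to report 0.0 in that case. A minimal sketch:

```python
def precision(true_positives: int, false_positives: int) -> float:
    """Precision = TP / (TP + FP), returning 0.0 when there are
    no positive predictions at all (a common convention)."""
    predicted_positive = true_positives + false_positives
    return true_positives / predicted_positive if predicted_positive else 0.0

print(precision(90, 10))  # 0.9
print(precision(0, 0))    # 0.0 rather than a ZeroDivisionError
```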
Common Pitfalls
- Interpreting precision without considering recall
- Using precision alone on imbalanced datasets
- Assuming high precision implies good overall performance
- Ignoring the effect of decision thresholds
- Confusing precision with accuracy
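The threshold pitfall in particular is easy to demonstrate: most classifiers output scores, and precision depends on where the positive/negative cutoff is placed. A sketch with hypothetical scores and labels:

```python
# Hypothetical model scores (higher = more confident positive) and true labels.
scores = [0.95, 0.80, 0.70, 0.55, 0.40, 0.30]
labels = [1,    1,    0,    1,    0,    0]

def precision_at(threshold: float) -> float:
    """Precision when every score >= threshold is predicted positive."""
    preds = [1 if s >= threshold else 0 for s in scores]
    tp = sum(p == 1 and t == 1 for p, t in zip(preds, labels))
    fp = sum(p == 1 and t == 0 for p, t in zip(preds, labels))
    return tp / (tp + fp) if (tp + fp) else 0.0

print(precision_at(0.9))  # only the top score is flagged -> 1/1 = 1.0
print(precision_at(0.5))  # four predictions, one wrong -> 3/4 = 0.75
```

Raising the threshold here raises precision but flags fewer cases, which is exactly the precision/recall trade-off the first pitfall warns about.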