Short Definition
Accuracy measures the proportion of correct predictions.
Definition
Accuracy is an evaluation metric defined as the fraction of predictions that are correct. In binary classification, it counts both true positives and true negatives and divides by the total number of predictions.
Accuracy provides a simple summary of overall correctness.
Why It Matters
Accuracy is intuitive and easy to compute, making it a common first metric. However, it can be misleading when classes are imbalanced or when error types have different costs.
Accuracy should rarely be used in isolation.
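The imbalance caveat can be made concrete with a small sketch. The dataset below is hypothetical (95 negatives, 5 positives): a degenerate classifier that always predicts the majority class scores high accuracy while never detecting a positive case.

```python
# Hypothetical imbalanced dataset: 95 negatives, 5 positives.
y_true = [0] * 95 + [1] * 5
# Degenerate classifier: always predict the majority class.
y_pred = [0] * 100

# Overall accuracy looks strong...
accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
# ...but recall on the positive class is zero.
recall = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred)) / sum(y_true)

print(accuracy, recall)  # 0.95 0.0
```

This is why accuracy is usually reported alongside class-sensitive metrics such as precision and recall.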
How It Works (Conceptually)
- Count all correct predictions
- Divide by total predictions
- Treat all errors equally
Accuracy measures correctness, not reliability or fairness.
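The steps above can be sketched as a short function (the function name and sample labels are illustrative):

```python
def accuracy(y_true, y_pred):
    # Count all correct predictions; every error is weighted equally.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    # Divide by the total number of predictions.
    return correct / len(y_true)

print(accuracy([1, 0, 1, 1], [1, 0, 0, 1]))  # 3 of 4 correct -> 0.75
```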
Mathematical Definition
Accuracy = (TP + TN) / (TP + TN + FP + FN)
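A worked instance of the formula, using assumed confusion-matrix counts:

```python
# Assumed confusion-matrix counts for a worked example.
tp, tn, fp, fn = 40, 45, 5, 10
accuracy = (tp + tn) / (tp + tn + fp + fn)
print(accuracy)  # (40 + 45) / 100 = 0.85
```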
Minimal Python Example
Python
accuracy = correct_predictions / total_predictions  # e.g. 90 correct out of 100 -> 0.9
Common Pitfalls
- Using accuracy on imbalanced datasets
- Ignoring error asymmetry
- Treating accuracy as sufficient evaluation
- Comparing accuracy across tasks with different class distributions
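The error-asymmetry pitfall can also be shown with assumed counts. On the same hypothetical 100 examples (20 positives, 80 negatives), two classifiers with identical accuracy can carry very different risk if false negatives are costlier than false positives:

```python
# Classifier A: TP=10, TN=80, FP=0, FN=10 (misses half the positives).
acc_a = (10 + 80) / 100
# Classifier B: TP=20, TN=70, FP=10, FN=0 (catches every positive).
acc_b = (20 + 70) / 100

print(acc_a == acc_b)  # True: accuracy alone cannot distinguish them
```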
Related Concepts
- Evaluation Metrics
- Precision
- Recall
- Confusion Matrix
- Class Imbalance