Short Definition
Cost-sensitive learning incorporates different error costs into model training or evaluation.
Definition
Cost-sensitive learning is an approach in which different types of prediction errors are assigned different costs. Instead of treating all errors equally, the model or evaluation process is adjusted to reflect real-world consequences.
Costs may be applied during training, threshold selection, or metric evaluation.
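One way costs can enter during training is through a cost-weighted loss. The sketch below is illustrative, not a prescribed implementation: it weights the two terms of a binary cross-entropy by per-error costs (the cost values shown are assumptions chosen for the example).

```python
import math

def weighted_log_loss(y_true, p_pred, cost_fp=1.0, cost_fn=5.0):
    """Binary cross-entropy with per-error-type weights.

    A confident mistake on a true positive is scaled by cost_fn;
    a confident mistake on a true negative is scaled by cost_fp.
    """
    total = 0.0
    for y, p in zip(y_true, p_pred):
        if y == 1:
            total += -cost_fn * math.log(p)       # missed positive term
        else:
            total += -cost_fp * math.log(1 - p)   # false alarm term
    return total / len(y_true)
```

With cost_fn > cost_fp, gradient-based training on this loss pushes the model to avoid missed positives more aggressively than false alarms.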
Why It Matters
In many applications, false positives and false negatives have unequal impact. For example, missing a fraud case may be far more costly than incorrectly flagging a legitimate transaction.
Cost-sensitive learning aligns model behavior with business, safety, or ethical priorities.
How It Works (Conceptually)
- Define costs for each error type
- Incorporate costs into loss functions or decision rules
- Optimize model behavior with respect to total expected cost
- Evaluate performance using cost-aware metrics
The goal is to minimize total expected cost, not raw error count.
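For a probabilistic classifier, the steps above reduce to a simple decision rule: predict positive whenever the expected cost of a false negative exceeds that of a false positive. This is a minimal sketch with illustrative cost values; the function names are not from any particular library.

```python
def cost_optimal_threshold(cost_fp, cost_fn):
    # Predicting positive risks (1 - p) * cost_fp; predicting negative
    # risks p * cost_fn. Setting the two equal gives the break-even p.
    return cost_fp / (cost_fp + cost_fn)

def decide(p, cost_fp=1.0, cost_fn=10.0):
    # Predict positive (1) once p clears the cost-derived threshold.
    return int(p >= cost_optimal_threshold(cost_fp, cost_fn))
```

Note that when false negatives are ten times costlier, the threshold drops well below 0.5, so the model flags cases even at modest predicted probability.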
Minimal Python Example
Python
expected_cost = fp * cost_fp + fn * cost_fn  # fp, fn: error counts; cost_fp, cost_fn: per-error costs
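The one-line formula above can be expanded into a small, self-contained sketch that counts errors at a given threshold and sweeps candidate thresholds to find the cheapest one. The helper names and cost values are assumptions for illustration; the sweep should run on held-out validation data, not the test set.

```python
def expected_cost(y_true, p_pred, threshold, cost_fp, cost_fn):
    # Count each error type at this threshold, then weight by its cost.
    fp = sum(1 for y, p in zip(y_true, p_pred) if y == 0 and p >= threshold)
    fn = sum(1 for y, p in zip(y_true, p_pred) if y == 1 and p < threshold)
    return fp * cost_fp + fn * cost_fn

def best_threshold(y_true, p_pred, cost_fp=1.0, cost_fn=10.0):
    # Sweep a coarse grid of thresholds and keep the cheapest.
    candidates = [i / 100 for i in range(1, 100)]
    return min(candidates,
               key=lambda t: expected_cost(y_true, p_pred, t, cost_fp, cost_fn))
```

For example, with y_true = [1, 1, 0, 0] and p_pred = [0.9, 0.4, 0.6, 0.1], the default 0.5 threshold incurs one false positive and one false negative, while a lower threshold trades the costly miss for a cheap false alarm.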
Common Pitfalls
- Assigning arbitrary or poorly justified costs
- Ignoring uncertainty in cost estimates
- Tuning costs or thresholds on test data instead of a held-out validation set
- Assuming cost-sensitive learning replaces proper evaluation
Related Concepts
- Decision Thresholding
- Evaluation Metrics
- Precision
- Recall
- Class Imbalance
- Model Confidence