Short Definition
Model robustness measures resistance to noise and perturbations.
Definition
Model robustness refers to a neural network’s ability to maintain stable performance when inputs are noisy, shifted, or slightly altered. Robust models do not change predictions drastically in response to small, irrelevant variations.
Robustness is critical in real-world environments where data is imperfect.
Why It Matters
Fragile models fail outside clean training conditions: sensor noise, compression artifacts, or distribution shift can flip their predictions even when clean-data accuracy is high.
How It Works (Conceptually)
- Train with diverse, augmented data
- Apply regularization (e.g., weight decay or dropout)
- Test under perturbations (noise, shifts, corruptions)
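The testing step above can be sketched as a stability probe: feed a model many noisy copies of each input and measure how often its prediction matches the clean one. This is a minimal illustration only; the linear `model`, `perturb`, and `robustness_rate` below are hypothetical names standing in for a real trained network and evaluation harness.

```python
import random

def model(x):
    # Toy linear classifier standing in for a trained network.
    w, b = [0.8, -0.5], 0.1  # hand-picked illustrative weights
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def perturb(x, rng, scale=0.05):
    # Add small uniform noise to every feature.
    return [xi + rng.uniform(-scale, scale) for xi in x]

def robustness_rate(inputs, n_trials=100, seed=0):
    # Fraction of noisy copies whose prediction matches the clean prediction.
    rng = random.Random(seed)
    stable = total = 0
    for x in inputs:
        clean = model(x)
        for _ in range(n_trials):
            stable += model(perturb(x, rng)) == clean
            total += 1
    return stable / total

inputs = [[1.0, 0.2], [0.1, 0.9], [-0.3, -0.3]]  # the last point sits near the decision boundary
print(f"prediction stability under noise: {robustness_rate(inputs):.2f}")
```

A rate near 1.0 means predictions rarely flip under noise; inputs close to the decision boundary pull the rate down, which is exactly the fragility this metric is meant to expose.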
Minimal Python Example
prediction_clean = model(x)
prediction_perturbed = model(x + noise)  # should match prediction_clean for a robust model
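Expanded into a self-contained sketch, with an assumed toy stand-in for the model: a two-class linear scorer whose weights are hand-picked so the decision margin is large, which is why the small noise below does not change the prediction.

```python
import numpy as np

# Hypothetical two-class linear "model" standing in for a trained network.
W = np.array([[ 1.0, -1.0],
              [ 0.5,  0.2],
              [-0.3,  0.4],
              [ 0.8, -0.6]])

def model(x):
    return int(np.argmax(x @ W))  # class with the highest score

x = np.array([0.5, -1.2, 0.3, 0.8])
rng = np.random.default_rng(0)
noise = rng.normal(scale=0.01, size=x.shape)  # small input perturbation

prediction_clean = model(x)
prediction_perturbed = model(x + noise)
print(prediction_clean == prediction_perturbed)  # → True: the noise does not flip the class
```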
Common Pitfalls
- Training only on clean data
- Ignoring edge cases
- Assuming accuracy implies robustness
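The last pitfall can be demonstrated directly: a model can be perfectly accurate on clean data yet degrade under noise. In this illustrative setup (all names and the decision rule are invented for the example), the model matches the labeling rule exactly, so clean accuracy is perfect, but its boundary has no margin, so points near it flip under small perturbations.

```python
import random

rng = random.Random(42)
data = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
labels = [1 if x[0] + x[1] > 0 else 0 for x in data]

def model(x):
    # Brittle decision rule: a razor-thin boundary with no margin,
    # so inputs near it flip class under tiny noise.
    return 1 if x[0] + x[1] > 0 else 0

def accuracy(noise_scale=0.0):
    correct = 0
    for x, y in zip(data, labels):
        x_noisy = [xi + rng.uniform(-noise_scale, noise_scale) for xi in x]
        correct += model(x_noisy) == y
    return correct / len(data)

print(f"clean accuracy: {accuracy():.2f}")      # perfect on clean inputs
print(f"noisy accuracy: {accuracy(0.2):.2f}")   # lower under perturbation
```

The gap between the two numbers is a simple robustness signal that clean accuracy alone never reveals.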
Related Concepts
- Generalization
- Data Augmentation
- Adversarial Examples