Recall at K (R@K)

Short Definition

Recall at K measures the fraction of relevant items that are captured within the top K predictions.

Definition

Recall at K (R@K) quantifies the fraction of all relevant items that appear in the top K ranked predictions. It evaluates how well a model retrieves relevant items within a limited result set.

R@K complements Precision at K by measuring coverage rather than purity.
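To make the complement concrete, here is a small sketch; the ranking and relevance labels are illustrative, not from a real dataset:

```python
# Illustrative ranked results: True marks a relevant item at that rank.
ranking = [True, False, True, False, False, True, False, False]
total_relevant = 3   # relevant items in the whole collection
k = 5

top_k = ranking[:k]
precision_at_k = sum(top_k) / k             # purity: share of top K that is relevant
recall_at_k = sum(top_k) / total_relevant   # coverage: share of all relevant items retrieved

print(precision_at_k)  # 0.4 (2 of the 5 top results are relevant)
print(recall_at_k)     # 2/3 (2 of the 3 relevant items were retrieved)
```

The same two hits in the top 5 yield a low P@5 but a fairly high R@5, which is exactly the purity-versus-coverage distinction.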

Why It Matters

High recall at K is essential when missing relevant items is costly, even if some irrelevant items are included. This is common in:

  • medical screening
  • legal document retrieval
  • anomaly detection
  • safety monitoring

R@K reflects how much relevant information is surfaced early.

How It Works (Conceptually)

  • Identify all relevant items in the dataset
  • Check which appear in the top K predictions
  • Compute the fraction retrieved

R@K measures coverage under ranking constraints.
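The steps above can be sketched as a small function; the item IDs and the function name are illustrative:

```python
def recall_at_k(ranked_items, relevant_items, k):
    """Fraction of all relevant items that appear in the top K predictions."""
    if not relevant_items:
        return 0.0  # no relevant items: recall is conventionally 0 (or undefined)
    top_k = set(ranked_items[:k])
    retrieved = top_k & set(relevant_items)  # relevant items found in the top K
    return len(retrieved) / len(relevant_items)

# Example: 4 relevant documents exist; 2 of them surface in the top 3.
ranked = ["d1", "d7", "d3", "d9", "d4"]
relevant = {"d3", "d4", "d7", "d8"}
print(recall_at_k(ranked, relevant, k=3))  # 0.5
```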

Mathematical Definition

R@K = (Number of relevant items in top K) / (Total number of relevant items)

Minimal Python Example

Python
relevant_in_top_k = 3   # relevant items found in the top K predictions
total_relevant = 5      # relevant items in the whole dataset
recall_at_k = relevant_in_top_k / total_relevant  # 3 / 5 = 0.6


Common Pitfalls

  • Using R@K without precision context
  • Comparing R@K across datasets with different relevance counts
  • Choosing K values unrelated to real usage
  • Ignoring ranking ties and ordering ambiguities
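The cross-dataset pitfall is easy to demonstrate with a sketch (the numbers are illustrative): two systems each place 5 relevant items in their top 10, yet their R@10 values differ purely because the datasets contain different numbers of relevant items.

```python
k = 10
relevant_in_top_k = 5  # identical top-K hit count for both systems

# Dataset A: only 5 relevant items exist in total.
recall_a = relevant_in_top_k / 5    # perfect coverage
# Dataset B: 50 relevant items exist in total.
recall_b = relevant_in_top_k / 50   # low coverage despite identical top-K hits

print(recall_a)  # 1.0
print(recall_b)  # 0.1
```

Comparing these two numbers says nothing about ranking quality, only about how many relevant items each dataset happens to contain.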

Related Concepts

  • Precision at K (P@K)
  • Recall
  • Precision–Recall Curve
  • Ranking Metrics
  • Evaluation Metrics