Precision at K (P@K)

Short Definition

Precision at K measures what fraction of the top K ranked predictions are relevant.

Definition

Precision at K (P@K) is an evaluation metric commonly used in ranking, retrieval, and recommendation systems. It computes the fraction of relevant items among the top K ranked predictions produced by a model.

P@K focuses on performance at the highest-ranked outputs, rather than overall classification accuracy.

Why It Matters

In many applications, users only interact with the top few results. Precision at K reflects real-world usefulness in scenarios such as:

  • search engines
  • recommender systems
  • information retrieval
  • alert prioritization

A model with high overall accuracy but low P@K may still perform poorly in practice.

How It Works (Conceptually)

  • The model ranks items by predicted score
  • The top K items are selected
  • The number of relevant items among them is counted
  • Precision is computed for that subset only

P@K evaluates ranking quality at the top.
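
These steps can be sketched directly in Python; the scores and binary relevance labels below are hypothetical:

```python
scores = [0.2, 0.9, 0.4, 0.8]   # model's predicted scores (hypothetical)
labels = [0,   1,   0,   1]     # 1 = relevant, 0 = not relevant
k = 2

# 1. Rank items by predicted score, highest first
order = sorted(range(len(scores)), key=scores.__getitem__, reverse=True)
# 2. Select the top K items
top_k = order[:k]
# 3. Count the relevant items among them
relevant = sum(labels[i] for i in top_k)
# 4. Compute precision for that subset only
print(relevant / k)
```

Here the two highest-scored items (indices 1 and 3) are both relevant, so P@2 = 2/2 = 1.0.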

Mathematical Definition

P@K = (Number of relevant items in top K) / K

For example, if 3 of the top 5 results are relevant, P@5 = 3/5 = 0.6.

Minimal Python Example

Python
# relevant_in_top_k: number of relevant items among the top k ranked results
precision_at_k = relevant_in_top_k / k


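The one-liner above can be wrapped into a small, self-contained helper; the function name and the binary 0/1 labels are illustrative assumptions, not a standard library API:

```python
def precision_at_k(scores, labels, k):
    """Compute P@K from predicted scores and binary relevance labels."""
    # Rank items by predicted score, highest first
    ranked = sorted(zip(scores, labels), key=lambda pair: pair[0], reverse=True)
    # Count relevant items among the top k
    relevant_in_top_k = sum(label for _, label in ranked[:k])
    return relevant_in_top_k / k

# Example: 3 of the top 5 ranked items are relevant -> P@5 = 0.6
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
labels = [1,   0,   1,   1,   0,   1]
print(precision_at_k(scores, labels, 5))  # 0.6
```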
Common Pitfalls

  • Comparing P@K across datasets with different relevance definitions
  • Ignoring recall entirely: P@K says nothing about relevant items ranked below K
  • Using large K values that dilute the focus on top-ranked results
  • Treating P@K as a global performance metric rather than a top-of-ranking one
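
The dilution pitfall can be made concrete with a small sketch (hypothetical numbers): a ranker that places all 3 relevant items first looks perfect at K = 3, yet P@100 cannot exceed 3/100 no matter how good the ranking is.

```python
# Hypothetical ranking: all 3 relevant items sit at the very top of 100 results
labels = [1, 1, 1] + [0] * 97

p_at_3 = sum(labels[:3]) / 3        # perfect precision at the top
p_at_100 = sum(labels[:100]) / 100  # capped at 0.03 by the number of relevant items
```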

Related Concepts

  • Recall at K (R@K)
  • Precision–Recall Curve
  • Ranking Metrics
  • Evaluation Metrics
  • Decision Thresholding