Metric learning is a training approach in which a model learns a similarity space rather than only assigning class labels. Instead of just deciding which category an image belongs to, the model learns how close or far apart different examples should sit in an embedding space.
Why It Matters
Metric learning matters because many AI tasks are really retrieval problems. A user may want the most similar product photo, the closest matching medical case, the most visually related wildlife sighting, or the nearest face embedding. In those settings, what matters is not only whether the model can classify an item correctly, but whether it places similar items near one another in a way that supports search and ranking.
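To make the retrieval framing concrete, here is a minimal sketch of ranking catalog items by similarity to a query embedding. The item names, the three-dimensional vectors, and the use of cosine similarity are all illustrative assumptions, not part of any particular system.

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two embedding vectors:
    # 1.0 means identical direction, 0.0 means orthogonal.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query, catalog):
    # Rank catalog items by similarity to the query, most similar first.
    return sorted(catalog, key=lambda item: cosine_similarity(query, item[1]),
                  reverse=True)

# Hypothetical product embeddings (in practice these come from a trained model).
catalog = [
    ("red_sneaker", [0.9, 0.1, 0.0]),
    ("blue_boot",   [0.1, 0.9, 0.2]),
    ("red_loafer",  [0.8, 0.2, 0.1]),
]
query = [0.85, 0.15, 0.05]  # embedding of the user's query photo

print([name for name, _ in retrieve(query, catalog)])
```

A model trained with a metric-learning objective is what makes such a ranking meaningful: the search itself is just a nearest-neighbor lookup in the embedding space.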
How It Works
Common approaches train on pairs or triplets of examples with a contrastive or triplet loss. The model may see an anchor item, a positive match, and a negative example, then learn to pull the positive closer and push the negative farther away. The exact loss varies, but the principle stays the same: similarity becomes something the model is explicitly trained to represent.
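The anchor/positive/negative idea can be sketched as a standard triplet loss. The distance function, margin value, and toy two-dimensional embeddings below are illustrative assumptions; real systems compute these distances over learned, high-dimensional embeddings.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Hinge-style triplet loss: the positive must be at least `margin`
    # closer to the anchor than the negative; zero loss once satisfied.
    return max(0.0, euclidean(anchor, positive)
                    - euclidean(anchor, negative) + margin)

# Toy 2-D embeddings (hypothetical, for illustration only).
anchor   = [0.0, 0.0]
positive = [0.1, 0.0]  # same identity/class as the anchor
negative = [2.0, 0.0]  # different identity/class

print(triplet_loss(anchor, positive, negative))  # 0.0: margin already satisfied
print(triplet_loss(anchor, positive, [0.5, 0.0]))  # 0.6: negative too close, loss is positive
```

During training, gradients from the positive-loss case push the network to move the negative's embedding away from the anchor and pull the positive's toward it.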
Where You See It
Metric learning shows up in face recognition, product search, re-identification, recommendation, and visual search. It also overlaps with transfer learning and self-supervised learning, since strong pretrained models are often later adapted with similarity-focused objectives.
Related Yenra articles: Content-Based Image Retrieval, Animal Tracking and Conservation, and Facial Recognition Systems.
Related concepts: Embedding, Visual Search, Transfer Learning, Self-Supervised Learning, and Approximate Nearest Neighbor Search.