Model drift is the decline in a model's performance over time because real-world conditions have changed. The input data may shift, user behavior may evolve, or the relationship between inputs and desired outcomes may no longer match what the model learned during training. Drift is one of the main reasons deployed AI systems need ongoing monitoring rather than one-time launch approval.
What Drift Looks Like
Sometimes drift is obvious, such as a recommendation system getting worse after a major behavior shift. Other times it is subtle. Accuracy may decline only for certain users, certain regions, or certain document types. A system may remain fluent and confident while becoming less reliable underneath.
Drift can come from many sources: new policies, different customer behavior, updated software environments, changing markets, adversarial adaptation, or changes in what counts as a correct outcome. That is why drift is not only a data science problem; it is a business reality.
How Teams Respond
Teams respond to drift through monitoring, re-evaluation, refreshed data, retraining, threshold changes, and sometimes human fallback. The key is to detect changes before they quietly damage decisions or user trust.
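One common way monitoring detects input drift is to compare the distribution of a feature in live traffic against the distribution seen at training time. The sketch below implements the Population Stability Index (PSI), one widely used drift metric, in plain Python; the bucket count and the 0.2 "investigate" threshold are illustrative conventions, not fixed rules, and real systems typically use a monitoring library rather than hand-rolled code.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.

    Buckets are derived from the expected (training-time) sample;
    live values outside that range are clamped into the edge buckets.
    """
    lo, hi = min(expected), max(expected)
    width = (hi - lo) / bins or 1.0

    def fractions(sample):
        counts = [0] * bins
        for x in sample:
            idx = min(max(int((x - lo) / width), 0), bins - 1)
            counts[idx] += 1
        n = len(sample)
        # Small floor avoids log(0) when a bucket is empty.
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Identical distributions score near zero; a shifted one scores high.
baseline = [i / 100 for i in range(1000)]       # uniform on [0, 10)
shifted = [i / 100 + 3 for i in range(1000)]    # same shape, moved right
print(psi(baseline, baseline) < 0.1)   # stable -> True
print(psi(baseline, shifted) > 0.2)    # common alert threshold -> True
```

A check like this catches shifts in the inputs even when no fresh ground-truth labels are available yet, which is why distribution monitoring usually runs continuously while accuracy re-evaluation happens less often.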
Model drift is a reminder that a model is not a finished truth machine. It is a living system component operating in a moving environment.
Related concepts: Overfitting, Machine Learning, Bias, Responsible AI, and Guardrails.