Algorithmic bias is a systematic skew in an AI system's behavior that leads to unfair, distorted, or otherwise undesirable outcomes. The bias may come from data, labels, feature choices, optimization goals, deployment conditions, or feedback loops after launch. What matters is that the system's errors are not random: they fall unevenly on certain cases, groups, or decisions.
Where Bias Comes From
Bias can enter long before the model is trained. Historical data may reflect unequal treatment. Labels may encode subjective judgments. Important context may be missing. A model may also inherit bias by latching onto proxies that correlate with sensitive attributes, even when those attributes themselves were removed from the data.
This is why bias is not something that can always be fixed by deleting one column from a dataset. The causes are often structural and interconnected.
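The proxy problem described above can be made concrete with a small sketch. This is a minimal, synthetic illustration, not a real dataset or a real model: the "group" column is dropped before training, but a correlated "neighborhood" feature remains, and a simple majority-label rule learned from that feature reproduces the historical disparity anyway.

```python
import random
from collections import Counter

random.seed(0)

# Synthetic, hypothetical data: the sensitive attribute ("group") is dropped
# before training, but a proxy feature ("neighborhood") correlates with it,
# and historical approval labels are skewed by group.
rows = []
for _ in range(10_000):
    group = random.choice(["A", "B"])
    # Proxy: neighborhood 1 is far more common for group B.
    neighborhood = 1 if random.random() < (0.8 if group == "B" else 0.2) else 0
    # Historical label: group B was approved less often.
    approved = random.random() < (0.3 if group == "B" else 0.7)
    rows.append((group, neighborhood, approved))

# Stand-in "model": a single-feature rule learned with the sensitive column
# deleted. It predicts the majority historical label for each neighborhood.
by_hood = {0: Counter(), 1: Counter()}
for _, hood, approved in rows:
    by_hood[hood][approved] += 1
predict = {hood: counts.most_common(1)[0][0] for hood, counts in by_hood.items()}

# Selection rate per (deleted) group: the disparity survives via the proxy.
rates = {}
for g in ["A", "B"]:
    preds = [predict[hood] for grp, hood, _ in rows if grp == g]
    rates[g] = sum(preds) / len(preds)
    print(f"group {g}: predicted approval rate = {rates[g]:.2f}")
```

Even though the sensitive column never reaches the model, the learned rule approves group A far more often than group B, which is exactly why deleting one column is rarely a fix.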
Why Bias Matters
Algorithmic bias matters most in settings where AI influences opportunity, safety, or dignity. Hiring systems, lending systems, health predictions, moderation decisions, surveillance, and educational tools can all produce harmful outcomes if bias goes unmeasured. Even low-friction consumer systems can create unfair or exclusionary experiences when they work badly for particular groups.
Bias also matters because it erodes trust. People are less likely to rely on AI systems they experience as arbitrary or unfair.
How Teams Respond
Addressing algorithmic bias usually requires measurement, better data practices, clearer goals, subgroup evaluation, human oversight, and methods such as Bias Mitigation. It is closely tied to AI Fairness and Model Fairness, but the emphasis here is on the source of skew rather than the broader fairness framework.
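Subgroup evaluation, one of the practices mentioned above, can be sketched in a few lines. The records here are illustrative placeholders: the point is that a single aggregate metric can look acceptable while errors concentrate in one group, so the metric is broken out per group.

```python
# Illustrative (group, true_label, predicted_label) records, not real data.
records = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 0), ("B", 0, 0), ("B", 1, 0), ("B", 0, 1), ("B", 1, 1),
]

def accuracy(triples):
    """Fraction of records where the prediction matches the true label."""
    return sum(y == p for _, y, p in triples) / len(triples)

overall = accuracy(records)
per_group = {
    g: accuracy([r for r in records if r[0] == g])
    for g in {r[0] for r in records}
}
print(f"overall accuracy: {overall:.2f}")      # the aggregate hides the skew
for g, acc in sorted(per_group.items()):
    print(f"group {g}: accuracy = {acc:.2f}")  # the per-group view exposes it
```

The same disaggregation pattern applies to any metric (false positive rate, calibration, and so on), and libraries exist that automate it; the plain-Python version above is just the underlying idea.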
For readers learning AI, this term matters because it explains why harmful outcomes can emerge even when no one explicitly programmed the model to discriminate.
Related concepts: Bias, Bias Mitigation, AI Fairness, Model Fairness, and Responsible AI.