Bias in Machine Learning models can often lead to unexpected outcomes. In this brief video we will look at the different ways we might end up building biased ML models, with particular emphasis on societal biases such as gender, race, and age. Why do we care about societal bias in ML models? Consider an ML model…
Bias in Machine Learning: How to Measure Fairness Based on a Confusion Matrix?
Machine Learning models often give us unexpected and biased outcomes if the underlying data is biased. Very often, our process of collecting data is incomplete or flawed, leaving the data unrepresentative of the real world. In this article, we see why we need to measure fairness for ML models and how we…
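As a taste of what the article covers, here is a minimal sketch of computing group-wise fairness metrics from confusion-matrix counts. The group names and all counts below are hypothetical illustrations, not data from the article:

```python
# A minimal sketch: compare true-positive and false-positive rates
# across demographic groups (the "equalized odds" view of fairness).
# All group names and counts here are hypothetical.

def rates(tp, fp, fn, tn):
    """TPR and FPR from the four confusion-matrix cells."""
    tpr = tp / (tp + fn)  # fraction of actual positives correctly flagged
    fpr = fp / (fp + tn)  # fraction of actual negatives wrongly flagged
    return tpr, fpr

# Hypothetical confusion-matrix counts (tp, fp, fn, tn) per group
groups = {
    "group_1": (45, 10, 5, 40),
    "group_2": (30, 25, 20, 25),
}

for name, counts in groups.items():
    tpr, fpr = rates(*counts)
    print(f"{name}: TPR={tpr:.2f}, FPR={fpr:.2f}")
# Equalized odds asks that TPR and FPR be (approximately) equal across
# groups; large gaps like the ones above signal a fairness problem.
```

Here group_1 gets a much higher TPR (0.90 vs 0.60) and lower FPR (0.20 vs 0.50) than group_2, so a model with these confusion matrices would fail an equalized-odds check.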
What is Simpson's Paradox?
Simpson's Paradox occurs when trends in aggregates are reversed upon examining trends in subgroups. Data often has biases that might lead to unexpected trends, but digging deeper, deciphering these biases, and looking at appropriate subgroups leads to drawing the right insights. Why does Simpson's paradox occur? Arithmetically, when (a1/A1) < (a2/A2)…
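The reversal is easy to verify numerically. Below is a sketch using the classic kidney-stone treatment data (Charig et al., 1986), a standard illustration of the paradox, not an example from the linked article: treatment A beats treatment B within each subgroup, yet loses in the aggregate because A was assigned mostly to the harder large-stone cases.

```python
# Simpson's Paradox demonstrated with the classic kidney-stone data:
# (successes, total) for each treatment within each subgroup.
data = {
    "small stones": {"A": (81, 87),   "B": (234, 270)},
    "large stones": {"A": (192, 263), "B": (55, 80)},
}

def rate(successes, total):
    return successes / total

totals = {"A": [0, 0], "B": [0, 0]}
for subgroup, treatments in data.items():
    for t, (s, n) in treatments.items():
        totals[t][0] += s
        totals[t][1] += n
    # Within every subgroup, A has the higher success rate
    assert rate(*treatments["A"]) > rate(*treatments["B"])

# Yet in the aggregate, the trend reverses and B looks better
assert rate(*totals["B"]) > rate(*totals["A"])
print(f"A overall: {rate(*totals['A']):.1%}, B overall: {rate(*totals['B']):.1%}")
# → A overall: 78.0%, B overall: 82.6%
```

The reversal happens because the subgroup sizes are lopsided: A's aggregate rate is dominated by the difficult large-stone cases, while B's is dominated by the easy small-stone cases.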