The Downward Spiral of Bias in Machine Learning

What motivated me to become a data scientist was to help improve decision-making with data. Machine learning models are one of the many tools available to do so. A naive description of supervised learning is that it takes data that represents reality and, with it, trains a model that makes predictions. In turn, if these predictions are deemed good enough, they can help improve decision-making. Simple! Right?

However, this process rests on several faulty assumptions:

⚖️ You can trust the data generation process. If it's biased, then the data can't be trusted either. And biased data can only produce a biased model.

🤖 The choices you make as an ML practitioner do not impact model bias. In fact, every choice you make, from feature selection to model selection and tuning, can potentially amplify bias or create bias where there was none (see the proxy-feature sketch after this list).

📈 If the predictive performance is high, then the predictions can be trusted. Nope. What if the model consistently misclassifies in specific cases that disproportionately impact a group? What if the training data doesn't match the distribution of real-world conditions? What if the ground truth itself is biased? What if, even when the predictions are good, they are misunderstood or misused by users in such a way that they become biased decisions? (The disaggregated-evaluation sketch after this list shows how a strong aggregate score can hide exactly this kind of failure.)

🔮 You want to predict based on the "reality". But don't you want to improve decision-making? If the past is biased, then why not predict for a better future?
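To make the second point concrete, here is a minimal synthetic sketch of how a feature-selection choice can preserve bias even after the sensitive attribute is dropped. Everything here is hypothetical: the names (`zip_proxy`, `skill`) and the data-generating process are invented purely to illustrate "fairness through unawareness" failing.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Sensitive attribute (e.g., a protected group), deliberately NOT used as a feature.
group = rng.integers(0, 2, size=n)

# A "neutral-looking" proxy feature that is highly correlated with group
# (think of a zip code standing in for demographics). Hypothetical name.
zip_proxy = group + rng.normal(0, 0.3, size=n)

# A genuinely informative feature, independent of group. Hypothetical name.
skill = rng.normal(0, 1, size=n)

# Historically biased labels: group 1 was favored beyond what skill justifies.
y = (skill + 1.5 * group + rng.normal(0, 0.5, size=n) > 0.75).astype(int)

# "Fairness through unawareness": drop `group`, but keep the proxy.
X = np.column_stack([skill, zip_proxy])
pred = LogisticRegression().fit(X, y).predict(X)

# The disparity survives, because the model reconstructs group from the proxy.
for g in (0, 1):
    print(f"group {g}: selection rate = {pred[group == g].mean():.2f}")
```

Dropping the sensitive attribute is not enough: the model recovers it from the correlated proxy, so the historical disparity reappears in the predicted selection rates.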
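And to make the third point measurable, here is a small sketch of disaggregated evaluation with scikit-learn: compute metrics per group instead of a single aggregate score. The helper name `group_report` is mine, not a library function.

```python
import numpy as np
from sklearn.metrics import accuracy_score, confusion_matrix

def group_report(y_true, y_pred, groups):
    """Print overall accuracy plus per-group accuracy and false negative rate.

    A model can score well in aggregate while systematically failing one group.
    """
    print(f"overall accuracy: {accuracy_score(y_true, y_pred):.2f}")
    for g in np.unique(groups):
        mask = groups == g
        # Pin the labels so the confusion matrix is always 2x2,
        # even if a group happens to contain only one class.
        tn, fp, fn, tp = confusion_matrix(
            y_true[mask], y_pred[mask], labels=[0, 1]
        ).ravel()
        fnr = fn / (fn + tp) if (fn + tp) else float("nan")
        acc = accuracy_score(y_true[mask], y_pred[mask])
        print(f"group {g}: accuracy={acc:.2f}, false negative rate={fnr:.2f}")
```

A high overall accuracy printed on the first line can coexist with a badly skewed false negative rate for one group on the lines below, which is exactly the failure mode an aggregate metric hides.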

[Image credit: Serg Masís; featured image: Mstyslav Chernov @ Wikimedia]

When left to its own devices, a model can become a bias amplification machine. In aggregate, AI systems can then replicate human biases at an unprecedented scale. Fortunately, there are methods to mitigate model bias at every step where ML practitioners are involved: pre-processing methods for the data, in-processing methods for the model, and post-processing methods for the predictions.
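As one concrete example from the pre-processing family, here is a minimal sketch of reweighing in the spirit of Kamiran and Calders: each (group, label) cell gets a sample weight that makes the labels look statistically independent of group membership. This is a sketch under simplifying assumptions (NumPy arrays for labels and group membership), not a drop-in implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def reweighing_weights(y, groups):
    """Kamiran & Calders-style reweighing.

    Weight each (group, label) cell by P(group) * P(label) / P(group, label),
    so that in the reweighted data the label is independent of the group.
    """
    w = np.empty(len(y), dtype=float)
    for g in np.unique(groups):
        for label in np.unique(y):
            cell = (groups == g) & (y == label)
            p_expected = (groups == g).mean() * (y == label).mean()
            p_observed = cell.mean()
            # Over-represented cells get down-weighted, under-represented
            # cells get up-weighted; guard against empty cells.
            w[cell] = p_expected / p_observed if p_observed else 0.0
    return w

# Usage: any estimator that accepts sample weights can consume these.
# weights = reweighing_weights(y_train, group_train)
# model = LogisticRegression().fit(X_train, y_train, sample_weight=weights)
```

In-processing alternatives instead impose fairness constraints during training, and post-processing alternatives adjust decision thresholds per group after the fact; libraries such as Fairlearn and AIF360 offer implementations across all three families.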
