Fairness is often about consistent rules

Humans have a notion of fairness hardwired from a young age. However, a machine learning model's "rules" are not guaranteed to be consistent, because we fit models to optimize predictive performance with no regard for consistency.

💸 For instance, training data might suggest that bank customers are more likely to default on their loans at a younger age. However, because of sparse training data and outliers at some ages, the model learns a relationship where the probability of defaulting doesn't consistently decrease with age. We call this non-monotonic.

📈 In mathematics, a monotonic function is one that only ever increases or only ever decreases. Some ML model classes (such as XGBoost) let you define monotonic constraints. In our #ml example, such a constraint would force the model to learn a consistently decreasing relationship between age and loan defaults.
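Here's a minimal sketch of what that looks like with XGBoost's scikit-learn API. It's not taken from a real bank's setup: the synthetic loan data, the feature layout (age, income), and the -1 (non-increasing) constraint direction on age are all illustrative assumptions.

```python
# Sketch: constrain an XGBoost classifier so the predicted default
# probability can only decrease with age. Data and features are synthetic.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
n = 5_000

age = rng.integers(18, 70, size=n)
income = rng.normal(50_000, 15_000, size=n)

# Hypothetical ground truth: default risk drops with age, plus noise.
p_default = 1 / (1 + np.exp(0.08 * (age - 35)))
y = (rng.random(n) < p_default).astype(int)

X = np.column_stack([age, income])

# One constraint per column: -1 = non-increasing in age, 0 = income unconstrained.
model = XGBClassifier(
    n_estimators=200,
    max_depth=3,
    monotone_constraints=(-1, 0),
)
model.fit(X, y)

# With the constraint, predictions at a fixed income never increase with age.
grid = np.column_stack([np.arange(18, 70), np.full(52, 50_000.0)])
probs = model.predict_proba(grid)[:, 1]
assert np.all(np.diff(probs) <= 1e-9)
```

Without the `monotone_constraints` argument, the same sweep over age can wiggle up and down wherever the data is thin, which is exactly the inconsistency described above.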


⚖️ So why is this fairer? Because similar people should receive equal treatment. What "similar" means depends on domain knowledge about the problem. For instance, we might know that law students with higher grades are more likely to pass the bar exam, so perhaps it's fair to ensure that a law school admissions model consistently favors students with better grades. Models with such constraints also tend to be easier to explain, and they generalize better.
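To make the explainability point concrete, here's a hedged sketch of the admissions scenario. The features (GPA, LSAT), the synthetic data, and the +1 constraint on grades are assumptions for illustration; the point is the simple guarantee we can state afterwards.

```python
# Sketch: with a +1 monotone constraint on GPA, we can honestly tell any
# applicant that a higher GPA never lowers the model's admission score.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(1)
n = 2_000
gpa = rng.uniform(2.0, 4.0, size=n)
lsat = rng.uniform(140, 180, size=n)

# Hypothetical ground truth: admission odds rise with both GPA and LSAT.
logit = 3 * (gpa - 3.0) + 0.1 * (lsat - 160)
admitted = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = XGBClassifier(
    n_estimators=100,
    max_depth=3,
    monotone_constraints=(1, 0),  # +1: non-decreasing in GPA; LSAT unconstrained
)
model.fit(np.column_stack([gpa, lsat]), admitted)

# Explanation check: sweep GPA while holding LSAT fixed. The constrained
# score never drops, so "better grades never hurt you" holds by construction.
sweep = np.column_stack([np.linspace(2.0, 4.0, 50), np.full(50, 160.0)])
scores = model.predict_proba(sweep)[:, 1]
assert np.all(np.diff(scores) >= -1e-9)
```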

🔎 However, if we take a harder look, we'll notice deeper bias. For instance, maybe underprivileged students consistently get lower grades. We can't assume that optimizing predictive performance, or even equalizing outcomes, will ensure fairness. Models have the power to change the future for the better, so why settle for predicting it based on a lousy past? Constraints serve as guardrails precisely so they can be leveraged in these cases.
