Book Review: "Causal Inference: The Mixtape"

🎄 One of the joys of the holiday season is to cozy up with a good book and read it pretty much non-stop! One year ago, that book was "The Book of Why". This year, it was "Causal Inference: The Mixtape". Do you notice a theme?

🤖 I've made it no secret that I love causal inference. The reason is that machine learning, as practiced today, is mostly brute-force and correlation-based. Under ideal circumstances it works, yet it raises concerns about resource consumption, interpretability, and generalization. I believe Bayesian and causal approaches hold the answers to these problems, which is why I think every data scientist should put causal inference in their wheelhouse, at the very least to gain another perspective!

📖 "Causal Inference: The Mixtape" by Scott Cunningham is an excellent overview of this massive research topic and a delightful read. I highly recommend it.

Featured image: Newtons cradle animation book, retrieved from Wikimedia Commons

๐Ÿ‘ The Good: It covers a lot of ground and explains it clearly with real-world examples. It starts with a comprehensive review of probability and regression and then devotes each subsequent chapter to essential topics in causal literature, from Direct Acyclic Graphs to Instrumental Variables. It starts each chapter with amusing lyrics from rap and hip-hop songs. And in addition to tons of mathematical formulas, tables and plots, it comes with ample code in R and Stata (economists' proprietary language of choice).

🤔 The Bad: No Python code. I think the author underestimates how much need there is for causal inference in industry, where Python is king among data practitioners. However, this shouldn't discourage Pythonista data scientists, since enjoying the book doesn't depend on the code. Also, since econometricians and research social scientists largely dominate the causal inference field, it's no wonder the examples are drawn from those areas. If you want to apply the methodologies discussed in the book in industry, I suggest complementing it with other reading material adaptable to your use cases.

🙈 The Ugly: Not a big deal for experienced folks, but the provided code is neither documented nor explained, making it hard to follow for any newcomer to R or Stata.

What frightens me the most

🎃 Spooky Halloween, folks! Seriously though, you know what haunts me: climate change. And today marks the beginning of the 26th UN Climate Change Conference (COP26), meaning it's been six long years since the Paris one. You know, the famous conference where countries made a covenant to take our impending doom seriously, yet I'm afraid not much has changed.

🌱 A lot hinges on mitigating the effects of climate change, such as the lives and livelihoods of millions of people who live in low-lying river deltas and island nations, or in desertifying or water-stressed regions. As a data scientist in agriculture, I worry about how this will lead to crop failures and even more aggressive plant diseases, causing more people to die from hunger and malnutrition. Business as usual will also cause thousands of ecosystems to collapse, from swamps to corals to tundra to rainforests, and with them, the extinction of up to one-third of the planet's species. The value of this damage is incalculable and likely irreversible. To me, the social and environmental dystopia that awaits should be enough to spur world leaders into action.


Featured image by: Shredthegnar365

💸 However, the argument perhaps more convincing to those holding the purse strings is that the global economy will suffer more significant losses if we do nothing. The cost of reducing global warming is somewhere between $50 trillion and $131 trillion, according to Morgan Stanley and the International Renewable Energy Agency, respectively. To put these figures in perspective, the global economy is currently about $85 trillion. By 2050, the global economy could lose anywhere between $25 trillion, for a 2°C increase, and $42 trillion, for a 3.2°C increase. And since those losses are annual, mitigation would pay for itself within a few years. Not to mention, these are conservative figures from Swiss Re (one of the world's largest reinsurance providers); researchers from Imperial College London and the London School of Economics have estimates twice as large!
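To make the "pays for itself" arithmetic concrete, here's a back-of-the-envelope sketch using only the figures cited above; a rough illustration, not a forecast:

```python
# Hedged back-of-envelope: one-time mitigation cost vs. avoided annual losses,
# using the figures cited in this post (Morgan Stanley / IRENA / Swiss Re).
mitigation_cost_low, mitigation_cost_high = 50e12, 131e12  # one-time, USD
annual_loss_low, annual_loss_high = 25e12, 42e12           # per year by 2050

# Years for avoided losses to repay the mitigation bill
best_case = mitigation_cost_low / annual_loss_high    # cheapest plan, worst losses
worst_case = mitigation_cost_high / annual_loss_low   # priciest plan, mildest losses

print(round(best_case, 1), round(worst_case, 1))  # → 1.2 5.2
```

Even in the least favorable pairing of estimates, the bill is repaid in roughly five years of avoided losses.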

๐Ÿญ Anyway, let's hope COP26 leads to meaningful action so that the happy children collecting candies tonight don't have a bleak future contending with the dire consequences of inaction.

Fairness is often about consistent rules

Humans have this notion of fairness hardwired from a young age. However, a Machine Learning model's "rules" are not guaranteed to be consistent because we fit models to optimize predictive performance regardless of consistency.

💸 For instance, training data might suggest that bank customers are more likely to default on their loans at a younger age. However, because of sparser training data and outliers at some ages, the model learns that the probability of defaulting on loans doesn't consistently decrease with age. We call this relationship non-monotonic.

📈 In mathematics, a monotonic function is one that only ever increases or only ever decreases. And some ML model classes (such as XGBoost) let you define monotonic constraints. In this #ml example, such a constraint would force the model to learn a consistent relationship between loan defaults and age.

Featured image by: Alexander Supertramp

โš–๏ธ So why is this fairer? Because similar people should receive equal treatment. The definition of what similar means depends on domain knowledge about a problem. For instance, we might know that higher grades for law students lead to passing the bar exam. So perhaps it's fair to ensure that a law school admissions model consistently favors students with better grades. Another desirable property is that it's easier to explain outcomes for a model with such constraints, and it generalizes better.

🔎 However, if we take a hard look, we'll notice deeper bias. For instance, maybe underprivileged students consistently get lower grades. We can't assume that optimizing predictive performance, or even equalizing outcomes, will ensure fairness. Models have the power to change the future for the better, so why settle for predicting it based on a lousy past? Constraints serve as guardrails precisely so they can be leveraged in these cases.

Trust is mission-critical

Artificial intelligence and data science must instill trust because good decision-making depends on it, which, in turn, drives better outcomes, reputation, and ultimately adoption.

🤖 So it's a core message of my book that if we are to replace or extend software systems with A.I. systems, we have to guarantee improvements in trustworthiness. And producing trustworthy insights and models is a constant struggle in data science.

Featured image by: MIT Sloan Mgmt Review

โš–๏ธ Interpretable Machine Learning (a.k.a Explainable AI) provides tools to address trust/ethical concerns organized in three levels: ๐—™๐—ฎ๐—ถ๐—ฟ๐—ป๐—ฒ๐˜€๐˜€, ๐—”๐—ฐ๐—ฐ๐—ผ๐˜‚๐—ป๐˜๐—ฎ๐—ฏ๐—ถ๐—น๐—ถ๐˜๐˜†, ๐—ฎ๐—ป๐—ฑ ๐—ง๐—ฟ๐—ฎ๐—ป๐˜€๐—ฝ๐—ฎ๐—ฟ๐—ฒ๐—ป๐—ฐ๐˜† โ€” collectively known as F.A.T. I like to see these in a pyramid structure because each level depends on the one beneath it. And there are interpretability tools to diagnose problems on each level as well as to fix each problem.

📖 It's an extensive area of active research with hundreds of methods. My book is an introduction with several forays into advanced topics.

Why does Interpretable Machine Learning matter?

It's hard to tell from all the hype, but Artificial Intelligence is barely in its infancy 👶. But I'm hopeful that we can bring it into maturity.

☔ One of the most significant issues Machine Learning projects face is that models are ill-equipped to weather changing, adversarial, and unexpected data conditions, much like planes facing storms and turbulence. But aircraft are robustly built and can overcome severe conditions, both automatically and guided by experienced pilots. Models, on the other hand, must generalize well, but this proves to be an elusive property.

๐ŸŽ›๏ธ Ever since I wrote my book, I've been asked many times why I'm passionate about ๐—œ๐—ป๐˜๐—ฒ๐—ฟ๐—ฝ๐—ฟ๐—ฒ๐˜๐—ฎ๐—ฏ๐—น๐—ฒ ๐— ๐—ฎ๐—ฐ๐—ต๐—ถ๐—ป๐—ฒ ๐—Ÿ๐—ฒ๐—ฎ๐—ฟ๐—ป๐—ถ๐—ป๐—ด. I've responded that it's the instrument panel to pilot machine learning even in the worst conditions, from unfair to uncertain outcomes. So why wouldn't I prefer to have a complete instrument panel available? But, on the other hand, using predictive performance alone is like piloting with a single instrument!

Featured image by: WikiImages from Pixabay

โœˆ๏ธ Currently, flying is the safest mode of transportation. But for A.I., there is still a long way to go. For starters, we will need better no-code AutoML with human-in-the-loop and Interpretable M.L. built-in โ€” like cockpits for Machine Learning engineers. And methods that automatically audit and test models, much like commercial planes, undergo strict maintenance regimens. And given what I've seen currently being built by Auto ML, MLOps, and XAI startups and researchers, it seems like it's heading in this direction, so I have reasons to be hopeful that for most commercial use cases, A.I. someday will be the ๐˜€๐—ฎ๐—ณ๐—ฒ๐˜€๐˜ ๐—บ๐—ผ๐—ฑ๐—ฒ ๐—ผ๐—ณ ๐—ฑ๐—ฒ๐—ฐ๐—ถ๐˜€๐—ถ๐—ผ๐—ป-๐—บ๐—ฎ๐—ธ๐—ถ๐—ป๐—ด!

Not All is Lost from My Biggest Failure

๐Ÿฅ Recently, I found this box of frisbees in my parent's basement, and it's what's left of my biggest failure โ€” a search engine #startup. ๐—™๐—ฎ๐—ถ๐—น๐˜‚๐—ฟ๐—ฒ ๐˜€๐—ผ๐˜‚๐—ป๐—ฑ๐˜€ ๐—ต๐—ฎ๐—ฟ๐˜€๐—ต, but we learn by trial and error, so every mistake is an opportunity for growth.

Featured image by: fauve othon on Unsplash

📊 One of the biggest lessons I learned was technical, and it had to do with the importance of #analytics and the ability to debug algorithms at their points of failure. It was then I realized that Machine Learning had a problem: after all, how do you debug ML models? That's how, in 2017, I first stumbled upon Interpretable ML / Explainable AI research. Fast forward to 2020, and I was writing a book about it! And I've spoken about this journey to San Francisco-based A.I. startup entrepreneurs and workers.

๐Ÿ’ช ๐ผ๐‘› ๐ถ๐‘œ๐‘›๐‘๐‘™๐‘ข๐‘ ๐‘–๐‘œ๐‘›: the frisbees may have been the only tangible items, but my failure left behind stories, ideas, lessons, and a brand new perspective โ€” that has only made me ๐˜€๐˜๐—ฟ๐—ผ๐—ป๐—ด๐—ฒ๐—ฟ! As for the frisbees, they will find a new home with goodwill.

Opinion: Resource Constraints Foster Creative Solutions

I learned to program on this computer; I was a child during the '80s 🤓. It had a 4.77 MHz CPU, 256 KB of RAM, a monochrome display, and no hard drive, so you had to be creative to overcome resource constraints, not to mention exercise patience!

Above picture by: s3freak

We are ๐˜€๐—ผ ๐˜€๐—ฝ๐—ผ๐—ถ๐—น๐—ฒ๐—ฑ these days! To put it in context, most smartphones ๐Ÿ“ฑ have over 16 thousand times the RAM and more storage than would have fit in a room in the 80s. Add that to cheap, limitless cloud storage. I am not complaining.. That is great! However, I wonder how much does resource constraints foster software innovation โ€” and optimal code.

Today, trillion-parameter deep learning 🤖 models are pushing the envelope. Still, it seems illogical that they represent the most efficient solution when compared to approaches grounded in, for instance, biology, a causal understanding of the world, or statistics. So before we usher in the age of quantum computing, I hope we hit some resource limitations that focus more energy on more creative and intuitive solutions, not to mention cost-effective ones.

What do you think? How much does an abundance of resources hinder or enable creative solutions?

Opinion: What Makes Us Care?

🇨🇷 Seven years ago, I had a fantastic 4-day journey trekking through the Costa Rican rainforest. On the first day, we had to cross a wild river in a metal basket hanging from a rusty rope. And I thought to myself, "What the hell have I gotten into?!"

Featured image by: Havardtl

๐Ÿ’ On that journey, I saw ๐—บ๐—ฎ๐—ป๐˜† ๐˜€๐—ฝ๐—ฒ๐—ฐ๐—ถ๐—ฒ๐˜€ of wildlife. I slept smelling the moss on the bark and wet ferns. And I woke up every morning to a majestic orchestra of birds, insects, monkeys, and frogs. It's also hard to realize the sheer scale of a rainforest when you are in it. On peaks, we could see the many green valleys we had crossed with Ceiba trees towering 17 stories high over the canopy!

🌎 We only have 36% of rainforests left; when I was born, it was well over 50%. Today is #WorldRainforestDay, and I thought I'd share a story of why I care. In #DataScience, we think facts & figures alone are convincing. But often it's the lived experience & emotions that come with them that make things matter to us. I don't regret crossing the river in the basket because the journey that followed was life-changing. If I was an environmentalist before because of the facts I knew, now I had more conviction than ever that #nature must be preserved for future generations!

Food Security & Climate Change

Today is ๐–๐จ๐ซ๐ฅ๐ ๐…๐จ๐จ๐ ๐’๐š๐Ÿ๐ž๐ญ๐ฒ ๐ƒ๐š๐ฒ. For me, it's a day of reflection.

🦠 After all, COVID-19 had a food safety-related genesis. Natural disasters and the clearing of land for urbanization and agriculture push wildlife closer to human settlements, which fuels pandemic risk.

🌽 Food safety is essential, no doubt, but it's intrinsically related to food security, and that is what worries me the most. We'll need to feed another 2 billion mouths by 2050 and double food production to that end. So, as a data scientist in agriculture, I'm inspired to make my tiny contribution to improving food security.

🌎 However, climate change can make our food production goals nearly impossible. Under a high-emission scenario, by 2050, a huge swath of the United States will suffer a sizable decline in crop yields, only partly offset by increases in other areas of the country. Other countries won't be that lucky, since they are far more vulnerable given their many land challenges: desertification, land degradation, climate change adaptation, undernourishment, biodiversity loss, groundwater stress, and water quality (see the IPCC for details). It's an existential threat to humanity, and we have only a few years to reverse this trajectory.

Above picture by: ProPublica and UK Met Office, Featured image by: Sven Lachmann from Pixabay

Book Review: Noise

When discussing human judgment and, by extension, algorithmic decisions, we are used to talking about bias, but what about noise?

Above picture by: Little, Brown Spark, Featured image by: Sophie Huiberts

🎯 Nobel laureate Daniel Kahneman and co-authors make a case for why we should pay close attention to it in their new book "Noise: A Flaw in Human Judgment". It has some compelling stories, with succinct illustrations, to underpin how widespread the problem is in business and government. For instance, I love the target illustration and the error decompositions.
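For readers who want the decomposition in symbols: the book's error equation says that the mean squared error of a set of judgments splits exactly into squared bias (the average error) plus noise (the variance of the judgments). A quick numerical check with made-up judgments (the true value, bias, and spread below are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0
# Hypothetical panel of judges: systematically ~5 too high (bias),
# scattered with standard deviation ~10 (noise)
judgments = true_value + 5 + rng.normal(0, 10, 100_000)

mse = np.mean((judgments - true_value) ** 2)
bias_sq = (judgments.mean() - true_value) ** 2  # squared bias
noise_var = judgments.var()                     # "noise" (variance)

# MSE decomposes exactly into squared bias + variance
assert abs(mse - (bias_sq + noise_var)) < 1e-6
```

One consequence the book stresses: because the terms are squared, reducing noise cuts overall error just as surely as reducing an equally sized bias.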

📢 The book covers group dynamics such as information cascades, social pressure, and group polarization as amplifiers of noise, with some cognitive #biases to boot. Lastly, it outlines noise mitigation strategies involving decision hygiene, decision observers, and noise audits, which were BY FAR the biggest takeaways for me.

😒 However, if you are already familiar with the topic, the book will likely disappoint (at least a little). It can feel repetitive without getting into enough depth, and its entanglement with bias means it keeps referring to concepts covered in "Thinking, Fast and Slow", as if it were some long-lost final chapter. I still enjoyed it, regardless.

Have you read it? Do you want to?