Atomic ideas from the recently launched book AI Snake Oil by award-winning researchers Arvind Narayanan and Sayash Kapoor. (BTW, we are the first globally to bring you a deep summary + audiobook of this book!)
Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere―and few things are surrounded by so much hype, misinformation, and misunderstanding.
By revealing AI's limits and real risks, AI Snake Oil will help you make better decisions about whether and how to use AI at work and at home.
In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor (who also run the newsletter of the same name) cut through the confusion to give you an essential understanding of how AI works and why it often doesn't, where it might be useful or harmful, and when you should suspect that companies are using AI hype to sell AI snake oil―products that don't work, and probably never will.
While acknowledging the potential of some AI, such as ChatGPT, AI Snake Oil uncovers rampant misleading claims about the capabilities of AI and describes the serious harms AI is already causing in how it’s being built, marketed, and used in areas such as education, medicine, hiring, banking, insurance, and criminal justice.
The Double-Edged Sword of Predictive AI
While generative AI shows promise, predictive AI often falls short of its claims. Companies tout the ability to predict outcomes like job performance or criminal behavior, but evidence suggests these tools are frequently inaccurate and can exacerbate inequalities. For instance, a healthcare AI tool meant to predict patient needs actually reinforced racial biases in care.
The authors argue that many predictive AI applications are "snake oil": products that don't work as advertised.
The Need for AI Literacy
The book aims to provide readers with the tools to critically evaluate AI claims and identify "snake oil." The authors argue that understanding AI is crucial for navigating its growing influence in society.
"We think most knowledge industries can benefit from chatbots in some way. We use them ourselves for research assistance, for tasks ranging from mundane ones such as formatting citations correctly, to things we wouldn't otherwise be able to do such as understanding a jargon-filled paper in a research area we aren't familiar with."
How Predictive AI Goes Wrong
The False Promise of Predictive Accuracy
Many companies claim their predictive AI tools can accurately forecast outcomes like job performance or criminal behavior. However, these claims often fall apart under scrutiny.
The authors cite examples like COMPAS, a tool used in criminal justice that claims to predict recidivism but performs only slightly better than random guessing. They argue that the complexity of human behavior and social contexts makes accurate prediction extremely difficult, if not impossible, in many cases.
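To see what "slightly better than random guessing" looks like in practice, here is a small illustrative Python sketch. The base rate, risk scores, and labels are entirely synthetic (this is not COMPAS data or the authors' analysis); the point is simply how one would check a risk tool against trivial baselines.

```python
# Purely illustrative: synthetic labels and a weakly informative risk score,
# evaluated against the two baselines any risk tool must beat.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(42)
n = 5_000

reoffended = rng.random(n) < 0.45                      # hypothetical 45% base rate
# A 1-10 risk score that is only weakly related to the outcome.
risk_score = np.clip(np.round(rng.normal(5 + 0.8 * reoffended, 2.5, n)), 1, 10)
predicted_high_risk = risk_score >= 6                  # a typical cutoff

print("accuracy:          ", round(accuracy_score(reoffended, predicted_high_risk), 3))
print("majority baseline: ", round(max(reoffended.mean(), 1 - reoffended.mean()), 3))
print("AUC:               ", round(roc_auc_score(reoffended, risk_score), 3))
# Accuracy barely above the majority-class baseline and an AUC not far from 0.5
# are the signatures of a predictor that is only slightly better than chance.
```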
The Dangers of Automated Decision-Making
When predictive AI is used to make consequential decisions about people's lives, the risks of harm increase dramatically. The authors describe cases where AI systems have denied healthcare, incorrectly flagged individuals for welfare fraud, or unfairly assessed job candidates.
"Predictive AI is quickly gaining in popularity. Hospitals, employers, insurance providers, and many other types of organizations use it. A major selling point is that it allows them to reuse existing datasets that have already been collected for other purposes, such as for bureaucratic reasons and record keeping, to make automated decisions."
Reinforcing and Amplifying Biases
Predictive AI often perpetuates and exacerbates existing societal biases. The authors discuss how these systems can disproportionately harm marginalized groups.
For example, a healthcare AI tool meant to identify patients needing extra care actually recommended lower levels of care for Black patients compared to white patients with similar health needs. This occurred because the AI was trained on historical data that reflected existing disparities in healthcare spending.
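The mechanism is worth pausing on. Below is a minimal Python sketch with synthetic data (hypothetical groups and numbers, not the actual study or the authors' code) showing how training on spending, a proxy for health need, lets a historical disparity flow straight into the resulting risk scores.

```python
# Purely illustrative sketch of the proxy-label problem: a model trained to
# predict past spending under-scores patients whose care was historically
# under-resourced, even when their underlying health needs are identical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

# True health need is distributed identically in both hypothetical groups.
need = rng.normal(50, 10, n)
group = rng.integers(0, 2, n)                       # 0 = group A, 1 = group B

# Group B historically received ~30% less care for the same level of need
# (a stand-in for access barriers), so its utilization and spending are lower.
visits = need * np.where(group == 1, 0.07, 0.10) + rng.normal(0, 0.5, n)
spending = 1_000 * visits + rng.normal(0, 500, n)   # spending tracks utilization

# The deployed model predicts spending (the available proxy), not need itself.
X = np.column_stack([need + rng.normal(0, 2, n), visits])
model = LinearRegression().fit(X, spending)

# Flag the top 10% of predicted spenders for an extra-care program.
scores = model.predict(X)
flagged = scores >= np.quantile(scores, 0.90)

# Among equally high-need patients, group B is flagged far less often,
# because the proxy label has absorbed the historical disparity.
for g, label in [(0, "group A"), (1, "group B")]:
    high_need = (group == g) & (need > 65)
    print(f"{label}: {flagged[high_need].mean():.1%} of high-need patients flagged")
```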
The Illusion of Objectivity
Many organizations adopt predictive AI with the belief that it will make decision-making more objective and fair. However, the authors argue that this is often an illusion. They explain that AI systems inherit the biases present in their training data and in the societies that produce that data.
Additionally, the opacity of many AI systems can make it difficult to identify and address these biases. The authors also caution against "automation bias": the tendency to over-rely on automated systems even when they make errors.
Why Can't AI Predict the Future?
The Limits of Computational Prediction
Despite advances in computing power and data collection, accurately predicting complex social outcomes remains elusive.
The authors explore historical attempts at prediction, from weather forecasting to social simulations, highlighting the inherent challenges. They argue that while some phenomena can be predicted with reasonable accuracy, many social and individual outcomes are fundamentally unpredictable due to their complexity and the role of chance events.
The Fragile Families Challenge: A Case Study in Prediction Failure
The authors discuss a large-scale study called the Fragile Families Challenge, where researchers attempted to predict children's life outcomes using a vast amount of data. Despite having access to thousands of data points and employing advanced AI techniques, the predictions were only slightly better than random guessing.
"...none of the models performed very well—the best models were only slightly better than a coin flip. And complex AI models showed no substantial improvement compared to the baseline model consisting of just four features."
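The pattern behind that quote is easy to reproduce on toy data. The sketch below uses purely synthetic data (not the Fragile Families data) and ordinary scikit-learn estimators: when the outcome is dominated by factors the features don't capture, a flexible model with hundreds of inputs barely improves on a small linear baseline.

```python
# Purely illustrative: a noisy outcome where only 4 of 500 features carry signal.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n, p = 4_000, 500

X = rng.normal(size=(n, p))
# Four weak predictors; everything else about the outcome is noise, standing in
# for chance events and circumstances that no dataset records.
signal = X[:, :4] @ np.array([0.3, 0.25, 0.2, 0.15])
y = signal + rng.normal(scale=1.0, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

baseline = LinearRegression().fit(X_tr[:, :4], y_tr)            # 4-feature baseline
complex_model = HistGradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)

print("4-feature baseline R^2:", round(r2_score(y_te, baseline.predict(X_te[:, :4])), 3))
print("complex model R^2:     ", round(r2_score(y_te, complex_model.predict(X_te)), 3))
# Both land at a low R^2: the binding constraint is irreducible noise in the
# outcome, not model capacity, so the extra 496 features buy almost nothing.
```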
The Role of Randomness and Complexity in Human Life
The duo emphasizes how random events and complex interactions can dramatically shape individual lives in ways that are impossible to predict.
The authors argue that many life outcomes are influenced by small initial advantages that compound over time, as well as unpredictable "shocks" like accidents or unexpected opportunities. They suggest that this fundamental unpredictability challenges the very premise of many predictive AI applications.
The Challenges of Predicting Aggregate Outcomes
While predicting individual outcomes is extremely difficult, the authors also explore the challenges of predicting aggregate social phenomena like economic trends or disease outbreaks. They discuss examples like the COVID-19 pandemic, where even short-term predictions were highly unreliable due to the complex interplay of biological, social, and political factors. The authors argue that many predictive failures stem from underestimating the role of rare, high-impact events that can dramatically alter the course of social systems.
Debunking AI Doomsday Scenarios
The authors challenge popular narratives about the existential risks posed by advanced AI. They argue that many of these scenarios, such as the idea of a superintelligent AI taking over the world, are based on flawed assumptions and misunderstandings of AI technology.
"We think AGI is a long-term prospect, and that society already has the tools to address its risks calmly. We shouldn't let the bugbear of existential risk distract us from the more immediate harms of AI snake oil."
The Ladder of Generality
Instead of viewing AI development as a sudden leap to artificial general intelligence (AGI), the authors propose the concept of a "ladder of generality."
This framework describes AI progress as a series of incremental steps, each increasing the flexibility and capability of AI systems. They argue that this perspective provides a more realistic and nuanced understanding of AI development, countering alarmist narratives about exponential or runaway AI growth.