AI Snake Oil by Arvind Narayanan and Sayash Kapoor, published by Princeton University Press, is priced at $24.95. Recently, while researching oceans on other planets, I was looking for information on bodies of liquid other than water, such as the hydrocarbon seas thought to exist elsewhere in the solar system. A Google search for 'oceans in the solar system not water' returned an AI-generated suggestion: Enceladus, a moon of Saturn best known for its subsurface sea of salty water. The answer was frustratingly off the mark.
This minor incident exemplifies the kinds of AI shortcomings catalogued in 'AI Snake Oil'. The authors compile numerous cases in which AI tools fail, from predicting academic success, the likelihood of committing a crime, and disease risk, to much else. They also take up broader problems tied to AI, including misinformation, the unauthorized use of images, false claims of copyright violation, deepfakes, privacy violations, and the reinforcement of social inequality. The book concludes that how humans misuse AI is a greater concern than anything AI does on its own.
Even as the technology advances rapidly, the authors aim to help readers distinguish AI that works from deceptive 'snake oil' AI. Narayanan, a Princeton computer scientist, and Kapoor, a Ph.D. student at Princeton, began collaborating after Narayanan's 2019 talk on recognizing AI snake oil went viral. They are especially critical of predictive AI, arguing that it has a poor track record at forecasting human behavior, and they contend that AI cannot fix social media content moderation because it struggles with context and nuance.
The authors acknowledge that generative AI can be used intelligently, but they criticize it for having no means of verifying truth and warn that overreliance on it can erode critical thinking. They call for stronger societal oversight of the tech industry and better regulation of AI.
This book is essential reading for policymakers, AI users, and general readers alike, underscoring how pervasive AI has become and why we should engage with it cautiously.