Expert Political Judgment: How Good Is It? How Can We Know?

by Philip E. Tetlock

The book attempts to understand why experts’ judgments so often fail to predict future events and how we can evaluate the accuracy of their forecasts. It argues that expertise is limited but can be improved through constant practice and reflection. Tetlock distinguishes three types of expert judgment: recognition-primed decisions (RPD), which evaluate situations by weighing known facts together with emotions; ambiguity-tolerant decisions (ATD), which take unknown factors into account and demand greater cognitive effort; and intuitive probability estimations (IPE), which make predictions based on probability theory.

Rather than relying solely on experts for advice on highly volatile issues such as politics and international affairs, Tetlock argues that we should look beyond them and draw on multiple perspectives from a variety of sources. He calls this approach a “systems-laden view”: decisions are made by integrating the facts and gut feelings drawn from experts’ experience with viewpoints gathered from many different sources.

Tetlock then examines the effects of two psychological processes, overconfidence and over-reactivity, on expert judgment. He contends that overconfidence can contribute to short-term success but lead to long-term failure, while over-reactivity can produce ill-suited decisions. Through illuminating case studies, he shows that people who trust their judgment too much become victims of the limits of their own understanding, a phenomenon he calls hubris.
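Overconfidence of this kind can be made measurable: group an expert’s forecasts by stated confidence and compare each stated level with how often those events actually occurred. A minimal sketch (the function and data below are illustrative, not taken from the book):

```python
from collections import defaultdict

def calibration_table(forecasts, outcomes):
    """Group forecasts by stated probability and return, for each stated
    level, the observed frequency of the event at that level."""
    buckets = defaultdict(list)
    for p, o in zip(forecasts, outcomes):
        buckets[p].append(o)
    return {p: sum(os) / len(os) for p, os in sorted(buckets.items())}

# Hypothetical data: events an expert rated "80% likely" occurred only
# half the time, revealing overconfidence at that confidence level.
table = calibration_table([0.8, 0.8, 0.8, 0.8, 0.5, 0.5],
                          [1, 1, 0, 0, 1, 0])
# table[0.8] is 0.5, well below the stated 0.8
```

A well-calibrated forecaster’s observed frequencies track the stated probabilities; a gap like the one above is the quantitative signature of hubris.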

One solution to the problem of over-reactivity, he argues, is to increase the accuracy of expert judgment itself. To this end, Tetlock proposes the metaconstraint, a rule governing how experts interact with decisions, and the metamonitor, a process or tool that helps experts track their decisions. He also outlines five strategies for improving decision-making: seeking out additional information, gathering input from disinterested and knowledgeable sources, using careful experimental design, reframing problems in multiple ways, and applying basic probability theory to probability estimation.
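The last strategy, using basic probability theory, implies that forecasts should be stated as probabilities so they can be scored afterward. One common scoring rule (a sketch, not necessarily the book’s exact method) is the Brier score, the mean squared error between stated probabilities and 0/1 outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared distance between stated probabilities and what happened.
    0.0 is perfect; always saying 50% on binary events scores 0.25."""
    assert len(forecasts) == len(outcomes)
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical expert: four events each called "90% likely", three occurred.
score = brier_score([0.9, 0.9, 0.9, 0.9], [1, 1, 1, 0])
# score is 0.21 -- worse than the 0.25 baseline would suggest at first
# glance is safe, because the one miss is penalized heavily.
```

Lower scores are better, which gives a concrete target for the improvement strategies listed above: any change that shrinks an expert’s Brier score is making their probability estimates genuinely more accurate.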

Finally, Tetlock concludes that human judgment is fallible but can be improved with tools such as metaconstraints, metamonitors, and the strategies outlined above. The book breaks down a difficult problem, how to improve people’s political judgment, into simple practices anyone can use to make better predictions and more accurate forecasts. While the accuracy of expert political judgment is hard to assess, Tetlock’s book is a comprehensive and accessible guide to understanding and evaluating expert predictions.