One way to judge whether a theory is any good is to ask what it predicts. During COVID, a large group of social scientists ran a forecasting tournament to find out: teams competed to predict near-term changes in societal outcomes like mood, polarization, political ideology, discrimination, and life satisfaction, using whatever theoretical apparatus they wanted.

I contributed to the effort as part of the Forecasting Collaborative, led by Igor Grossmann and colleagues. The headline result is sobering: across nearly every outcome, expert forecasts were not reliably better than simple statistical benchmarks, and were often worse (Nature Human Behaviour).
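To make "simple statistical benchmarks" concrete, here is a minimal sketch. It assumes benchmarks along the lines of a last-value (random walk) forecast and a historical-mean forecast, scored with mean absolute error; the tournament's actual benchmark set and scoring rule may differ, and all names and numbers below are hypothetical.

```python
import numpy as np

def naive_benchmarks(history, horizon):
    """Two simple benchmarks: repeat the last observed value (random walk)
    and repeat the historical mean. The benchmark choice here is an
    assumption, not the tournament's exact specification."""
    return {
        "random_walk": np.full(horizon, history[-1]),
        "historical_mean": np.full(horizon, history.mean()),
    }

def mae(forecast, actual):
    """Mean absolute error between a forecast and the realized series."""
    return float(np.abs(np.asarray(forecast) - np.asarray(actual)).mean())

# Hypothetical monthly values of some societal indicator (e.g., a mood index).
history = np.array([51.2, 50.8, 49.9, 50.5, 51.0, 50.3])
actual = np.array([50.1, 50.4, 49.8, 50.9, 51.2, 50.0])  # what actually happened
team = np.array([52.0, 52.6, 53.1, 53.7, 54.2, 54.8])    # a made-up expert forecast

scores = {name: mae(f, actual)
          for name, f in naive_benchmarks(history, len(actual)).items()}
scores["expert_team"] = mae(team, actual)
print(scores)
```

A team "adds value" only if its error beats both benchmarks. The tournament's finding is that this bar was surprisingly hard to clear.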

That finding fits uncomfortably well with the argument we make in Beyond Playing 20 Questions with Nature and in our work on common sense: if a field can't generate reliable predictions about the phenomena it studies, it's worth asking hard questions about how the field is cumulating knowledge in the first place. Forecasting tournaments are one of the cleaner tests we have: they are hard to game, easy to score, and directly tied to the kind of claims theories are supposed to support.