*Statistics
of Deadly Quarrels* was written by Lewis Fry Richardson
and published in 1960. The book is notable for both its findings and for being
one of the first examples of quantitative methods being applied to the realm of
international relations. Richardson, a meteorologist by trade, turned his
revolutionary and now widely used weather forecasting methods toward the
outbreak of interstate conflict, hoping to find predictive variables by
analyzing the years between 1809 and 1950. Although Richardson failed in this
regard, he made a shocking discovery: outbreaks of war mirror the occurrence
rates of rare events like meteor strikes and earthquakes, the category of
events known as “acts of God”.

The occurrences of these events and others, such as genetic mutations and customer arrivals, can be statistically modeled with Poisson distributions. The basic requirements of a Poisson process are that events occur independently of one another and that the rate of occurrence is fixed over the period being studied. That the outbreak of war follows a distribution meeting these assumptions raises interesting mathematical and philosophical questions that have yet to be resolved, and it simultaneously affirms and undermines the value of forecasting attempts within this realm.
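The distribution itself is simple to compute. As a minimal sketch (the rate of 0.65 outbreaks per year is a placeholder chosen purely for illustration):

```python
import math

def poisson_pmf(k: int, lam: float) -> float:
    """Probability of exactly k events in an interval, given a fixed
    average rate lam and independently occurring events."""
    return lam ** k * math.exp(-lam) / math.factorial(k)

# With an illustrative rate of 0.65 outbreaks per year, the chance
# that a year passes with no new wars at all:
print(round(poisson_pmf(0, 0.65), 2))  # 0.52
```

Note that the probabilities over all possible counts sum to one, which is what lets observed frequencies be compared against the model directly.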

I
first learned of Richardson’s work while reading the June edition of *Harper’s
Magazine* on an airplane. The article’s author, Gary Greenberg, went on to
describe Richardson as a visionary who imagined large rooms filled with
“computers” (in this case people) who would perform calculations on incoming
data in real-time. In a fitting testament to Richardson’s foresight, I just so
happened to be *en route* to D.C., where I’d be spending my summer
interning as a quantitative geopolitical analyst. The era of big data had
arrived, and the $200 billion industry built around it reflects the
now-widespread belief that any question can be answered with enough
observations and computational power.

Out of curiosity, I decided to pick up where Richardson left off and conduct the same analysis on interstate conflicts through the present day. Specifically, I wanted to compare the frequency of *n* occurrences per year against that expected under a Poisson distribution. Thankfully, the task is much easier today than it would have been 50 years ago. There would be no monotonous paging through encyclopedias or lengthy calculations by hand for me. After a relatively simple Google search, I was able to get the data I needed from the UCDP/PRIO Armed Conflict dataset, which provided me with well-coded observations from 1946 through 2009. (To avoid overlap and any resulting bias, I only looked at the years from 1952 onward.) And, 60 lines of code later, here are the results:

| wars started in a given year | count (observed) | count (expected) | proportion (observed) | proportion (expected) |
| --- | --- | --- | --- | --- |
| 0 | 30 | 30.65 | .52 | .53 |
| 1 | 21 | 19.55 | .36 | .35 |
| 2 | 5 | 6.24 | .09 | .11 |
| 3 | 2 | 1.33 | .03 | .02 |
To summarize the table: there were 30 years in which no new conflicts started, 21 years in which one conflict started, five years in which two conflicts started, and two years in which three conflicts started. As a comparison of the observed and expected columns shows, the distribution of actual conflict outbreaks mirrors a Poisson distribution. A chi-square goodness-of-fit test with Yates's continuity correction could not reject the Poisson model at the 5% significance level. It appears that Richardson's finding remains relevant as we enter the new millennium.
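For readers who want to check the arithmetic, the table can be reproduced in far fewer than 60 lines. This is a sketch of the calculation rather than my original script: the observed counts come straight from the table, the rate is the maximum-likelihood estimate (total wars divided by total years), and 5.991 is the 5% chi-square cutoff for two degrees of freedom (four categories, minus one for the fitted rate, minus one).

```python
import math

observed = [30, 21, 5, 2]               # years with 0, 1, 2, 3 new conflicts
n_years = sum(observed)                 # 58 years: 1952 through 2009
total_wars = sum(k * c for k, c in enumerate(observed))
lam = total_wars / n_years              # maximum-likelihood estimate of the rate

def poisson_pmf(k: int, lam: float) -> float:
    return lam ** k * math.exp(-lam) / math.factorial(k)

expected = [n_years * poisson_pmf(k, lam) for k in range(len(observed))]
print([round(e, 2) for e in expected])  # [30.65, 19.55, 6.24, 1.33]

# Chi-square statistic with Yates's continuity correction
chi2 = sum((abs(o - e) - 0.5) ** 2 / e for o, e in zip(observed, expected))
print(chi2 < 5.991)                     # True: the Poisson fit is not rejected
```

The expected counts match the table above, and the corrected statistic falls well below the critical value.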

View my code on GitHub.

#### Sources

- “Empty Statistics” by John Vos
- *Computing Science: Statistics of Deadly Quarrels* by Brian Hayes
- *Basic Business Statistics: Concepts and Applications (10th Edition)* by David Levine and Mark Berenson
- Yates’s corrected chi-square test
- “Safety in Numbers” by Gary Greenberg