I’ve been citing this piece from Kevin Drum for years (I thought it was from his *CalPundit* days, but the version I found tonight is from *The Washington Monthly*): Margin of Error.

> The idea of a “statistical tie” is based on the theory that (a) statistical results are credible only if they are at least 95% certain to be accurate, and (b) any lead less than the MOE is less than 95% certain.
>
> There are two problems with this: first, 95% is not some kind of magic cutoff point, and second, the idea that the MOE represents 95% certainty is wrong anyway. A poll’s MOE *does* represent a 95% confidence interval for each individual’s percentage, but it *doesn’t* represent a 95% confidence for the difference between the two, and that’s what we’re really interested in.

This is some wonky statistical stuff, but you have a responsibility to understand it if you are relying on polls to tell you what the state of this race – or any race – is. Numeracy is nearly as important as literacy.
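To make the point concrete, here’s a small sketch (the poll numbers are hypothetical, not from Drum’s piece): it computes the usual ±MOE for a single candidate’s share, then the confidence that the leader is actually ahead, using the standard normal approximation for the difference of two multinomial proportions.

```python
import math

def individual_moe(n, z=1.96):
    # Worst-case 95% MOE for one candidate's share (assumes p = 0.5).
    return z * math.sqrt(0.25 / n)

def prob_lead_is_real(p1, p2, n):
    # Standard error of the difference p1 - p2 under multinomial sampling:
    # Var(p1_hat - p2_hat) = (p1 + p2 - (p1 - p2)^2) / n
    se_diff = math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)
    z = (p1 - p2) / se_diff
    # Normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical poll: 1,000 respondents, A at 48%, B at 45%.
n, p_a, p_b = 1000, 0.48, 0.45
print(f"Reported MOE: +/- {individual_moe(n):.1%}")
print(f"P(A really leads B): {prob_lead_is_real(p_a, p_b, n):.0%}")
```

Under these assumptions, a 3-point lead sits just inside the reported ±3.1-point MOE, yet the leader is ahead with roughly 84% confidence – hardly a coin flip, and nothing like a “tie.”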

Part of being numerate is understanding your tools. This calculation applies to individual polls, where the MOE reflects sampling error, but not to poll-of-polls averages like Pollster or RCP, which apply their own internal rules and adjustments to margins of error. Those averages have their own set of problems.