There are a number of statistical principles that are perhaps more honored in the breach than in the observance. For fun I am going to name a few, and show why they are not always the “precision surgical knives of thought” one would hope for (working more like large hammers).
The litany of complaints
A few things that statistical tyros hope are inviolate laws (which would allow them to avoid additional reading, thinking, and experiments) include:
- “You can only use unbiased estimators.” That would be nice, but it would eliminate a lot of lower-variance and fully correct Bayesian methods (example). Unbiased estimators are important, and essential when you intend to aggregate or “roll up” estimates (example), but they are not the only possible estimators.
- “You can only use admissible estimators.” This is essentially the Bayesian revenge round for the last rule: it roughly eliminates all non-Bayesian methods (see “complete class theorems”). Among other things you lose ordinary least squares, as the fairly perverse James–Stein estimator shows OLS to be inadmissible (in three or more dimensions).
- “You can only use deterministic estimators.” This loses any sort of randomized algorithm, non-unique optimization, or sampling method. No MCMC estimators, no numerical integration, and no Gibbs sampling. In particular many permutation tests have to choose between being fully deterministic and statistically efficient (averaging over all possible permutations), or achieving computational efficiency (sampling or using additional asymptotic estimates).
- “You can only use completely statistically efficient estimators.” That is interesting, but it would eliminate a number of fascinating estimators including Wald’s paired test procedure (which is statistically inefficient as it uses an observed ordering instead of averaging over all possible orders).
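To make the first complaint concrete: a biased estimator can have strictly smaller mean squared error than the unbiased one. A standard textbook instance (not from the original post, a minimal sketch assuming normal data): dividing the sum of squared deviations by n+1 instead of the unbiased n-1 shrinks the variance estimate, accepts a little bias, and wins on MSE. All names here are illustrative.

```python
# Sketch: a biased variance estimator beating the unbiased one on MSE.
# For i.i.d. normal data, dividing the centered sum of squares by n+1
# (instead of the unbiased n-1) minimizes mean squared error.
import numpy as np

rng = np.random.default_rng(0)
n, trials, true_var = 10, 100_000, 1.0
samples = rng.normal(0.0, np.sqrt(true_var), size=(trials, n))

# Centered sum of squares for each simulated sample.
ss = np.sum((samples - samples.mean(axis=1, keepdims=True)) ** 2, axis=1)

unbiased = ss / (n - 1)  # classic unbiased estimator
biased = ss / (n + 1)    # shrunk toward zero: biased, but lower variance

mse_unbiased = np.mean((unbiased - true_var) ** 2)
mse_biased = np.mean((biased - true_var) ** 2)
print(mse_unbiased, mse_biased)  # the biased estimator has smaller MSE
```

The theoretical MSEs here are 2σ⁴/(n-1) versus 2σ⁴/(n+1), so the gap is not a simulation artifact.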
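The James–Stein result mentioned in the admissibility complaint is easy to check by simulation. A minimal sketch (illustrative setup, not from the original post): estimate a mean vector in ten dimensions from one noisy observation; the James–Stein shrinkage estimator beats the plain maximum-likelihood/least-squares estimate on total squared error.

```python
# Sketch: James-Stein shrinkage dominating the MLE (here: the raw
# observation) for a normal mean in p >= 3 dimensions.
import numpy as np

rng = np.random.default_rng(1)
p, trials = 10, 50_000
theta = rng.normal(0.0, 1.0, size=p)       # fixed unknown mean vector
X = theta + rng.normal(0.0, 1.0, size=(trials, p))  # X ~ N(theta, I)

# James-Stein: shrink each observation toward the origin by a
# data-dependent factor 1 - (p - 2) / ||X||^2.
norms = np.sum(X ** 2, axis=1, keepdims=True)
js = (1.0 - (p - 2) / norms) * X

risk_mle = np.mean(np.sum((X - theta) ** 2, axis=1))   # approximately p
risk_js = np.mean(np.sum((js - theta) ** 2, axis=1))   # strictly smaller
print(risk_mle, risk_js)
```

The MLE's risk is exactly p; James–Stein's is uniformly smaller for p ≥ 3, which is what makes the plain estimator inadmissible.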
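The permutation-test trade-off in the determinism complaint can be shown directly. A small sketch with made-up data (all values and names illustrative): the exact test enumerates every relabeling and is fully deterministic; the sampled version is a randomized approximation that scales to group sizes where enumeration is hopeless.

```python
# Sketch: exact (deterministic) vs. sampled (randomized) permutation
# test for a difference in group means. Data is illustrative.
import itertools
import numpy as np

a = np.array([1.2, 0.8, 1.5, 1.1])
b = np.array([0.3, 0.5, 0.9, 0.2])
pooled = np.concatenate([a, b])
n = len(a)
observed = a.mean() - b.mean()

# Exact: enumerate all C(8, 4) = 70 relabelings. Deterministic, but
# the count explodes combinatorially for larger groups.
stats = [pooled[list(c)].mean() - np.delete(pooled, list(c)).mean()
         for c in itertools.combinations(range(len(pooled)), n)]
p_exact = np.mean(np.array(stats) >= observed)

# Sampled: draw random permutations instead. Cheap, but the p-value
# is now itself a random (Monte Carlo) estimate.
rng = np.random.default_rng(2)
reps = 2_000
hits = sum((lambda s: s[:n].mean() - s[n:].mean() >= observed)
           (rng.permutation(pooled)) for _ in range(reps))
p_sampled = hits / reps
print(p_exact, p_sampled)
```

The exact p-value is reproducible run after run; the sampled one fluctuates around it with Monte Carlo error shrinking like 1/sqrt(reps).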
Really you don’t want to give up any of the above properties if you do not have to (i.e., there is no reason to be sloppy or “leave money on the table”). But it is pure gamesmanship (or statsmanship) to bring these complaints out before looking at the problem, data, and actual methodology.
Most significant techniques involve trade-offs and don’t have the luxury of obeying every possible “a priori obvious law” simultaneously.
Many of the above complaints come up in the unending Bayes/Frequentist wars.
In this light: one of the statistics authors I follow had an interesting comment I’d love to find again (I lost the reference). Roughly the comment implied: while Frequentist confidence intervals can be correctly applied in more situations than Bayesian credible intervals can, the Frequentist analysis is only answering a useful question in the situations where the Bayesian credible interval analysis could also be correctly applied.
I like the above sentiment and have some suspected authors/bloggers in mind, but I don’t want to mis-attribute this thought. Anyone remember a link?