On Double Standards

I'm rather sympathetic to arguments about double standards. I've spent quite a bit of time on talk.politics.guns arguing with pro-gunners, and one thing that I've noticed is the way that many of them uncritically accept even the most unlikely pro-gun claim while subjecting pro-control claims to the most searching scrutiny imaginable. For example, many pro-gunners believe that the Japanese count many homicides as suicides, despite there being no evidence whatsoever supporting this claim. Meanwhile, they claim that the paper by Kellermann et al [19] that found an association between gun ownership and homicide should never have been published because it didn't control for any other factors (when in fact it controlled for dozens of other factors).

Anyway, Friedman's case that Teret has a double standard is based on Teret's sympathetic comments on a study by Wintemute et al [] that found that criminal activity was associated with a preference for the purchase of small, inexpensive handguns. Friedman argues that the Wintemute study is markedly inferior to Lott's work because:

  1. it fails to control for an obvious factor like poverty;
  2. it is statistically primitive;
  3. it is obviously intended for propagandistic purposes; and
  4. it takes much less care to control for relevant variables, to check results by rerunning the regressions under a variety of different assumptions, to use all available data, and to report potential problems.

However, things are not as clear cut as Friedman believes. Firstly, there is one important way that Wintemute's study is superior--it works at the level of individuals rather than aggregating things into counties as Lott does. This is better since there is no reason to expect every part of a county to be the same. Secondly, one can well argue with each of the reasons he gives:

  1. While they could not directly control for poverty, they cite studies showing that controlling for age and ethnicity does a good job of controlling for poverty.
  2. As far as I can tell, they used the appropriate statistical technique for their data (relative risks estimated by the Mantel-Haenszel method).
  3. If the intent of the Wintemute study is propaganda, then so is the intent of Lott's study. Certainly Lott's study has been used for propaganda far more than Wintemute's.
  4. Wintemute's data set was more limited, but it seems a little unfair to criticize them for not controlling for things for which they could not obtain data. The sample size seems sufficiently large--a larger sample would have increased the cost of gathering the data for very little return. Wintemute et al devote about 30% of the text of their paper to discussing potential problems.
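For readers unfamiliar with the Mantel-Haenszel method mentioned in point 2, here is a toy sketch of how it pools relative risks across strata (such as age or ethnicity groups), so that a comparison is only made between like and like. The numbers below are invented for illustration and have nothing to do with Wintemute's actual data.

```python
def mantel_haenszel_rr(strata):
    """Mantel-Haenszel pooled relative risk.

    Each stratum is a tuple:
    (exposed_cases, exposed_total, unexposed_cases, unexposed_total).
    """
    num = den = 0.0
    for a, n1, c, n0 in strata:
        t = n1 + n0
        num += a * n0 / t  # exposed cases, weighted by unexposed share
        den += c * n1 / t  # unexposed cases, weighted by exposed share
    return num / den

# Two invented strata, each with a within-stratum relative risk of 2:
strata = [
    (10, 100, 5, 100),  # stratum 1: risk 10% vs 5%
    (6, 100, 3, 100),   # stratum 2: risk 6% vs 3%
]
pooled = mantel_haenszel_rr(strata)  # recovers the common relative risk, 2.0
```

The point of the weighting is that each stratum contributes to the pooled estimate in proportion to its size, so a confounder that varies across strata cannot masquerade as an exposure effect.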

Nonetheless, although it is debatable whether Lott's paper is markedly better in some respects, it does not seem to be true that Wintemute's is markedly worse. Hence it seems improbable that Teret is operating a double standard.

We could perhaps find a better example of a double standard if we looked at a study that had a similar design to Lott's. A study by Cummings et al [] used a pooled time series design similar to Lott's to study the effect of laws that make gun owners criminally liable if someone is injured because a child gains unsupervised access to a gun. They found that the laws were associated with a 23% reduction in unintentional shooting deaths of children.
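To make the comparison concrete, here is a minimal sketch of the pooled time-series design both studies share: a panel of states observed over several years, with dummy variables absorbing state and year effects and a single dummy for whether the law is in force. The panel below is entirely synthetic (the states, years, and effect size are invented for illustration); it is not Cummings's or Lott's actual specification.

```python
import numpy as np

def law_effect(outcome, law, state_ids, year_ids):
    """OLS estimate of the law dummy's coefficient, with state and
    year fixed effects (one year dummy dropped to avoid collinearity)."""
    states = sorted(set(state_ids))
    years = sorted(set(year_ids))
    cols = [np.asarray(law, dtype=float)]  # coefficient of interest
    cols += [np.array([1.0 if s == st else 0.0 for s in state_ids])
             for st in states]
    cols += [np.array([1.0 if y == yr else 0.0 for y in year_ids])
             for yr in years[1:]]
    X = np.column_stack(cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(outcome, float), rcond=None)
    return beta[0]

# Synthetic panel: 3 states x 4 years, true law effect of -0.23.
# State 0 adopts the law in year 2, state 1 in year 3, state 2 never.
state_eff = {0: 1.0, 1: 2.0, 2: 3.0}
year_eff = {0: 0.0, 1: 0.1, 2: 0.2, 3: 0.3}
adopt = {0: 2, 1: 3, 2: None}
rows = [(s, y) for s in range(3) for y in range(4)]
law = [1 if adopt[s] is not None and y >= adopt[s] else 0 for s, y in rows]
outcome = [state_eff[s] + year_eff[y] - 0.23 * l
           for (s, y), l in zip(rows, law)]
est = law_effect(outcome, law,
                 [s for s, _ in rows], [y for _, y in rows])
```

Because the law dummy switches on in different states in different years, its coefficient is identified separately from the state and year effects--which is exactly why Milloy's complaint about this design, quoted below, applies to Lott's study just as much as to Cummings's.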

Now Steve Milloy's Junk Science site criticizes this study. Here's what Milloy says:

  This was an ecologic epidemiology study, meaning the conclusion is based on very "macro" comparisons of groups of people. The study involved no data about individuals, just groups. Traditionally, these studies are only useful for forming hypotheses for further testing, not irrefutable facts.

  In particular, no data was collected on compliance with these laws and the relationship of compliance to the decrease in injuries. There may have been fewer unintentional firearm-related injuries in states with safe storage laws, but this study assumed compliance with the laws and assumed that compliance is responsible for the decrease in injuries. A big assumption considering the result.

  The reported 23% decrease in injuries is a pretty weak result--probably beyond the capability of the ecologic type of study to reliably detect. Even in the better types of epidemiology studies (i.e., cohort and case-control), rate increases of less than 100% (and rate decreases of less than 50%) are very suspect.

  So how much stock can be put in a weak result based on inadequate data?

Now this criticism applies equally to Lott's study, only more so, since the crime decreases found by Lott were much less than 23%. (For the bit that reads ``assumed compliance with the laws'' you need to read ``assumed frequent encounters between criminals and permit holders''.)

Furthermore, elsewhere on his site Milloy gives us six tips on how to spot junk science.

Lott gets five out of six on Milloy's junk science meter.

So what does Milloy say about Lott's study? Do you think he condemns it as ``a weak result based on inadequate data''? Does he inform visitors to his site that it is junk science? Follow this link to find out.

Tim Lambert