I'm rather sympathetic to arguments about double standards. I've spent quite a bit of time on talk.politics.guns arguing with pro-gunners, and one thing I've noticed is the way many of them uncritically accept even the most unlikely pro-gun claim while subjecting pro-control claims to the most searching scrutiny imaginable. For example, many pro-gunners believe that the Japanese count many homicides as suicides, despite there being no evidence whatsoever supporting this claim. Meanwhile, they claim that the paper by Kellermann et al. that found an association between gun ownership and homicide should never have been published because it didn't control for any other factors (when in fact it controlled for dozens of other factors).
Anyway, Friedman's case that Teret has a double standard is based on Teret's sympathetic comments on a study by Wintemute et al. that found that criminal activity was associated with a preference for purchasing small, inexpensive handguns. Friedman argues that the Wintemute study is markedly inferior to Lott's work because:
However, things are not as clear cut as Friedman believes. Firstly, there is one important way in which Wintemute's study is superior--it works at the individual level rather than aggregating things into counties as Lott does. This is better since there is no reason to expect every part of a county to be the same. Secondly, one can well take issue with the reason he gives:
Nonetheless, while it is debatable whether Lott's paper is markedly better in some sense, it does not seem to be markedly worse. Hence it seems probable that Teret is operating a double standard.
We could perhaps find a better example of a double standard if we looked at a study with a design similar to Lott's. A study by Cummings et al. used a pooled time series design similar to Lott's to study the effect of laws that make gun owners criminally liable if someone is injured because a child gains unsupervised access to a gun. They found that the laws were associated with a 23% reduction in unintentional shooting deaths of children.
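To see what a figure like "a 23% reduction" means in rate terms, here is a minimal sketch. The counts and populations below are invented for illustration--they are not data from Cummings et al.--but the arithmetic of turning before-and-after death rates into a percentage reduction is the same kind a pooled time series comparison reports:

```python
# Hypothetical illustration only -- these counts are invented,
# not taken from Cummings et al.

def rate(deaths, population):
    """Deaths per 100,000 population."""
    return deaths / population * 100_000

# Invented figures for one state, before and after a
# safe-storage law takes effect.
before = rate(deaths=40, population=5_000_000)  # 0.8 per 100,000
after = rate(deaths=30, population=5_000_000)   # 0.6 per 100,000

rate_ratio = after / before            # 0.75
percent_reduction = (1 - rate_ratio) * 100
print(f"rate ratio: {rate_ratio:.2f}, reduction: {percent_reduction:.0f}%")
```

The actual study pools many states over many years and adjusts for trends, but the headline number is of this form: a rate ratio of 0.77 corresponds to the 23% reduction reported.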
Now Steve Milloy's Junk Science site criticizes this study. Here's what Milloy says:
This was an ecologic epidemiology study, meaning the conclusion is based on very "macro" comparisons of groups of people. The study involved no data about individuals, just groups. Traditionally, these studies are only useful for forming hypotheses for further testing, not irrefutable facts.
In particular, no data was collected on compliance with these laws and the relationship of compliance to the decrease in injuries. There may have been fewer unintentional firearm-related injuries in states with safe storage laws, but this study assumed compliance with the laws and assumed that compliance is responsible for the decrease in injuries. A big assumption considering the result.
The reported 23% decrease in injuries is a pretty weak result--probably beyond the capability of the ecologic type of study to reliably detect. Even in the better types of epidemiology studies (i.e., cohort and case-control), rate increases of less than 100% (and rate decreases of less than 50%) are very suspect.
So how much stock can be put in a weak result based on inadequate data?
Now this criticism applies equally to Lott's study, only more so, since the crime decreases Lott found were much smaller than 23%. (For the bit that reads ``assumed compliance with the laws'', read ``assumed frequent encounters between criminals and permit holders''.)
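Milloy's rule of thumb can be restated in rate-ratio terms, which makes the a fortiori point above mechanical. The sketch below is my own restatement, not Milloy's code; the 0.92 ratio is a hypothetical stand-in for a decrease smaller than 23%:

```python
# Milloy calls rate increases under 100% (ratio < 2.0) and rate
# decreases under 50% (ratio > 0.5) "very suspect", even in
# cohort and case-control studies.

def suspect_by_milloys_rule(rate_ratio):
    """True if the effect is too small to trust by Milloy's own test."""
    if rate_ratio >= 1:
        return rate_ratio < 2.0   # increase of less than 100%
    return rate_ratio > 0.5      # decrease of less than 50%

# Cummings et al.'s 23% decrease is a rate ratio of 0.77.
print(suspect_by_milloys_rule(0.77))  # True
# Any smaller decrease -- a ratio even closer to 1, as with the
# crime drops Lott reports -- fails the same test a fortiori.
print(suspect_by_milloys_rule(0.92))  # True
```

So by Milloy's stated standard, any result weaker than the one he condemned is condemned automatically.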
Furthermore, elsewhere on his site Milloy gives us six tips on how to spot junk science.
So what does Milloy say about Lott's study? Do you think he condemns it as ``a weak result based on inadequate data''? Does he inform visitors to his site that it is junk science? Follow this link to find out.