By Kyle in CT

A funny thing happened on the way to the forum . . . more specifically the Harvard Injury Control Research Center website. I figured out why no one can seem to agree on “The Facts” about guns and gun violence; most of the analysis done on gun control (pro or anti) is probably wrong. Or at least runs afoul of Mark Twain’s feelings on math . . .

First of all, let’s do some housekeeping. In researching this topic, I ran across a few posts by Brad Kozak that went down a similar path Twain-wise; I guess it’s tough to beat the Great American Novelist when it comes to quotation.  More importantly though, this piece aims to do something a bit different than a normal statistics piece on TTAG. I’m not going to go after the most recent survey showing gun ownership is falling, or people’s changing views on gun control. Instead, I’m more interested in the questions of why and how; why is it that we have so many firearm statistics that seem so wildly off the mark? How can it be that different studies by apparently reputable organizations can come to such dramatically different conclusions?  Brad hinted at this when he wrote:

There is data on a wide variety of gun-related behaviors.  But any rational person has to consider these variables (as well as all the other variable[s]):

• What are the sources for the data?
• Are the data sources reliable and accurate?
• Who compiles the data?
• What methodology is used to interpret the data?
• Who’s doing the interpreting?
• What skin does the interpreter have in the game?
• Who’s reporting on the interpretation?
• What’s the bias (implicit or explicit) of the reporters?

Of interest in this piece are the first two points: where does the data come from, and are those sources reliable? So without further ado, let’s dive in.

In thinking about gun control, or really any societal issue, the perennial problem is data.  No matter whom you talk to, no matter what their politics are, statistics come out of left field and are bandied around to prove a point.  The fundamental question then becomes, is any of it right?

To start getting at the answer, let’s look at what the folks at Harvard Injury Control have to say about gun ownership. If we look at the first paper listed under the gun ownership heading, we find “The US gun stock: results from the 2004 national firearms survey”. If you scroll down the page, you find that their analysis is based upon the General Social Survey, which the authors describe this way: “The General Social Survey (GSS), a biannual survey of the US civilian population, has tracked household and personal firearm ownership over the past two and a half decades.” So, a good place to start, right? I mean, gun ownership is a yes or no question, so the results should be clear as long as the sample size is large enough (for further reading on this last statement, go go gadget Central Limit Theorem . . . ).
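For what it’s worth, the Central Limit Theorem gives a quick back-of-the-envelope check on that sample size: for a yes/no question asked of n truly random respondents, the 95% margin of error is roughly 1.96·√(p(1−p)/n). A minimal sketch — the 35% ownership rate below is just an illustrative guess, not a figure from the paper:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a yes/no proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: assume roughly 35% of households answer "yes"
moe = margin_of_error(0.35, 2770)
print(f"+/- {moe:.1%}")  # about +/- 1.8 percentage points
```

So 2,770 respondents is plenty for tight error bars. The catch, as the rest of this piece argues, is that the math only holds if the sample is actually random and the answers are actually true.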

Well, maybe. Whatever their subsequent analysis might be – and I haven’t checked that their 2770 randomly selected adults is actually a sufficient sample size – surveys have a huge, inherent flaw: they rely on people. Now the obvious problem with people is that they are biased in the colloquial sense of the term, meaning that they are predisposed to one result over another. TTAG has been down that road before; I don’t need to add to it. Much more interesting, at least to me, is how the nature of surveys and the nature of human beings can interact to push the results one way or the other, even without someone resting their thumb on the scale. In other words, I am interested in survey biases that would show up even if a totally neutral party conducted the survey. To get at that, you have to go outside the gun research domain, into something a little less contentious: health and healthcare . . .

For those of you who just came back from a laughter spit-take and the inevitable keyboard clean-up, welcome back.

Even a cursory search through Google Scholar brings up studies such as “Undereating and underrecording of habitual food intake in obese men: selective underreporting of fat intake”, which indicated that the subjects not only underreported their own energy intake by 37%, but also changed their actual eating habits between recording and non-recording weeks. This is hugely important because it shows that people are terrible at judging their own behavior. Whether it’s drinking, texting while driving, eating, or any other behavior, people are terrible at accurately evaluating themselves. This presents a huge problem because it means that if you simply ask people a question, the answers you get will often not represent the truth. It’s not in most people’s nature to self-reflect, and when you ask them to do so, the result is wonky data.

If you continue to let your fingers do the walking, you quickly come up with more complications, such as “Coverage Bias in Traditional Telephone Surveys of Low-Income and Young Adults”. This boring-sounding name belies a very interesting point: many people, particularly younger and lower-income individuals, aren’t using landlines any more, and most traditional telephone surveys use landline-only random-digit dialing. This is a problem because the basic assumption underlying the entire field of statistics is that your sample is representative of the larger population you are trying to describe (remember that Central Limit Theorem?). If this is not the case, every subsequent analysis you complete will inherit the bias present in your original data. Pretty bad, huh? Wait, it gets worse. If you keep digging you find papers like “Non-response bias in a lifestyle survey”, where the authors discuss the effect of non-responders on the data set, concluding that “Lifestyle questionnaire surveys need to include an assessment of the non-response bias.” What this means is that before you even get to question wording – before you even get to ask a question – your sample is likely contaminated. The sample you assumed from the outset was representative of the total population has already deviated from that assumption.
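Coverage bias is easy to demonstrate with a toy simulation. In the sketch below, all the numbers are invented for illustration: 60% of a hypothetical population has a landline, and gun ownership happens to differ between landline and cell-only households. A landline-only dialing frame then converges on the wrong answer no matter how many calls you make:

```python
import random

random.seed(42)

# Hypothetical population: 60% have landlines. Ownership rates are invented
# for illustration -- say 40% among landline households, 20% among cell-only.
def person():
    has_landline = random.random() < 0.60
    rate = 0.40 if has_landline else 0.20
    return has_landline, random.random() < rate

population = [person() for _ in range(200_000)]
true_rate = sum(owns for _, owns in population) / len(population)

# Landline-only random-digit dialing: cell-only people can never be sampled.
landline_frame = [owns for has_ll, owns in population if has_ll]
sample = random.sample(landline_frame, 5_000)
estimate = sum(sample) / len(sample)

print(f"true rate ~{true_rate:.1%}, landline-only estimate ~{estimate:.1%}")
```

Note that adding more respondents only shrinks the error bars around the wrong number; the bias lives in the sampling frame, not the sample size.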

Getting back to the land of the Crimson Veritas, how do these biases influence the gun research at the Harvard Injury Control Research Center?  Well, large swaths of their data are based upon the GSS, which according to the National Opinion Research Center collects data like this:

GSS questions include such items as national spending priorities, marijuana use, crime and punishment, race relations, quality of life, and confidence in institutions. Since 1988, the GSS has also collected data on sexual behavior including number of sex partners, frequency of intercourse, extramarital relationships, and sex with prostitutes.

The GSS is conducted as an in-person interview, but it will still be prone to non-responders (lots of people have better things to do) as well as inaccurate individual reporting. I mean honestly, how many people do you think are accurately reporting to some stranger that they’re getting some on the side? More germane to TTAG, how many people are going to tell a complete stranger the number, kind, and uses of the firearms in their home? People lie, sometimes for understandable reasons, and sometimes just for kicks. In high school, we had to fill out drug surveys so the school system could evaluate the prevalence of drug use. An appreciable number of my classmates claimed they had tried just about every drug known to mankind. How is a researcher to know what data is real and what is simply people messing with the survey? Well, they can’t, so they make an educated guess, throw out data points that don’t make sense, and hope they’re right. Or alternatively, they keep all the data, errors and all. Either way, you can get significant errors.
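A little arithmetic shows how badly a handful of pranksters can distort a rare-behavior estimate, and why the keep-or-trim decision matters. All the counts below are invented: 1,000 students, a true 2% prevalence, and 30 jokers who say yes to everything:

```python
# Hypothetical survey of 1,000 students on a behavior with a true 2% prevalence,
# plus 30 pranksters who answer "yes" to everything. All numbers are invented.
honest_yes = 20
pranksters = 30
n = 1_000

raw_rate = (honest_yes + pranksters) / n       # keep everything: 5.0%
trimmed_rate = honest_yes / (n - pranksters)   # researcher guesses who's joking: ~2.1%

print(f"raw: {raw_rate:.1%}, trimmed: {trimmed_rate:.1%}")
```

The trimmed estimate only lands near the truth here because the simulation *knows* who the pranksters are. A real researcher is guessing, and the raw-versus-trimmed gap is exactly the kind of unquantifiable error the article is describing.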

So what does this all mean for gun research?  Fortunately or unfortunately (depending on how you want to look at it) it means that in all likelihood, every survey you’ve ever read about guns (pro or anti) has had fundamental flaws and inaccuracies that cannot be accounted for and cannot be readily quantified.  In the space of the last few minutes, we’ve shown that:

  • People don’t do a good job of evaluating their own behavior
  • People’s behavior can and will change when they know they are being observed
  • Surveys rely on the assumption that a smaller sample is representative of a complete population (like the U.S.)
  • The method of acquiring survey data can significantly change the results of the survey, even setting aside question wording and order
  • People lie (duh)

At the most basic level, collecting data by asking people about sensitive or controversial topics – like gun ownership and gun policy – is a bad way to do business. The flaws are not just about biased interviewers or questions; they lie in the very process of surveying itself. Make a survey too long, contact the interviewees by phone versus in person versus by mail, time the survey to coincide with a news event, and you can get wildly different answers. More importantly, you can get wildly different answers without ever showing explicit bias in the question wording or question order. Statisticians try to account for these errors, but the deep dark secret of statistics is that corrections are, at best, educated guesses, and at worst, biased decisions designed to push the needle one way or the other.
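To see what an "educated guess" correction looks like in practice, here is a sketch of post-stratification weighting, one standard fix for an unbalanced sample. Every count, rate, and census share below is invented; the point is that the "corrected" answer depends entirely on which population shares the statistician assumes:

```python
# Post-stratification sketch. Suppose a survey over-samples one age group.
# All counts, rates, and census shares below are invented for illustration.
sample = {            # group: (respondents, "yes" answers)
    "under_40": (300, 60),   # 20% yes, under-represented
    "over_40":  (700, 280),  # 40% yes, over-represented
}

unweighted = sum(y for _, y in sample.values()) / sum(n for n, _ in sample.values())

def weighted_rate(census_shares):
    """Re-weight each group's observed rate by its assumed population share."""
    return sum(share * (sample[g][1] / sample[g][0])
               for g, share in census_shares.items())

# The "correction" moves with the assumed split:
print(f"unweighted:             {unweighted:.1%}")
print(f"weighted (50/50 split): {weighted_rate({'under_40': 0.5, 'over_40': 0.5}):.1%}")
print(f"weighted (60/40 split): {weighted_rate({'under_40': 0.6, 'over_40': 0.4}):.1%}")
```

And weighting can only rebalance along dimensions the researcher thought to measure; if non-responders differ in some unmeasured way, no weight fixes it.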

There is a solution though, even if it’s not all that satisfying: stick to physical things that can be counted. Look at homicides, suicides, population size, gun sales, crime rates, etc., and look at them in context. Case in point, Nick Leghorn’s recent post about MDA’s “gun facts”. Even “hard” numbers (crime stats can be messed with too, just ask the Brits), taken out of context, become meaningless. But these are third-party-verifiable things that happen in the real world, and if you are going to make gun policy, they are the only numbers that really matter. Statistics can be a useful tool, but the more processing you pile on, the more you obscure the nature of the underlying data. As the Great American Novelist once said, and the TTAG readership is well aware, “A lie can travel half way around the world while the truth is putting on its shoes.” I guess that means we should leave our boots on.

28 COMMENTS

  1. The question dictates the answer. Or at least frames it. Part of study design is defining the parameter being measured and studied. Often de novo. With that come inclusion and exclusion criteria. All these elements allow for vast numbers of permutations. And outcomes….

    • As mentioned, crime statistics can be very tricky as well. Definitions are important. As an example, a number that is often cited is that murders most often occur between people that know each other. What is left unsaid is that the number from the FBI is based on the cases where the relationship is known.

      The relationship is unknown in 44% of the cases!

      The unstated assumption is that the percentage of homicides of people who know each other is the same in the unknown cases as it is in the known cases.

      http://gunwatch.blogspot.com/2013/02/the-misleading-murderer-that-you-know.html

      • Note that “persons known to each other” includes criminals disputing over the distribution of the loot, prostitutes and johns disagreeing over payment, drug dealers and buyers disagreeing over payment, etc. Don’t let the FBI’s terminology lure you into thinking they’re talking about family members shooting each other.

  2. Here is a rather straightforward, bottom-line Gallup survey tracking yes or no on making handguns illegal for non-LEO civilians:

    http://www.volokh.com/2013/11/23/attitudes-towards-handgun-bans/

    along with some interesting comments.

    With clean results like this, it’s no wonder that the intellectually honest, unbiased gun-grabbers turn to obscure, ambiguous, distortable statistics such as “gun ownership” to support their point. Next comes tree rings and phases of the moon.

  3. I have never put ANY stock in polling data. I remember the first day of my college statistics class, my professor walked into the classroom and the first thing out of her mouth was “Numbers never lie, but statistics are pathological liars that change their story depending on who is compiling them.”

    • Sigh…this was written by Kyle in CT…very first line of the post. Dan was merely the one who posted it to the website, not the author.

  4. Like the 40% of guns bought without a NICS check – based on a survey held partially before the Brady Bill was signed into law.

  5. If someone called my phone and asked if I owned firearms, I would say NO regardless of the truth. I know who you CLAIM to be, but I don’t know who you really are and how the data is going to be used.

    • This. If someone calls asking me any information for a survey I either hang up or give them the most outrageous answers possible. For example, someone called running a survey on the preparedness of people when it comes to hurricanes. I told her I was fine since I had a years supply of gummy bears and condoms.

      • What is your plan for rotating your stock of gummy bears? Do you eat some every month and replace throughout the year? Or do you have like a big gummy bear party at the end of Hurricane season and blow out the stock all at once?

  6. If you go to their “self-defense gun use page”, all of the quotes are from the same biased author. I especially love this one:

    “7. Adolescents are far more likely to be threatened with a gun than to use one in self-defense.”

    Well no shit: one, they aren’t allowed to own handguns, period, and two, they aren’t allowed to buy them either. The only way to get a gun is from their parents, and that’s usually a hunting rifle. Can’t exactly carry that concealed. So yes, a relatively unarmed portion of the population will more likely be threatened with a gun than use one for self-defense.

    They also don’t seem to count brandishing as a self-defense use of firearms. Otherwise they would ask inmates if they had ever run away from someone with a gun. I’m sure that answer would be pretty high.

  7. This is exactly why I don’t rely on polls, or even most so-called “peer-reviewed studies” on any-damn-thing whatsoever.

    What we as the armed intelligentsia need to know — ALL we need to know, in fact — is that our natural, civil, and Constitutionally-affirmed and protected rights, including the right to keep AND bear arms, are not based on “need,” nor are they based on social utility.

    That’s it.

    End of story, really.

    • I’m certain the writers of that Amendment had no need for a poll to reach their conclusions and consensus.

  8. Why hasn’t anyone at TTAG taken a survey at a local gun show, asking the people coming in whether they would answer yes or no to an anonymous Gallup poll question about having guns in the house?

    Who here wants to bet that 8 in 10 or 9 in 10 would lie and say no?

    Do a series of such polls and use their own tactics against the anti-gun nuts!

    • Pretty sure that activity will attract insurance companies to gun shows. I swear, as gun owners we sure seem to have an absurd rate of maritime disasters amongst us.

  9. My B.S. in psychology has gotten me exactly that in this world: B.S.

    BUT, I can thank my college education for making me aware of exactly this kind of behind-the-scenes mischief that goes on whenever ANYONE says ANYTHING to me about ANYTHING.

    • Somewhat off topic, but to your comment, it’s amazing how many degrees are utterly worthless.

      I didn’t start college until I was almost 30, after the military, family, and a successful career in the electrical industry.

      Now, two years into college, the only thing more shocking than the number of useless classes and degrees is the number of young people who think completing those useless classes and degrees will make them money, and lots of it at that.

  10. Why are there so many B.S. statistics about guns floating around? The reason is simple. It’s the same reason there are so many cars on our roads: because there’s an entire industry dedicated to manufacturing them.

  11. Time for a POLL REVOLT!
    Refuse to participate in any poll (except the most important one on election day), rendering their already fatally flawed work even more useless.

  12. My Stats Prof would end each lecture with a “Damned Lies” example of how statistics are used to misrepresent. One was called “Stretch Your Point”:
    If you wanted to show a historical trend, say in prices, and show stability, you would stretch the horizontal and squash the vertical to flatten the curve. The same data could show visually extreme volatility by stretching the vertical and compressing the horizontal so that it would look like a giant saw blade.
    Can’t remember how to calculate Chi Squared, but his Damned Lies examples are still with me.

  13. “More germane to TTAG, how many people are going to tell a complete stranger the number, kind, and uses for the firearms in their home? People lie, sometimes for understandable reasons, and sometimes just for kicks.”

    Got that right. And some of us feel that it is our moral duty to LIE to pollsters, in hopes of putting them out of business. Just imagine a world without polls – where politicians tell you what they actually believe, rather than what they think you want to hear. Okay, that gets into Fantasyland.
