A reader who prefers to remain anonymous writes:
One of the few areas where the pro- and anti-gun sides often agree is the presumption that mental health remedies can somehow reduce the deaths and injuries that result from firearms, especially mass shootings (events usually involving large numbers of victims, though the term is sometimes applied to as few as three). Both sides of the gun rights debate presume that at least some answers lie in “improved mental health funding and treatment.”
Much of this presumption rests on research published in psychology journals and scholarly papers. But what makes anyone think these studies and papers present valid analysis and not merely opinion?
In 2015, Nature magazine published an article reporting that the results of over half of published psychology studies could not be reproduced. Reproducibility is a hallmark of scientific inquiry: if a proclaimed discovery cannot be replicated, skepticism about the validity of the “discovery” rises dramatically.
In the biggest project of its kind, Brian Nosek, a social psychologist and head of the Center for Open Science in Charlottesville, Virginia, and 269 co-authors repeated work reported in 98 original papers from three psychology journals to see whether they independently came up with the same results.
The studies they took on ranged from whether expressing insecurities perpetuates them to differences in how children and adults respond to fear stimuli, to effective ways to teach arithmetic.
Among the findings of the reproducibility study are the following:
- Only 39 of 100 replication attempts were judged successful
- 97% of the original studies reported a statistically significant effect
- Only 36% of the replications found a significant effect
- The average effect size in the replications was about half that reported in the original studies
The point, says Nosek, is not to critique individual papers but to gauge just how much bias drives publication in psychology. For instance, boring but accurate studies may never get published, or researchers may achieve intriguing results less by documenting true effects than by hitting the statistical jackpot: finding a significant result by sheer luck, or trying various analytical methods until something pans out.
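The “statistical jackpot” problem is easy to see with a little arithmetic. If a researcher tries several independent analyses of pure noise, each with the conventional 5% false-positive rate, the odds that at least one comes out “significant” climb quickly. The following sketch (an illustration of the general multiple-comparisons point, not a reconstruction of any study discussed above) estimates those odds by simulation; the function name and trial counts are just illustrative choices:

```python
import random

random.seed(0)

def chance_of_false_positive(num_analyses, alpha=0.05, trials=10_000):
    """Estimate the probability that at least one of `num_analyses`
    independent analyses of pure noise is 'significant' at level alpha.

    Analytically this is 1 - (1 - alpha) ** num_analyses; the simulation
    just makes the mechanism concrete.
    """
    hits = 0
    for _ in range(trials):
        # Each analysis of noise crosses the significance threshold
        # with probability alpha, independently of the others.
        if any(random.random() < alpha for _ in range(num_analyses)):
            hits += 1
    return hits / trials

for k in (1, 5, 20):
    print(f"{k:2d} analyses -> ~{chance_of_false_positive(k):.2f} chance of a 'finding'")
```

With one analysis the false-positive rate stays near 5%, but with twenty attempted analyses the chance of stumbling onto at least one publishable-looking result exceeds 60%, which is the mechanism Nosek's remark points at.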
The report offers some very interesting observations about the reliability of such research. Given how questionable that research can be, we should be very careful about signing on to anti-gun activists’ proposals for so-called red flag laws or for mental health screening and testing requirements for new or continued gun ownership.