Drawing the wrong conclusions with way too much confidence

To test how cognitive psychologists make theoretical inferences, the UMass Amherst investigators sent the same data to 27 teams of researchers in the field. The conclusions those expert teams drew from identical data varied all the way from zero to 100 percent. One member of the research team described the result as "jaw-dropping": only about one third of the experts seemed to make sound inferences about what the data meant, while the other two thirds either drew misleading conclusions or did little better than pure guessing.
Rotello reports that about one-third of responders "seemed to be doing OK," one-third did a bit better than pure guessing, and one-third "made misleading conclusions." She adds, "Our jaws dropped when we saw that. How is it that researchers who have used these tools for years could come to completely different conclusions about what's going on?"
For the past decade, social scientists have been grappling with a 'replication crisis': the discovery that the findings of an alarming number of scientific studies are difficult or impossible to repeat. Efforts are underway to improve the reliability of findings, but cognitive psychology researchers say that not enough attention has been paid to the validity of theoretical inferences drawn from research findings.
Our results reveal substantial variability in experts' judgments on the very same data.
The investigators were mainly interested in each team's reported probability that memory strength differed between the two experimental conditions. What they found was "enormous variability between researchers in what they inferred from the same sets of data," Starns says. "For most data sets, the answers ranged from 0 to 100 percent across the 27 responders," he adds. "That was the most shocking."
https://www.sciencedaily.com/releases/2019/10/191010161540.htm