Here is an article about current methodological concerns in psychological research.
In this article, we accomplish two things. First, we show that despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process.
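The abstract's central claim is easy to see in a toy simulation. Below is a minimal sketch (my own illustration, not the authors' actual simulation code) of one "researcher degree of freedom" they discuss: peeking at the data repeatedly and stopping as soon as the result is significant. Both groups are drawn from the same distribution, so any "effect" found is a false positive. The sample sizes, peek schedule, and test are all assumptions chosen for simplicity; the data have known unit variance, so a plain z-test is valid.

```python
import math
import random

def z_test_p(xs, ys):
    """Two-sided z-test for equal means, assuming known unit variance."""
    nx, ny = len(xs), len(ys)
    z = (sum(xs) / nx - sum(ys) / ny) / math.sqrt(1 / nx + 1 / ny)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def simulate(n_sims=10_000, peeks=(20, 30, 40, 50), seed=1):
    rng = random.Random(seed)
    fixed_hits = flexible_hits = 0
    for _ in range(n_sims):
        # The null is true: both "conditions" come from the same distribution.
        xs = [rng.gauss(0, 1) for _ in range(peeks[-1])]
        ys = [rng.gauss(0, 1) for _ in range(peeks[-1])]
        # Honest analysis: one test at the pre-set final sample size.
        if z_test_p(xs, ys) < 0.05:
            fixed_hits += 1
        # Flexible analysis: peek after each batch, stop once p < .05.
        for n in peeks:
            if z_test_p(xs[:n], ys[:n]) < 0.05:
                flexible_hits += 1
                break
    return fixed_hits / n_sims, flexible_hits / n_sims

fixed, flexible = simulate()
print(f"fixed n:    {fixed:.3f}")    # stays near the nominal .05
print(f"with peeks: {flexible:.3f}") # well above .05
```

With just four looks at the data, the realized false-positive rate roughly doubles relative to the nominal 5% level, and the paper shows the inflation compounds further when several such flexibilities (extra dependent variables, optional covariates, dropping conditions) are combined.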
Click here for the full article.
When I was doing my Master's, my supervisor sat me down and told me that we don't 'prove' anything with our work. We don't seek truth, only probabilities. This came as a surprise to me, as I was completely new to research. The more I think about it, the truer it seems, and so it should be: truth is such an absolute word, and science should never stop questioning itself.
The next surprise was less pleasant. My other supervisor asked me to weave a story around my work of the last eight months and make it sound like I had always intended to find what I found. The truth was, I was just getting trained in various methodologies, using a topic that interested me as the context. Most days I hardly thought about the larger questions; I was too busy figuring out each piece of software and debugging. So writing it up as if I were a visionary felt like a lie. This still goes on, mind you, and I feel dishonest every time I rewrite my introduction to suit my results. Papers and theses don't leave much room to document the struggles and all the trains of thought we pursued to get to that point. And very few journals value null results, because they are considered failures, bad for our reputations and therefore for future funding. How many years of effort, and how much money, have gone into retrying things that no one else had reported as dead ends?
Since these two revelations I've encountered several more about academia that make me wonder what we are really a part of.
PHD Comics takes a more humorous angle on these issues, and several blogs deal with real stories of fraud and frustration in academia. Issues like open access, the importance of citations and impact factors, how publication quality suffers under the pressure to publish, and how the spirit of research can survive and stay true through all this need more discussion. I hope, in time, to develop this into a multi-author blog so we can hear from people of various academic backgrounds and levels of experience. If you're interested in contributing, I'd like to hear from you!