Whilst knitting (so I make no claim to have comprehensive notes), I watched a great talk on YouTube by the guy who wrote this paper:
Why most published research findings are false
in PLoS Med (2005)
– Hedge fund managers don’t trust science: how do we know which science can be trusted?
– Looks like replication is an important aspect of science, for us to recognise quality
– Negative results should also be shared and should earn acknowledgement of contribution: some disciplines have a particular bias towards reporting positive results. I think he said: “the analysis planned is different to the analysis published about half of the time” amongst the 60 or so research teams who responded to the author by sending requested protocols. And those who responded must presumably be amongst the most conscientious of researchers: the implication is that those who didn’t respond may depart from their planned analysis even more often.
– Published articles should have published protocols associated with them, and a number of top journals have now agreed that, as a condition of publication, articles about randomised controlled trials must concern trials that were registered in advance.
– Journals might have policies (I think: is this a sign of quality for authors choosing where to publish?), but are they always being adhered to? Not necessarily!
– When the results of small studies are published, the sensible thing is to wait for a larger study to confirm the findings.
– Transparency of data is important too. It sounded like he was summarising a study in which some top researchers tried to redo the analysis in 18 papers from a top journal, and could only replicate the results properly in two articles. The problems with the others ranged from unavailability of the data, through the use of home-made and unavailable software, to uninterpretable descriptions of the methods.
– There are five levels for making research more open and more replicable (and thus more validatable?):
- Registration of data
- Registration of protocols
- Registration of analysis plan
- Registration of analysis plan and raw data together
- Open live streaming
My reflection on it all was that my very act of knitting is a metaphor for, or even an example of, all of these themes, as my knitting is a form of replication. The knitting pattern was available for free download on the knitters’ community site Ravelry, which is like open access publication, although you can buy individual patterns there too, and there’s frustrating, out-of-print stuff from books and magazines as well! Also, on Ravelry you can see pictures and notes from others’ projects that use the designs. This is partly replication, but also open, post-publication peer review, as the project notes sometimes point out errors in the instructions. Sometimes designers then admit to errors and release new versions. It’s also apparent that some designers have already engaged test knitters to try to avoid such a post-publication revision (pre-publication peer review). Some test knitters might be paid, some are doing a favour for a friend, and some seem to do it for the wool!
I had difficulty interpreting my pattern in one or two places (perhaps because I was watching a fascinating video at the same time!), and had to fall back on my experience/expertise/creativity.* But in the end I was able to produce a very nice little top, and is that not a form of replication that indicates the quality of the original designer’s work?
* I was using a lace yarn for a top that was designed for worsted yarn, and my gauge with 5mm needles was close but not perfect, so I was destined for a few modifications. I think this is somewhat akin to data adjustment! And if it were a really negative result, I could list it on Ravelry as an “Ugh”, so maybe I should suggest that Nature and Science start publishing “Ugh”s, asap!
Here’s a picture of what I knitted: