How journals could “add value”

Some great ideas about what academic authors & readers can ask publishers for, especially regarding the handling & presentation of data.

Working from home works for me!

I already blogged about the things I like about working from home… so here is the flip side: four things that are not so great, and how I overcome them.

  1. It can be lonely: telephone and videoconferencing help to overcome this, but really, loneliness isn’t something I struggle with. All the e-mail interactions help, too.
  2. I lack a change of scenery in my day. But when I get a change, it really helps: it’s amazing how much a little lunch time walk can lift my spirits and inspire me.
  3. It’s annoying sometimes when the weather is great and I don’t even get a commute in it. I can always just step out onto the balcony for a breath of fresh air, though. That is better than working in most offices!
  4. I have to cook and wash up for myself at tea/lunch time… as well as all the breakfast and dinner things…

That's seven advantages against only four disadvantages: proof enough that it works for me.

Quality measurement: we need landscape-reading skills. 5 tips!

The academic publishing landscape is a shifting one. I like to watch the ALPSP awards to see what's happening in academic publishing across the disciplines, and indeed to keep an eye on the e-learning sector. Features of the landscape are shifting under our feet in the digital age, so how can we find our way through it? I think that we need to be able to read the landscape itself. You can skip to the bottom of this post for my top tips, or read on for more explanation & links!

One of the frequent criticisms levelled at open access journals has been that they are not all about high-quality work. Indeed, with an incentive to haul in as many author payments as possible, a publisher might be tempted to lower the quality threshold and publish more articles. An article in the Guardian by Curt Rice, from two years ago, explains some of this picture, and more.

However, quality control is important to all journals, whether OA or not: in order to attract the best work, they have to publish it alongside articles of similar quality. Journal and publisher brands matter. As new titles, often from new publishers, OA journals once needed to establish their quality brands, but this is no longer the case for all OA journals. Andrew Bonamici wrote a nice blogpost on identifying the top OA journals in 2012.

And of course, OA journals, being new and innovative, have had the opportunity to experiment with peer review mechanisms. Peer review is the gold standard of quality filters for academic journals, as I explored in earlier blogposts. So, messing with it is bound to lead to accusations of lowering quality! But not all OA journals vary from the gold standard: many use peer review just as traditional journals do.

In reality, peer review happens in different ways at different journals. It might be open, blind or double blind. It might be carried out by two or three reviewers, and an editor might or might not have the final decision. The editor might or might not mediate the comments sent back to the author, to assist in the article's polishing. The peer reviewers might get guidelines on what is expected of them, or not. Practice in peer review varies from one discipline to the next, and from one publisher to the next, if not from one journal to the next. And as the Guardian article I mentioned earlier points out, dummy or spoof articles have been known to make it through peer review processes. So peer review itself is not always a guarantee of quality. Rather, it is a sign to watch out for in our landscape.

For some academic authors there are quality lists for their discipline, but how good are the lists? A recent article in the Times Higher by Dennis Tourish criticises the ABS guide to journal quality, which has often been used in business and management studies. Australia’s ERA once used journal rankings, but dropped them, as this article by Jill Rowbotham described.

Fortunately, academics know how to think for themselves. They know how to question what they find. They don't always accept what they're told! So, we librarians can tell them where to find such lists. We can show them how to look up a journal's h-index or its impact factor, and we can explain what a cited half-life is (I like Anne-Wil Harzing's website for information on this). But, as with the traditional reference interview, the real skill for authors is in knowing what they need.
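
(As an aside, since those metrics come up so often: here is a minimal, illustrative sketch in Python of the arithmetic behind the h-index and the classic two-year impact factor. The function names and all the citation numbers are my own invention, purely for illustration, not any database's official method.)

```python
# Illustrative only: the rough arithmetic behind two common metrics.

def h_index(citations):
    """Largest h such that at least h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_to_prev_two_years, items_in_prev_two_years):
    """Two-year impact factor: this year's citations to the previous two
    years' items, divided by the number of citable items in those years."""
    return citations_to_prev_two_years / items_in_prev_two_years

# Hypothetical numbers, purely for illustration:
print(h_index([25, 8, 5, 3, 3, 1, 0]))  # 3: three papers have >= 3 citations
print(impact_factor(120, 80))           # 1.5
```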

There will always be a compromise: a slightly lower-ranked journal with a faster turnaround; a slower journal with better peer review mechanisms to help you polish your work; the fast, innovative young journal that will market your work heavily. Not to mention the match between the journal's scope and the subject of the article! There are many factors for the author to consider.

So how do we read the landscape? Here are my tips:

  1. We can take a look at the old guides, of course: the lists are not completely redundant, but we need to question whether what we see matches what they describe.
  2. We can question whether a score or measure is for a characteristic that we value.
  3. We can talk to people who have been there before, i.e. experienced, published authors.
  4. We can tentatively scout ahead, and try a few things out with our most experimental work.
  5. We can scan the horizon, and watch what pioneers are doing: what works well there? As well as the sources I mention in my opening paragraph, I like to read the Scholarly Kitchen for horizon scanning.

Ultimately, we need to be alert, to draw on all our knowledge and experience, and to be open and aware of our publishing needs. The best way to do this is to be a reader and consumer of published outputs in your discipline, and a member of the academic community. That way, you will know what your goal looks like, and you'll recognise it when you see it, out there in the shifting sands of academia.