12 reasons scholars might cite: citation motivations

I’m sure I read something similar about this once, and then couldn’t find it again when I looked recently… so here is my quick list of reasons why researchers might cite. It includes “good” and “bad” motivations, and might be useful when considering bibliometric indicators. Feel free to comment on this post and suggest more possible motivations. Or indeed any good sources!

  1. Set own work in context
  2. Pay homage to experts
  3. Give credit to peers
  4. Criticise/correct previous work (own or others)
  5. Signpost under-noticed work
  6. Provide further background reading
  7. Lend weight to own claims
  8. Self citations to boost own bibliometric scores and/or signpost own work
  9. Boost citations of others as part of an agreement
  10. Gain favour with journal editor or possible peer reviewers by citing their work
  11. Gain favour by citing other papers in the journal of choice for publication
  12. Demonstrate own wide reading/knowledge

Is this research article any good? Clues when crossing disciplines and asking new contacts.

As a reader, you know whether a journal article is good or not by any number of signs. Within your own field of expertise, you know quality research when you see it: you know, because you have done research yourself and you have read & learnt lots about others’ research. But what about when it’s not in your field of expertise?

Perhaps the most reliable marker of quality is whether the article has been recommended to you by an expert in the field. But if you find something intriguing for yourself that is outside your usual discipline, how do you know if it’s any good? It’s a good idea to ask someone for advice, and if you already know someone then great, but if not, there’s a lot you can do for yourself before you reach out for help, to make a good impression on a new contact.

Librarians teach information skills and we might suggest that you look for such clues as:

  1. relevance: skim the article: is it something that meets your need? – WHAT
  2. the author(s): do you know the name: is it someone whose work you value? If not, what can you quickly find out about them, eg other publications in their name or who funds their work: is there a likely bias to watch out for? – WHO & WHY 
  3. the journal title/publisher: do you already know that they usually publish high quality work? Is it peer reviewed and if so, how rigorously? What about the editorial board: any known names here? Does the journal have an impact factor? Where is it indexed: is it in the place(s) that you perform searches yourself? – WHERE 
  4. date of publication: is it something timely to your need? – WHEN
  5. references/citations: follow some: are they accurate and appropriate? When you skim read the item, is work from others properly attributed & referenced? – WHAT
  6. quality of presentation: is it well written/illustrated? Of course, absolute rubbish can be eloquently presented, and quality research badly written up. But if the creators deemed the output of high enough value for a polished effort, then maybe that’s a clue. – HOW
  7. metrics: has it been cited by an expert? Or by many people? Are many reading & downloading it? Have many tweeted or written about it (altmetrics tools can tell you this)? But you don’t always follow the crowd, do you? If you do, then you might miss a real gem, and isn’t your research a unique contribution?! – WHO

I usually quote Rudyard Kipling at this point:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

So far, so Library school 101. But how do you know if the research within is truly of high quality? If most published research findings are false, as John Ioannidis describes, then how do you separate the good from the bad research?

An understanding of the discipline would undoubtedly help, and speed up your evaluation. But you can help yourself further, partly in the way you read the paper. There are some great pieces out there about how to read a scientific paper, eg from Natalia Rodriguez.

As I read something for the first time, I look at whether the article sets itself in the context of existing literature and research: Can you track and understand the connections? The second thing I would look at is the methodology/methods: have the right ones been used? Now this may be especially hard to tell if you’re not an expert in the field, so you have to get familiar with the methodology used in the study, and to think about how it applies to the problem being researched. Maybe coming from outside of the discipline will give you a fresh perspective. You could also consider the other methodologies that might have applied (a part of peer review, for many journals). I like the recommendation from Phil Davis in the Scholarly Kitchen that the methodology chosen for the study should be appropriate or persuasive.

If the chosen methodology just doesn’t make sense to you, then this is a good time to seek out someone with expertise in the discipline, for a further explanation. By now you will have an intelligent question to ask such a contact, and you will be able to demonstrate the depth of your own interest. How do you find a new contact in another discipline? I’ll plug Piirus here, whose blog I manage: it is designed to quickly help researchers find collaborators, so you could seek contacts & reading recommendations through Piirus. And just maybe, one day your fresh perspective and their expertise could lead to a really fruitful collaboration!

After the Frankfurt book fair: full of inspiration!

Photo of me ready to speak
Is the “Data-Librarian” the Future of Library Science?

Earlier this month I was lucky enough to attend the enormous, international Frankfurt book fair, as I was a panellist for Elsevier’s Hot Spot discussion on the future of library science and the data-librarian.  I highly recommend the opportunity & experience, as the Elsevier staff really looked after their speakers and I got to meet not only my fellow panellists but also some of the audience who came and introduced themselves at the “hot spot cafe” immediately after our discussion.



Photo of panellists & our moderator
Left to right: Noelle Gracy, Jenny Delasalle, Dr Schnelling, Prof. Dr. Petra Düren, Pascalia Boutsiouci

The session itself was filmed, and there was a professional photographer there (I have permission to use these official pictures), so I’m sure you’ll find out more about it over on Elsevier’s website: watch the LibraryConnect section! Our basic panel structure was that we were asked questions by Elsevier’s Noelle Gracy, which came from the community in advance.

What did we cover?

Well, I didn’t get to take notes as well as to talk(!) so I can tell you what I had prepared to say, and what I remember, one week after the event! Here are some nutshell points:

  • The future of library science encompasses more than just data librarianship, of course!
  • Librarians may find that adding skills with data to their CV opens up more job opportunities in the future.
  • Librarians offer a lot to the data community, not least their professional ethics & knowledge of legal expectations, which of course is covered in the module I teach to KCL/Humboldt University’s MA Digital Curation students.
Photo of me with microphone, discussing with fellow panellists
Getting to hear each other’s opinions

Librarians also have:

  • ability to describe items/create valuable metadata records
  • connections with all disciplines across campus (& library building is often central too)
  • experience of assessing quality and significance for collection management
  • skills in training & informing others
It’s certainly not all about technical skills: Dr Schnelling was very clear about that point, as I believe it was his question about what skills future librarians need. But of course there are some technical skills that will help if you are working with data, especially when considering preservation needs. One easy way to begin familiarising yourself with data management issues is to look at data management plans, and what they involve.

If you were there, then maybe you can share some more highlights of the talk by leaving a comment, below. I will also blog here again about some of my other top sights from the fair: after the talk, I went around many of the stalls, looking for things specifically German. Of course, it was an international fair, so I found an awful lot more. I will end here with a final photograph of the audience for our panel session. If you were there, then thanks for coming!

photo of audience looking at the Hot Spot stage
Standing room only!

Story telling and new ideas to listen to, for information professionals

When I’m just warming up of a morning, I like to listen to BBC Radio 4 podcasts. I’ve been picking my way through the series called Four Thought, where speakers share stories and ideas. There are three episodes in particular that I’d like to highlight for information professionals:

Maria Popova: The Architecture of Knowledge – a fascinating look at the way we handle information and create wisdom, incorporating views on knowledge from history but considering the modern, digital era of information overload. A great story!

Rupert Goodwins – tracks human behaviour on the Internet and considers: How can the Internet bring us together to discuss and share with each other in a respectful, reasoned way? How can we avoid arguments and incivility? The speaker has lots of experience and ideas.

This last talk is of interest because of the course I’ve been teaching at the Humboldt Uni IBI, on Information ethics. In the course, we explore all sorts of issues, including policies for websites that the students as information professionals of the future might play a part in hosting, and the ethical matters behind them, such as authenticity vs anonymity, moderating comments, handling whistleblowers, etc.

Another Four Thought that I found a little bit uncomfortable to listen to was:

Cindy Gallop: Embracing Zero Privacy – recommends taking control of your digital presence, and I agree with that. The speaker has some good ideas, chiefly that “we are what we do” in a very positive and empowering way, but what I find difficult is the notion that we can all live in such an open way. What about people who live in a society that is unaccepting of who they are? What about mistakes from the past, for which a debt has been paid: should they be laid forever bare? What about keeping a personal life personal, even whilst sharing matters of professional interest? On balance, I’m not a fan of zero privacy but this talk is a great opener for discussion.

There are plenty of other talks that provide food for thought in the Radio 4 podcast archives, on all sorts of topics and not only in the Four Thought series. I also like the Reith Lectures, the “Life Scientific”, and “In Our Time”… so much more to listen to!

Quality measurement: we need landscape-reading skills. 5 tips!


The academic publishing landscape is a shifting one. I like to watch the ALPSP awards, to see what’s happening in academic publishing, across the disciplines, and indeed to keep an eye on the e-learning sector. Features of the landscape are shifting under our feet in the digital age, so how can we find our way through them? I think that we need to be able to read the landscape itself. You can skip to the bottom of this post for my top tips, or read further for more explanation & links!

One of the frequent criticisms levelled at open access journals has been that they were not all about high quality work. Indeed, with an incentive to haul in as many author payments as possible, a publisher might be tempted to lower the quality threshold and publish more articles. An article in the Guardian by Curt Rice, from two years ago, explains some of this picture, and more.

However, quality control is something important to all journals, whether OA or not: in order to attract the best work, they have to publish it alongside similar quality articles. Journal and publisher brands matter. As new titles, often with new publishers, OA journals once needed to establish their quality brands: this is no longer the case for all OA journals. Andrew Bonamici wrote a nice blogpost on identifying the top OA journals in 2012.

And of course, OA journals, being new and innovative, have had the opportunity to experiment with peer review mechanisms. Peer review is the gold standard of quality filters for academic journals, as I explored in earlier blogposts.  So, messing with this is bound to lead to accusations of lowering the quality! But not all OA journals vary from the gold standard: many use peer review, just as traditional journals do.

In reality, peer review happens in different ways at different journals. It might be open, blind or double blind. It might be carried out by two or three reviewers, and an editor might or might not have a final decision. The editor might or might not mediate the comments as sent back to the author, in order to assist in the article’s polishing. The peer reviewers might get guidelines on what is expected of them, or not. There is a variety of practice in peer review, from one discipline to the next, and one publisher to the next, if not from one journal to the next. And as the Guardian article I mentioned earlier points out, dummy or spoof articles have been known to make it through peer review processes. So peer review itself is not always a guarantee of quality. Rather, it is a sign to watch out for in our landscape.

For some academic authors there are quality lists for their discipline, but how good are the lists? A recent article in the Times Higher by Dennis Tourish criticises the ABS guide to journal quality, which has often been used in business and management studies. Australia’s ERA once used journal rankings, but dropped them, as this article by Jill Rowbotham described.

Fortunately, academics know how to think for themselves. They know how to question what they find. They don’t always accept what they’re told! So, we librarians can tell them where to find such lists. We can show them how to look up a journal h-index or its impact factor, and we can explain what a cited half life is (I like Anne-Wil Harzing’s website for information on this). But, as with the traditional reference interview, the real skill is in knowing what you need.
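Of those measures, the h-index at least has a definition simple enough to sketch in a few lines of code: a set of papers has index h if h of them have at least h citations each. Here is a minimal illustration (the citation counts are invented, not real journal data):

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # at least `rank` papers have >= `rank` citations
        else:
            break
    return h

# Invented example: five papers with these citation counts.
# Four of them have at least 4 citations each, so h = 4.
print(h_index([10, 8, 5, 4, 3]))
```

In practice, of course, you would look the figure up in Web of Science, Scopus or Harzing’s Publish or Perish rather than compute it yourself, but seeing the definition laid bare makes it easier to explain what the number does (and does not) capture.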

There will always be a compromise: a slightly lower ranked journal that has a faster turnaround. A slower journal that has better peer review mechanisms for helping you to polish your work. The fast, innovative young journal that will market your work heavily. Not to mention matching the subject of your article to the journal’s scope! There are many factors for the author to consider.

So how do we read the landscape? Here are my tips:

  1. We can take a look at the old guides, of course: the lists are not completely redundant but we need to question whether what we see matches what they describe.
  2. We can question whether a score or measure is for a characteristic that we value.
  3. We can talk to people who have been there before, i.e. experienced, published authors.
  4. We can tentatively scout ahead, and try a few things out with our most experimental work.
  5. We can scan the horizon, and watch what pioneers are doing: what works well there? As well as the sources I mention in my opening paragraph, I like to read the Scholarly Kitchen for horizon scanning.

Ultimately, we need to be alert and to draw on all our knowledge and experience; we need to be open and aware of our publishing needs. The best way to do this is to be a reader and consumer of published outputs in your discipline, and a member of the academic community. That way, you will know what your goal looks like, and you’ll recognise it when you see it, out there in the shifting sands of academia.

How do you assess the quality of recommendations?

I wrote here last year about the marvellous Fishscale of academicness, as a great way to teach students information literacy skills by starting with how to evaluate what they’ve found. I’m currently teaching information ethics to Masters students at Humboldt Uni, and this week’s theme is “Trust”: it touches on all sorts of interesting topics in this area, including recommendation systems, also known as recommendation engines.

An example of such a recommendation system in action would be the customer star ratings for products on Amazon, which are averaged out and may be used as a way to suggest further purchases to customers, amongst other information. Or reviews for hotels/cafes on Tripadvisor, film suggestions on Netflix, etc. Recommendations are everywhere these days: Facebook recommends apps you might like, and will suggest “people you may know”; LinkedIn and Twitter work in similar ways.

For me, these recommendations raise certain questions, which also turn up in debates about privacy and about altmetrics, such as:

How much information do you have to give them about yourself, do you trust them with it, and how good are their recommendations anyway? Are you happy to be influenced by what others have done/said online?

Recommendation systems use “relevance” algorithms, which are similar to those used when you perform a search. They might combine a number of factors, including:

  • Items you’ve already interacted with (i.e. suggesting similar items, called an item-to-item approach)
  • User-to-user: it finds people who are similar to you, eg they have displayed similar choices to you already, and suggests things based on their choices
  • Popularity of items (eg Facebook recommends apps to you depending on how much use they’ve had). Note that this may have to be balanced against novelty: new items will necessarily not have achieved high popularity.
  • Ratings from other users/customers (here, they might weight certain users’ scores more heavily, or average star ratings, or simply prefer items that have a review)
  • Information that they already have about you, against a profile of what such a person might like (eg information gleaned from tracking you online through your browser or on your user profile on their site, or that you have given them in some way)

The sophistication of the algorithm used and the size of the data pool drawn on (or lack thereof) might also depend on the need for speed of the system.
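To make the user-to-user approach in the list above concrete, here is a minimal sketch with an invented ratings matrix (all the user and item names are made up): it finds the other user most similar to you, by cosine similarity over the items you have both rated, and suggests the items they rated highly that you haven’t seen. Real engines combine several of the factors listed above and work at vastly larger scale, but the core idea is this small:

```python
import math

# Invented ratings: user -> {item: star rating out of 5}
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 4, "book_b": 3, "book_d": 5},
    "carol": {"book_b": 1, "book_c": 2, "book_d": 4},
}

def cosine(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = math.sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = math.sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user):
    """Suggest unseen items from the most similar other user,
    highest-rated first."""
    others = [(cosine(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, nearest = max(others)
    seen = set(ratings[user])
    picks = {i: r for i, r in ratings[nearest].items() if i not in seen}
    return sorted(picks, key=picks.get, reverse=True)

print(recommend("alice"))  # → ['book_d']: bob is most like alice
```

An item-to-item approach would instead compare the columns of this matrix (items rated by similar sets of users), which scales better when there are far more users than items.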

Naturally, those working on recommendation engines have given quite a bit of consideration to how they might evaluate the recommendations given, as this paper from Microsoft discusses, in a relatively accessible way. It introduces many relevant concepts, such as the notion that recommending things that it knows you’ve already seen will increase your trust in the recommendations, although it is very difficult to measure trust in a test situation.

We see that human evaluation of these recommendation systems is important as “click through rate (CTR)” is so easily manipulated and inadequate as a measure of the usefulness of recommendations, as described and illustrated in this blog post by Edwin Chen.

Which recommendations do you value, and why? I also came across a review of movie recommendation sites from 2009, which explains why certain sites were preferred and gives plenty of food for thought. From my reading and experience, I’d start my list of the kind of things that I’d like from recommendation systems with:

  • It doesn’t take information about me without asking me first (lots of sites now have to tell you about cookies, as the Cookie collective explain)
  • It uses a minimal amount of information that I’ve given it (and doesn’t link with other sites/services I’ve used, to either pull in or push out data about me, unless I tell it that it can!)
  • Suggestions are relevant to my original interest, but with the odd curveball thrown in, to support a more serendipitous discovery and to help me break out of the “filter bubble”
  • Suggestions feature a review that was written by a person (in a language that I speak), so more than just a star rating
  • Suggestions are linked in a way that allows me to surf and explore further, eg filtering for items that match one particular characteristic that I like from the recommendation
  • I don’t want the suggestions to be too creepily accurate: I like to think I’ve made a discovery for myself, and I doubt the trustworthiness of a company that knows too much about me!

I’m sure there’s more, but I’m equally sure that we all want something slightly different from recommendation systems! My correspondence with Alke Groeppel-Wegener suggests that her students are very keen on relevance and not so interested in serendipity. For me, if that relevance comes at the expense of my privacy, so that I have to give the system lots of information about myself, then I definitely don’t want it. What about you?

12 Questions to ask, for basic clues on the quality of a journal

When choosing where to publish a journal article, what signs do you look out for? Here are some questions to ask or aspects to investigate, for clues.

  1. Is it peer reviewed? (Y/N and every nuance in between) See the journal’s website.
  2. Who is involved in it? The editor & publisher? Are they well known & well thought of? Who has published articles there already: are these big players in your field? Read the journal!
  3. Is it abstracted/indexed by one of the big sources in your field? (The journal’s website should tell you this. Big publishers also offer their own databases of house journals.)
  4. What happens when you search on Google for an article from the journal? Do you get the article in the top few results? And on GScholar?
  5. Does it appear in Web of Science or Scopus journal rankings?
  6. Take a look on COPAC: which big research libraries subscribe?
  7. Have a look at the UK’s published RAE2008 / forthcoming REF2014 data and see if articles from that journal were a part of the evidence submitted, and rated as 4*.
  8. Do the journal articles have DOIs? This is a really useful feature for promotion of your article, and it will mean that altmetric tools can provide you with evidence of engagement with your article.
  9. Is there an open access option? (See SherpaRomeo.) This is a requirement of many research funders, but it is also useful for you when you want to promote your article.
  10. Is it on the list of predatory OA journals? You might want to avoid those, although check for yourself. Note that some journals on the list are disputed/defended against the accusation of predation!
  11. Is it listed on the ISSN centre’s ROAD: http://road.issn.org/ ? What does this tell you about it?
  12. If you have access through a library subscription, is it listed on Ulrich’s periodicals directory? What does this tell you about it? Note the “peer review” symbol of a striped referee’s shirt: if the shirt is not there, it doesn’t necessarily mean that the journal is not peer reviewed: you may have to investigate further.

And some further aspects to weigh up:

  • What type of peer review is used? Is it rigorous? Is it useful to you, even if you get rejected?
  • Time to rejection/acceptance: how soon do you need to be published?
  • Acceptance/rejection rate
  • Journal Impact Factor / SJR score(s) / quartile for the field