Choosing where to publish: not only journals

[Image: a wrench applied to a nail, and a hammer applied to a screw]
Find the right tool for the job!

There are many factors that scholars will want to take into account when choosing where to publish. I’ve blogged a simple list in the past of 12 questions to ask when assessing the quality of a journal, but I want to provide a lot more detail (including a look at the impact factor, which I deliberately left out of those questions – coming soon!). So I’m building a little series here, starting with a look at some alternatives to the journal article. Just because you have something to say or share with the scholarly community doesn’t always mean that you have a journal article on your hands.

Co-authors: who chooses?

I once saw a tweet from an established academic who said that since he’d got tenure, the un-tenured co-authors on his papers got final say in where their articles were published. (Sorry, I didn’t record the tweet – bad librarian!) That sounds rather chivalrous: early career researchers have an urgent need to build up their publication lists in a strategic way, while senior academics might have different agendas.

I also know anecdotally that for many researchers, the opposite is true, and the senior authors choose. If there is even a little bit of influence that an early career researcher (ECR) can exert, then no doubt that ECR will want to make such influence count. So let’s start looking at the factors that could be of interest.

Is a journal article even the right output?

Maybe you’re not sure whether you’ve got a scholarly journal article in the pipeline. Or maybe you’ve already got a journal article out and just have a little more to add to what you said back then: these channels are not always mutually exclusive, so it’s not always a case of “either/or”, but you may need to be careful about copyright. With the right author agreements between you and your publisher, you could use many channels for the same piece of research, depending on which audiences you want to reach. My list is not comprehensive, but it’s designed to give you ideas for other valuable communication channels.

Ten other places to consider

  1. Conference papers – this is a fairly traditional route to sharing research with other scholars, and some conference proceedings are just like journals. There are disciplinary differences: some disciplines take already published research to conferences, while others take unpublished ideas to conferences and use the conference rather like a first round of peer review, polishing the work afterwards for journal publication. There are many types of conference and they need evaluating. I’ve blogged about choosing conferences before.
  2. Poster/infographic – posters might be presented at a conference, could incorporate or indeed be an infographic, and could be more widely shared online, for example embedded into a blogpost or on Instagram.
  3. Books – there are many options here, from the academic monograph to popular non-fiction and indeed fiction itself, which could be based on real science. Not forgetting the vital textbook for your field: the key is to think about who your audience is, and the appropriate type of book will become apparent. There are many pitfalls on the monograph route, but you can read about seven mistakes from Laura Portwood-Stacer, who has been there & done it. And I found a very comprehensive look at self-publishing for academics.
  4. Book chapters – maybe you’ve only got one chapter but you could draw on contributions from others, and so could pull together an edited book. This isn’t easy but I found some sensible advice on managing authors. Or perhaps you could keep your eye out for a “call for contributions” from other editors. Pat Thomson outlined the different work that a book chapter does, compared to a journal article.
  5. Guest blogposts – as a guest on someone else’s blog, your content might get a polish from the host, and you benefit from all the work they do to bring an audience to your work. You might need to convince successful blogs why they should use your post, though, so I found a great blogpost on what makes a good guest blogger.
  6. Your own blog – this could be all your own work, or a group blog if you have a natural team to contribute to it. Emma Cragg has good advice on starting a blog. And I’ve also written about closing a blog, in case it’s a short-term undertaking for you!
  7. Data deposit – sometimes you have to do this anyway alongside your journal article, but it could be that your data can be deposited without the article. Here there are enormous disciplinary differences, but it’s worth noting that data can be cited.
  8. Practitioner journals – this is a great way to share your research findings among a community where it can have real world impact. Look out for professional associations linked to your field: they may have suitable publications.
  9. Slidedeck / teaching materials – if you’re at an institution where research-led teaching is expected, then maybe research findings can be incorporated into teaching materials – and perhaps shared in a learning objects repository or slidedeck sharing site.
  10. Wikipedia entry – you could become one of the many participants of the digital commons, and share your expertise through Wikipedia.

Having explored these alternatives, maybe you’re sure that you really do have a journal article. Or maybe you would prefer to use one of these channels, but your research funder or institute is only interested in journal articles. So my next post will start to look at aspects of journals that you can evaluate.

Image credit: CC0, via Pixabay


How do researchers share articles? Some useful links

This is a topic that interests me: how do researchers choose what to read? Where are the readers on our platforms coming from, when we can’t track a source URL? What are researchers doing in collaboration spaces? (Research processes are changing fast in the Internet era.) Is the journal article sharing that is taking place legal and/or ethical? I’m a big fan of Carol Tenopir’s work investigating readers’ behaviours and I think there’s much to learn in this area. Sharing an article does not equate to it having been read, but it is a very interesting part of the puzzle of understanding scholarly communication.


Usage is something that altmetrics display (the altmetric.com donut has a section for “Readers”, which incorporates information from Mendeley), and it’s just possible that usage could become a score to rival the impact factor when evaluating journals. It does often seem to me that we’re on a quest for a mythical holy grail when evaluating journals and criticising the impact factor!

Anyway, what can we know about article sharing? In my last blogpost I highlighted BrightTALK as a way to keep up to date with library themes. The LibraryConnect channel features many useful webinars & presentations (yes, I spoke at one of them), and I recently listened to a webinar on the theme of this blogpost’s title, which went live in December 2015. My notes & related links:

Suzie Allard of the University of Tennessee (colleague of Carol Tenopir) spoke about the “Beyond Downloads” project and their survey’s main takeaways. These include that nearly 74% of authors preferred email as a method of sharing articles. Authors may share articles to aid scientific discovery in general, to promote their own work, or indeed for other reasons, nicely illustrated in an infographic on this theme!

Lorraine Estelle of Project COUNTER spoke about the need for comprehensive and reliable data, and described just how difficult such data is to gather. (I can see that tracking everyone’s emails won’t go down well!) There are obviously disciplinary and demographic differences in the way that articles are shared, and therefore read, and she listed nine ways of sharing articles:

  1. email
  2. internal networks
  3. the cloud
  4. reference managers
  5. learning management systems
  6. research social networks
  7. general social networks
  8. blogs
  9. other

Lorraine also introduced some work that COUNTER is doing jointly with Crossref: DOI tracking and Distributed Usage Logging, both of which are definitely worth further reading and investigation!

Wouter Haak from Elsevier spoke about what you can see about readers of your articles on Mendeley’s dashboard, as an author. He also spoke about a prototype they are developing for libraries, on which institutions could see the countries where collaborations are taking place from within their own institution. More intriguingly (to me), he talked about a working group that he was part of, whereby major scientific publishers are apparently agreeing to support sharing of articles amongst researchers within collaboration groups, on platforms like Mendeley, Academia.edu and ResearchGate, which he describes as “Scholarly Collaboration Networks”. Through such a collaboration, the sharing activity across these platforms could all be tracked and reported on. Perhaps it is easier to lure researchers away from email than to track emails!

[Photo credit: Got Credit]

12 reasons scholars might cite: citation motivations

I’m sure I read something similar about this once, and then couldn’t find it again lately… so here is my quick list of reasons why researchers might cite. It includes “good” and “bad” motivations, and might be useful when considering bibliometric indicators. Feel free to comment on this post and suggest more possible motivations – or indeed any good sources!

  1. Set own work in context
  2. Pay homage to experts
  3. Give credit to peers
  4. Criticise/correct previous work (own or others)
  5. Signpost under-noticed work
  6. Provide further background reading
  7. Lend weight to own claims
  8. Self citations to boost own bibliometric scores and/or signpost own work
  9. Boost citations of others as part of an agreement
  10. Gain favour with journal editor or possible peer reviewers by citing their work
  11. Gain favour by citing other papers in the journal of choice for publication
  12. Demonstrate own wide reading/knowledge

Is this research article any good? Clues when crossing disciplines and asking new contacts.

As a reader, you know whether a journal article is good or not by any number of signs. Within your own field of expertise, you know quality research when you see it: you know, because you have done research yourself and you have read & learnt lots about others’ research. But what about when it’s not in your field of expertise?

Perhaps the most reliable marker of quality is if the article has been recommended to you by an expert in the field. But if you find something intriguing for yourself that is outside your usual discipline, how do you know if it’s any good? It’s a good idea to ask someone for advice, and if you already know someone then great; if not, there’s a lot you can do for yourself before you reach out for help, to ensure that you make a good impression on a new contact.

Librarians teach information skills and we might suggest that you look for such clues as:

  1. relevance: skim the article: is it something that meets your need? – WHAT
  2. the author(s): do you know the name: is it someone whose work you value? If not, what can you quickly find out about them, eg other publications in their name or who funds their work: is there a likely bias to watch out for? – WHO & WHY 
  3. the journal title/publisher: do you already know that they usually publish high quality work? Is it peer reviewed and if so, how rigorously? What about the editorial board: any known names here? Does the journal have an impact factor? Where is it indexed: is it in the place(s) that you perform searches yourself? – WHERE 
  4. date of publication: is it something timely to your need? – WHEN
  5. references/citations: follow some: are they accurate and appropriate? When you skim read the item, is work from others properly attributed & referenced? – WHAT
  6. quality of presentation: is it well written/illustrated? Of course, absolute rubbish can be eloquently presented, and quality research badly written up. But if the creators deemed the output of high enough value for a polished effort, then maybe that’s a clue. – HOW
  7. metrics: has it been cited by an expert? Or by many people? Are many reading & downloading it? Have many tweeted or written about it (altmetrics tools can tell you this)? But you don’t always follow the crowd, do you? If you do, then you might miss a real gem, and isn’t your research a unique contribution?! – WHO

I usually quote Rudyard Kipling at this point:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

So far, so library school 101. But how do you know if the research within is truly of high quality? If most published research findings are false, as John Ioannidis describes, then how do you separate the good research from the bad?

An understanding of the discipline would undoubtedly help, and speed up your evaluation. But you can help yourself further, partly in the way you read the paper. There are some great pieces out there about how to read a scientific paper, eg from Natalia Rodriguez.

As I read something for the first time, I look at whether the article sets itself in the context of existing literature and research: can you track and understand the connections? The second thing I look at is the methodology/methods: have the right ones been used? This may be especially hard to tell if you’re not an expert in the field, so you have to get familiar with the methodology used in the study, and think about how it applies to the problem being researched. Maybe coming from outside the discipline will give you a fresh perspective. You could also consider the other methodologies that might have applied (a part of peer review, for many journals). I like the recommendation from Phil Davis in the Scholarly Kitchen that the methodology chosen for the study should be appropriate or persuasive.

If the chosen methodology just doesn’t make sense to you, then this is a good time to seek out someone with expertise in the discipline, for a further explanation. By now you will have an intelligent question to ask such a contact, and you will be able to demonstrate the depth of your own interest. How do you find a new contact in another discipline? I’ll plug Piirus here, whose blog I manage: it is designed to quickly help researchers find collaborators, so you could seek contacts & reading recommendations through Piirus. And just maybe, one day your fresh perspective and their expertise could lead to a really fruitful collaboration!

After the Frankfurt book fair: full of inspiration!

Photo of me ready to speak
Is the “Data-Librarian” the Future of Library Science?

Earlier this month I was lucky enough to attend the enormous, international Frankfurt book fair, as I was a panellist for Elsevier’s Hot Spot discussion on the future of library science and the data-librarian. I highly recommend the opportunity & experience, as the Elsevier staff really looked after their speakers, and I got to meet not only my fellow panellists but also some of the audience, who came and introduced themselves at the “hot spot cafe” immediately after our discussion.

Photo of panellists & our moderator
Left to right: Noelle Gracy, Jenny Delasalle, Dr Schnelling, Prof. Dr. Petra Düren, Pascalia Boutsiouci

The session itself was filmed, and there was a professional photographer there (I have permission to use these official pictures), so I’m sure you’ll find out more about it over on Elsevier’s website: watch the LibraryConnect section! Our basic panel structure was that we were asked questions by Elsevier’s Noelle Gracy, which came from the community in advance.

What did we cover?

Well, I didn’t get to take notes as well as to talk(!) so I can tell you what I had prepared to say, and what I remember, one week after the event! Here are some nutshell points:

  • The future of library science encompasses more than just data librarianship, of course!
  • Librarians may find that adding skills with data to their CV opens up more job opportunities in the future.
  • Librarians offer a lot to the data community, not least their professional ethics & knowledge of legal expectations, which of course is covered in the module I teach to KCL/Humboldt University’s MA Digital Curation students.
Photo of me with microphone, discussing with fellow panellists
Getting to hear each other’s opinions

Librarians also have:

  • ability to describe items/create valuable metadata records
  • connections with all disciplines across campus (& library building is often central too)
  • experience of assessing quality and significance for collection management
  • skills in training & informing others

It’s certainly not all about technical skills: Dr Schnelling was very clear on that point, as I believe it was his question about what skills future librarians need. But of course there are some technical skills that will help if you are working with data, especially when considering preservation needs.

One easy way to begin familiarising yourself with data management issues is to look at data management plans, and what they involve.

If you were there, then maybe you can share some more highlights of the talk by leaving a comment, below. I will also blog here again about some of my other top sights from the fair: after the talk, I went around many of the stalls, looking for things specifically German. Of course, it was an international fair, so I found an awful lot more. I will end here with a final photograph of the audience for our panel session. If you were there, then thanks for coming!

photo of audience looking at the Hot Spot stage
Standing room only!

Story telling and new ideas to listen to, for information professionals

When I’m just warming up of a morning, I like to listen to BBC Radio 4 podcasts. I’ve been picking my way through the series called Four Thought, where speakers share stories and ideas. There are three episodes in particular that I’d like to highlight for information professionals:

Maria Popova: The Architecture of Knowledge – a fascinating look at the way we handle information and create wisdom, incorporating views on knowledge from history but considering the modern, digital era of information overload. A great story!

Rupert Goodwins – tracks human behaviour on the Internet and considers: How can the Internet bring us together to discuss and share with each other in a respectful, reasoned way? How can we avoid arguments and incivility? The speaker has lots of experience and ideas.

This last talk is of interest because of the course I’ve been teaching at the Humboldt Uni IBI, on information ethics. In the course, we explore all sorts of issues, including policies for websites that the students, as information professionals of the future, might play a part in hosting, and the ethical matters behind them, such as authenticity vs anonymity, moderating comments, handling whistleblowers, etc.

Another Four Thought that I found a little bit uncomfortable to listen to was:

Cindy Gallop: Embracing Zero Privacy – recommends taking control of your digital presence, and I agree with that. The speaker has some good ideas, chiefly that “we are what we do”, in a very positive and empowering way, but what I find difficult is the notion that we can all live in such an open way. What about people who live in a society that is unaccepting of who they are? What about mistakes from the past, for which a debt has been paid: should they be laid forever bare? What about keeping a personal life personal, even whilst sharing matters of professional interest? On balance, I’m not a fan of zero privacy, but this talk is a great opener for discussion.

There are plenty of other talks that provide food for thought in the Radio 4 podcast archives, on all sorts of topics and not only in the Four Thought series. I also like the Reith Lectures, the “Life Scientific”, and “In Our Time”… so much more to listen to!

Quality measurement: we need landscape-reading skills. 5 tips!


The academic publishing landscape is a shifting one. I like to watch the ALPSP awards, to see what’s happening in academic publishing, across the disciplines, and indeed to keep an eye on the e-learning sector. Features of the landscape are shifting under our feet in the digital age, so how can we find our way through them? I think that we need to be able to read the landscape itself. You can skip to the bottom of this post for my top tips, or read further for more explanation & links!

One of the frequent criticisms levelled at open access journals has been that they are not all about high quality work. Indeed, with an incentive to haul in as many author payments as possible, a publisher might be tempted to lower the quality threshold and publish more articles. An article in the Guardian by Curt Rice, from two years ago, explains some of this picture, and more.

However, quality control is something important to all journals, whether OA or not: in order to attract the best work, they have to publish it alongside similar quality articles. Journal and publisher brands matter. As new titles, often with new publishers, OA journals once needed to establish their quality brands: this is no longer the case for all OA journals. Andrew Bonamici wrote a nice blogpost on identifying the top OA journals in 2012.

And of course, OA journals, being new and innovative, have had the opportunity to experiment with peer review mechanisms. Peer review is the gold standard of quality filters for academic journals, as I explored in earlier blogposts. So messing with this is bound to lead to accusations of lowering the quality! But not all OA journals vary from the gold standard: many use peer review, just as traditional journals do.

In reality, peer review happens in different ways at different journals. It might be open, blind or double blind. It might be carried out by two or three reviewers, and an editor might or might not have the final decision. The editor might or might not mediate the comments sent back to the author, in order to assist in the article’s polishing. The peer reviewers might get guidelines on what is expected of them, or not. There is a variety of practice in peer review from one discipline to the next, and one publisher to the next, if not from one journal to the next. And as the Guardian article I mentioned earlier points out, dummy or spoof articles have been known to make it through peer review processes. So peer review itself is not always a guarantee of quality. Rather, it is a sign to watch out for, in our landscape.

For some academic authors there are quality lists for their discipline, but how good are the lists? A recent article in the Times Higher by Dennis Tourish criticises the ABS guide to journal quality, which has often been used in business and management studies. Australia’s ERA once used journal rankings, but dropped them, as this article by Jill Rowbotham described.

Fortunately, academics know how to think for themselves. They know how to question what they find. They don’t always accept what they’re told! So, we librarians can tell them where to find such lists. We can show them how to look up a journal’s h-index or its impact factor, and we can explain what a cited half-life is (I like Anne-Wil Harzing’s website for information on this). But, as with the traditional reference interview, the real skill for the author is in knowing what you need.
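As an aside, the h-index at least is simple enough to compute yourself. Here is a minimal sketch of the standard definition (a set of papers has h-index h when h of them have at least h citations each) – an illustration, not any particular database’s implementation:

```python
def h_index(citations):
    """Largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` papers all have >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations each
```

Note that databases differ in which citations they count, so Web of Science, Scopus and Google Scholar will often give three different h-indexes for the same author: the inputs matter more than the arithmetic.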

There will always be a compromise: a slightly lower ranked journal that has a faster turnaround. A slower journal that has better peer review mechanisms for helping you to polish your work. The fast, innovative young journal that will market your work heavily. Not to mention the match of the subject of the article! There are many factors for the author to consider.

So how do we read the landscape? Here are my tips:

  1. We can take a look at the old guides, of course: the lists are not completely redundant but we need to question whether what we see matches what they describe.
  2. We can question whether a score or measure is for a characteristic that we value.
  3. We can talk to people who have been there before, i.e. experienced, published authors.
  4. We can tentatively scout ahead, and try a few things out with our most experimental work.
  5. We can scan the horizon, and watch what pioneers are doing: what works well there? As well as the sources I mention in my opening paragraph, I like to read the Scholarly Kitchen for horizon scanning.

Ultimately, we need to be alert, to draw on all our knowledge and experience, and to be open and aware of our publishing needs. The best way to do this is to be a reader and consumer of published outputs in your discipline, and a member of the academic community. That way, you will know what your goal looks like, and you’ll recognise it when you see it, out there in the shifting sands of academia.

How do you assess the quality of recommendations?

I wrote here last year about the marvellous Fishscale of academicness, as a great way to teach students information literacy skills by starting with how to evaluate what they’ve found. I’m currently teaching information ethics to Masters students at Humboldt Uni, and this week’s theme is “Trust”: it touches on all sorts of interesting topics in this area, including recommendation systems, also known as recommendation engines.

An example of such a recommendation system in action would be the customer star ratings for products on Amazon, which are averaged out and may be used, amongst other information, as a way to suggest further purchases to customers. Or reviews for hotels/cafes on TripAdvisor, film suggestions on Netflix, etc. Recommendations are everywhere these days: Facebook recommends apps you might like and will suggest “people you may know”; LinkedIn and Twitter work in similar ways.

For me, these recommendations raise certain questions, which also turn up in debates about privacy and about altmetrics, such as:

How much information do you have to give them about yourself, do you trust them with it, and how good are their recommendations anyway? Are you happy to be influenced by what others have done/said online?

Recommendation systems use “relevance” algorithms, which are similar to those used when you perform a search. They might combine a number of factors, including:

  • Items you’ve already interacted with (i.e. suggesting similar items, called an item-to-item approach)
  • User-to-user: it finds people who are similar to you, eg they have displayed similar choices to you already, and suggests things based on their choices
  • Popularity of items (eg Facebook recommends apps to you depending on how much use they’ve had). Note that this may have to be balanced against novelty: new items will necessarily not have achieved high popularity.
  • Ratings from other users/customers (here, they might weight certain users’ scores more heavily, or average star ratings, or just preference items with a review)
  • Information that they already have about you, against a profile of what such a person might like (eg information gleaned from tracking you online through your browser or on your user profile on their site, or that you have given them in some way)

The sophistication of the algorithm used and the size of the data pool drawn on (or lack thereof) might also depend on the need for speed of the system.
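To make the item-to-item approach above concrete, here is a minimal sketch. The tiny ratings matrix and the plain cosine similarity are my own illustrative assumptions; a production engine would work with far larger, sparser data and add normalisation and weighting:

```python
# Item-to-item recommendation sketch: rank items by the cosine
# similarity of their columns in a small user-by-item ratings matrix.
from math import sqrt

# rows = users, columns = items A, B, C (0 = not rated)
ratings = {
    "alice": {"A": 5, "B": 4, "C": 0},
    "bob":   {"A": 4, "B": 5, "C": 1},
    "carol": {"A": 0, "B": 1, "C": 5},
}
items = ["A", "B", "C"]

def column(item):
    """All users' ratings for one item, as a vector."""
    return [ratings[user][item] for user in ratings]

def cosine(x, y):
    dot = sum(a * b for a, b in zip(x, y))
    norm = sqrt(sum(a * a for a in x)) * sqrt(sum(b * b for b in y))
    return dot / norm if norm else 0.0

def similar_items(item):
    """Rank the other items by similarity to this one."""
    scores = [(other, cosine(column(item), column(other)))
              for other in items if other != item]
    return sorted(scores, key=lambda s: s[1], reverse=True)

print(similar_items("A"))
```

Here the raters who liked item A also rated B highly, so B comes out as the closest match to A, while C (liked by a different kind of user) ranks lower.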

Naturally, those working on recommendation engines have given quite a bit of consideration to how they might evaluate the recommendations given, as this paper from Microsoft discusses, in a relatively accessible way. It introduces many relevant concepts, such as the notion that recommending things that it knows you’ve already seen will increase your trust in the recommendations, although it is very difficult to measure trust in a test situation.

We see that human evaluation of these recommendation systems is important, as “click-through rate” (CTR) is so easily manipulated and inadequate as a measure of the usefulness of recommendations, as described and illustrated in this blog post by Edwin Chen.

Which recommendations do you value, and why? I also came across a review of movie recommendation sites from 2009, which explains why certain sites were preferred: plenty of food for thought. From my reading and experience, I’d start my list of the kind of things that I’d like from recommendation systems with:

  • It doesn’t take information about me without asking me first (lots of sites now have to tell you about cookies, as the Cookie collective explain)
  • It uses a minimal amount of information that I’ve given it (and doesn’t link with other sites/services I’ve used, to either pull in or push out data about me, unless I tell it that it can!)
  • Suggestions are relevant to my original interest, but with the odd curveball thrown in, to support a more serendipitous discovery and to help me break out of the “filter bubble”
  • Suggestions feature a review that was written by a person (in a language that I speak), so more than just a star rating
  • Suggestions are linked in a way that allows me to surf and explore further, eg filtering for items that match one particular characteristic that I like from the recommendation
  • I don’t want the suggestions to be too creepily accurate: I like to think I’ve made a discovery for myself, and I doubt the trustworthiness of a company that knows too much about me!

I’m sure there’s more, but I’m equally sure that we all want something slightly different from recommendation systems! My correspondence with Alke Groeppel-Wegener suggests that her students are very keen on relevance and not so interested in serendipity. For me, if that relevance comes at the expense of my privacy, so that I have to give the system lots of information about myself, then I definitely don’t want it. What about you?

12 Questions to ask, for basic clues on the quality of a journal

When choosing where to publish a journal article, what signs do you look out for? Here are some questions to ask or aspects to investigate, for clues.

1 – Is it peer reviewed? (Y/N, and every nuance in between.) See the journal’s website.
2 – Who is involved in it? Who are the editor and publisher, and are they well known and well thought of? Who has published articles there already: are these big players in your field? Read the journal!
3 – Is it abstracted/indexed by one of the big sources in your field? (The journal’s website should tell you this. Big publishers also offer their own databases of house journals.)
4 – What happens when you search on Google for an article from the journal? Do you get the article in the top few results? And on Google Scholar?
5 – Does it appear in the Web of Science or Scopus journal rankings?
6 – Take a look on COPAC: which big research libraries subscribe?
7 – Have a look at the UK’s published RAE2008 (and forthcoming REF2014) data, and see whether articles from that journal were part of the evidence submitted, and whether they were rated 4*.
8 – Do the journal’s articles have DOIs? A DOI is a really useful feature for promoting your article, and it means that altmetric tools can provide you with evidence of engagement with it.
9 – Is there an open access option? (See SHERPA/RoMEO.) This is a requirement of many research funders, but it is also useful for you when you want to promote your article.
10 – Is it on the list of predatory OA journals? You might want to avoid those, although do check for yourself: some journals on the list are disputed, and have been defended against the accusation of predation.
11 – Is it listed on the ISSN centre’s ROAD (http://road.issn.org/)? What does this tell you about it?
12 – If you have access through a library subscription, is it listed in Ulrich’s Periodicals Directory? What does this tell you about it? Note the “peer review” symbol of a striped referee’s shirt: if the shirt is not there, it doesn’t necessarily mean that the journal is not peer reviewed; you may have to investigate further.
Further nuances…
– What type of peer review is used? Is it rigorous? Is it useful to you, even if you get rejected?
– Time to rejection/acceptance: how soon do you need to be published?
– Acceptance/rejection rate.
– Journal Impact Factor / SJR score(s) / quartile for the field.
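Questions 11 and 12 both hinge on the journal’s ISSN, and it’s easy to mistype one: the final character of an ISSN is actually a check digit, so you can sanity-check an ISSN before searching ROAD or Ulrich’s. Here is a minimal sketch in Python (the function name is my own invention, not part of any of the services above):

```python
def issn_is_valid(issn: str) -> bool:
    """Check an ISSN like '2049-3630' against its check digit.

    The first seven digits are weighted 8 down to 2; the check digit
    is 11 - (weighted sum mod 11), with 10 written as 'X' and 11 as 0.
    """
    chars = issn.replace("-", "").upper()
    if len(chars) != 8:
        return False
    try:
        total = sum(int(d) * w for d, w in zip(chars[:7], range(8, 1, -1)))
    except ValueError:  # a non-digit in the first seven positions
        return False
    check = (11 - total % 11) % 11
    expected = "X" if check == 10 else str(check)
    return chars[7] == expected

# 2049-3630 is the ISSN of the ISSN standard itself
print(issn_is_valid("2049-3630"))  # True
print(issn_is_valid("0317-8472"))  # False: check digit should be 1
```

An invalid ISSN on a journal’s own website is itself a small quality clue, worth investigating further.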

Curating online content and recording information sources: tools I’ve used

I’ve mentioned in an earlier blog post that the tool I value most for this at the moment is Evernote. But there are some other tools I’ve had a good look at:

Scoop.it is also a pretty good curation tool, and if you use it often to discover content and link it up with Twitter (there’s bound to be an IFTTT recipe), you can look more original on Twitter while creating something more visually attractive and useful for yourself than you could with Twitter alone. The problem I’ve discovered is that your Scoop.it stories look out of date pretty quickly if it’s not a primary tool for you, and I can’t vouch for it being the best place to discover content: a better way to use it might be to investigate the bookmarklet tool.

Another such tool that I’m aware of is Paper.li, largely because of one particular user who picks up on my tweets, reports on them there, and tweets at me by way of alert/acknowledgement, which is a pretty nice, social way to curate/collate content and report on it.

I used Storify for collecting tweets relating to the Finch report on open access, and I still refer to the collection from time to time. I think Storify is particularly good at collecting tweets about a particular theme, but you can also use it to collect websites and material from other sources. Apparently, Storify also has a bookmarklet tool which I would use if I intended to invest more in Storify.

I also created a collection (or two) of academic papers in EndNote when I was at Warwick, and I exported the bibliographic data and then imported it into Mendeley for future reference. The reason I don’t use either Mendeley or EndNote so much these days is really that I’m not using so much scholarly content. If I were, I’d also want to investigate Zotero as an alternative: it’s a long time since I investigated it, but it has a good reputation amongst researchers I’ve met. I note that EndNote’s desktop version seems to remain the best at re-formatting your bibliographic data into the various styles for journal publication.
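For what it’s worth, that export/import route works because EndNote, Mendeley and Zotero can all read and write the same tagged RIS format. A toy sketch in Python of what that data looks like and how it could be parsed (the sample record is invented, and real reference managers do rather more than this):

```python
import re

# A RIS line is a two-character tag, "  - ", then the value;
# "ER" marks the end of one record.
RIS_LINE = re.compile(r"^([A-Z][A-Z0-9])  - ?(.*)$")

def parse_ris(text: str) -> list:
    """Parse a minimal RIS export into a list of {tag: [values]} dicts."""
    records, current = [], {}
    for line in text.splitlines():
        m = RIS_LINE.match(line.rstrip())
        if not m:
            continue  # skip blank or malformed lines
        tag, value = m.group(1), m.group(2)
        if tag == "ER":
            records.append(current)  # close off the current record
            current = {}
        else:
            current.setdefault(tag, []).append(value)
    return records

# An invented sample record: a journal article with one author
sample = """\
TY  - JOUR
AU  - Doe, Jane
TI  - An example article
JO  - Journal of Examples
PY  - 2013
ER  -
"""

print(parse_ris(sample))
```

Because the format is plain tagged text like this, a collection exported from one tool generally survives the trip into another, which is reassuring if you ever need to switch.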

I used to use Delicious for website bookmarks but when it lost some features that I valued, I migrated my bookmark collection over to Diigo. Both of these tools, like Evernote, have handy content-adding tools for my browser toolbar (bookmarklets). My Diigo collection is nicely tagged but not maintained so much these days, because I prefer the way Evernote copies the content of sites. I once spent some considerable time weeding out dead links from my bookmarks, so it seems to me better to have a copy of content for future reference, in case the original webpage is moved/removed: apparently, Pocket can also do this.

Overall, the convenience of Evernote prevails, for me. It’s apparently a “productivity” tool and not only for content curation, although that’s how I use it at present: I know it’s more powerful. (I’m sensing that “productivity” is a keyword for folks at companies who provide these tools, especially Mendeley in their recent webinar for Librarians.)

Brian Kelly’s blog post on Evernote from Jan 2014 compares it to Simplenote, explaining why he’s sticking with Evernote. And if you want to explore productivity tools further, you could do worse than looking at the LibGuide from the University of Minnesota on “Digital Academic Workflow tools”.