Snowy Stockholm and Nordic Librarians!

Picture from Twitter @Micha2508

Last week I attended Elsevier’s Nordic Library Connect event in Stockholm, Sweden. I presented the metrics poster / card and slide set that I had previously researched for Elsevier. It’s a great poster, but the entire set of metrics takes some digesting. Presenting them all as slides in around 30 minutes was not my best idea, even for an audience of librarians! The poster itself was popular though, as it is useful to keep on the wall somewhere to refer to, to refresh your knowledge of certain metrics:

https://libraryconnect.elsevier.com/sites/default/files/ELS_LC_metrics_poster_V2.0_researcher_2016.pdf

I reflected after my talk that I should probably have chosen a few of the metrics to present, and then added more information and context, such as screen captures of where to find these metrics in the wild. It was a very useful experience, not least because it gave me this idea, but also because I got to meet some lovely folks who work in libraries in the Scandinavian countries.

UPDATE 23 Nov 2016: now you can watch a video of my talk (or one of the others) online.

I met these guys… but also real people!

I particularly valued a presentation from fellow speaker Oliver Renn of ETH Zurich. He has obviously built up a fantastic relationship with the departments that his library serves. I thought that the menus of services he offered were inspired. These are explained in the magazine that he also produces for his departments: see p8 of this 2015 edition.

See tweets from the event by clicking on the hashtag in this tweet:

 


Reflections and a simple round-up of Peer Review Week 2016

It has been Peer Review Week this week: I’ve been watching the hashtag on Twitter with interest (and linked to it in a blogpost for piirus.ac.uk), and on Monday I attended a webinar called “Recognising Review – New and Future Approaches for Acknowledging the Peer Review Process”.

I do like webinars, as I’ve blogged before: professional development/horizon scanning from my very own desktop! This week’s one featured talks from Paperhive and Publons, amongst others, both of which have been explored on this blog in the past. I was particularly interested to hear that Publons are interested in recording not only peer review effort, but also editorial contributions. (Right at the end of the week this year, there have been suggestions that editorial work be the focus of next year’s peer review week so it seems to me that we’ve come full circle.) A question from the audience raised the prospect of a new researcher metric based on peer review tracking. I guess that’s an interesting space to watch!

I wondered where Peer Review Week came from: it seems to be a publisher initiative, if Twitter is anything to go by, since the hashtag is dominated by their contributions. On Twitter at least, it attracted some criticism of publishers: if you deliberately look at ways to recognise peer review, then some academics are going to ask whether it is right for publishers to profit so hugely from their free work. Some criticisms were painful to read and some were also highly amusing:

There were plenty of links to useful videos, webpages and infographics about how to carry out peer review, both for those new to it and for those already experienced, such as:

(On this topic, I thought that an infographic from Elsevier about reasons why reviewers refused to peer review was intriguing.)

Advice was also offered on how / how not to respond to peer reviews. My favourite:

And there were glimpses of what happens at the publisher or editor level:

There wasn’t much discussion of the issue of open vs blind or double-blind peer review, which I found interesting because, to me at least, recognition implies openness. And there was some interesting research reported in the THE earlier this month about eliminating gender bias through double-blind reviews, so openness in the context of peer review is an issue that I feel torn about. Discussion on Twitter seemed to focus mostly on incentives for peer review, and I suppose recognition facilitates that too.

Peer Review Week has also seen one of the juiciest stories in scholarly communication: fake peer reviews! We’ve been able to identify so much dodgy practice in the digital age, from fake papers and fake authors to fake email addresses (so that you can be your own peer reviewer) and citation rings. Some of this is, on one level, highly amusing: papers by Maggie Simpson, or a co-author who is, in fact, your cat. But on another level it is deeply concerning, and so it’s a space that will continue to fascinate me, because it definitely looks like a broken system: how do we stick it all together?

How do researchers share articles? Some useful links

This is a topic that interests me: how do researchers choose what to read? Where are the readers on our platforms coming from, when we can’t track a source URL? What are researchers doing in collaboration spaces? (Research processes are changing fast in the Internet era.) Is the journal article sharing that takes place legal and/or ethical? I’m a big fan of Carol Tenopir‘s work investigating readers’ behaviours, and I think there’s much to learn in this area. Sharing an article does not mean it has been read, but it is a very interesting part of the puzzle of understanding scholarly communication.


Usage is something that altmetrics display (the altmetric.com donut has a section for “Readers” which incorporates information from Mendeley), and it’s just possible that usage could become a score to rival the impact factor when evaluating journals. It does often seem to me like we’re on a quest for a mythical holy grail when evaluating journals and criticising the impact factor!

Anyway, what can we know about article sharing? In my last blogpost I highlighted BrightTALK as a way to keep up to date with library themes. The LibraryConnect channel features many useful webinars & presentations (yes, I spoke at one of them), and I recently listened to a webinar on the theme of this blogpost’s title, which went live in December 2015. My notes & related links:

Suzie Allard of the University of Tennessee (colleague of Carol Tenopir) spoke about the “Beyond Downloads” project and their survey’s main takeaways. These include that nearly 74% of authors preferred email as a method of sharing articles. Authors may share articles to aid scientific discovery in general, to promote their own work, or indeed for other reasons, nicely illustrated in an infographic on this theme!

Lorraine Estelle of Project COUNTER spoke about the need for comprehensive and reliable data, and described just how difficult such data are to gather. (I can see that tracking everyone’s emails won’t go down well!) There are obviously disciplinary and demographic differences in the way that articles are shared, and therefore read, and she listed nine ways of sharing articles:

  1. email
  2. internal networks
  3. the cloud
  4. reference managers
  5. learning management systems
  6. research social networks
  7. general social networks
  8. blogs
  9. other

Lorraine also introduced some work that COUNTER are doing jointly with Crossref: DOI tracking and Distributed Usage Logging, which are definitely worth further reading and investigation!

Wouter Haak from Elsevier spoke about what you, as an author, can see about readers of your articles on Mendeley’s dashboard. He also spoke about a prototype they are developing for libraries, on which institutions could see the countries where their own researchers’ collaborations are taking place. More intriguingly (to me), he talked about a working group that he was part of, in which major scientific publishers are apparently agreeing to support sharing of articles amongst researchers within collaboration groups, on platforms like Mendeley, Academia.edu and ResearchGate, which he describes as “Scholarly Collaboration Networks”. Through such a collaboration, the sharing activity across these platforms could all be tracked and reported on. Perhaps it is easier to lure researchers away from email than to track emails!

 

[Photo credit: Got Credit]

Publish then publicise & monitor. Publication is not the end of the process!

Once your journal article or research output has been accepted and published, there are lots of things that you can do to spread the word about it. This blogpost has my own list of the top four ways you could do this (other than putting it on your CV, of course). I also recommend that any biologists or visual thinkers look at:
Lobet, Guillaume (2014): Science Valorisation. figshare. http://dx.doi.org/10.6084/m9.figshare.1057995
Lobet describes the process as “publish: identify yourself: communicate”, and points out useful tools along the way, including recommending that authors identify themselves in ORCID, ResearchGate, Academia.edu, ImpactStory and LinkedIn. (Such services can create a kind of online, public CV and my favourite for researchers is ORCID.) You may also find that your publisher offers advice on ways to publicise your paper further.

PUBLICISE

1) Talk about it! Share your findings formally at a conference. Mention it in conversations with your peers. Include it in your teaching.

2) Tweet about it! If you’re not on Twitter yourself (or even if you are!) then you could ask a colleague to tweet about it for you. A co-author or the journal editor or publisher might tweet about it, or you could approach a University press officer. If you tweet yourself then you could pin the tweet about your latest paper to your profile on Twitter.

3) Open it up! Add your paper to at least one Open Access repository, such as your institutional repository (they might also tweet about it). This way your paper will be available even to those who don’t subscribe to the journal. You can find an OA repository on ROAR or OpenDOAR. Each repository will have its own community of visitors and ways in which to help people discover your content, so you might choose more than one repository: perhaps one for your paper and one for data or other material associated with it. If you put an object into Figshare, for example, it will be assigned a DOI, and that will be really handy for getting altmetrics measures.

4) Be social! Twitter is one way to do this already, of course, but you could also blog about it, on your own blog or perhaps as a guest post for an existing blog with a large audience. You could put visual content like slides and infographics onto SlideShare, and send out an update via LinkedIn. Choose at least one more social media channel, of your choice, for each paper.

MONITOR

  1. Watch download stats for your paper, on your publisher’s website. Measuring the success of casual mentions is difficult, but you can often see a spike in download statistics for a paper, after it has been mentioned at a conference.
  2. Watch Twitter analytics: is your tweet about your paper one of your Top Tweets? You can see how many “engagements” a tweet has, i.e. how many clicks, favourites, retweets, replies, etc. it accrued. If you use a link shortening service, you should also be able to see how many clicks there have been on your link, and where they came from. (bit.ly is one of many such shortening services.) This is the measure that I value most: if no-one is clicking to look at your content, then perhaps Twitter is not working for you, and you could investigate why not, or focus on more efficient channels.
  3. Repositories will often offer you stats about downloads, just like your publisher, and either or both may offer you access to an altmetrics tool. Take a look at these to see more information behind the numbers: who is interested and engaged with your work and how can you use this knowledge? Perhaps it will help you to choose which of the other possible social media channels you might use, as this is where there are others in your discipline who are already engaged with your work.
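
If your repository or publisher doesn’t surface altmetrics for you, you can also look up attention data yourself by DOI. Below is a minimal sketch (in Python) of querying Altmetric’s free public API; the api.altmetric.com/v1/doi endpoint is real but rate-limited, and the exact response field names used here are assumptions, so do check their current documentation before relying on this.

```python
import json
import urllib.error
import urllib.request


def altmetric_summary(doi):
    """Fetch the public Altmetric record for a DOI.

    The endpoint is Altmetric's free, rate-limited public API; the field
    names picked out below are assumptions and may differ in practice.
    """
    url = f"https://api.altmetric.com/v1/doi/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            record = json.loads(response.read().decode("utf-8"))
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return None  # no attention recorded for this DOI (or DOI unknown)
        raise
    return {
        "title": record.get("title"),
        "score": record.get("score"),
        "tweets": record.get("cited_by_tweeters_count", 0),
        "news": record.get("cited_by_msm_count", 0),
        "readers": record.get("readers_count", 0),
    }


if __name__ == "__main__":
    # Example call: substitute a DOI for one of your own papers.
    print(altmetric_summary("10.6084/m9.figshare.1057995"))
```

Run weekly (or after a conference mention), this sort of lookup gives you the same spike-spotting view as the download statistics, but across social and news sources.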

 

Ultimately, you might be interested in citations rather than engagements on Twitter or even webpage visits or downloads for your paper. It’s hard to draw a definite connection between such online activity and citations for journal papers, but I’m pretty sure that no-one is going to cite your paper if they don’t even know it exists, so if this is important to you, then I would say, shout loud!

Ensuring quality and annotating scientific publications. A summary of a Twitter chat

[Screenshot of a Twitter conversation] Tweet tweet!

Last year (yes, I’m slow to blog!), I had a very productive conversation (or couple of conversations) on Twitter with Andrew Marsh, a former colleague and scientist at the University of Warwick, which is worth documenting here to give it a narrative, and to illustrate how Twitter sometimes works.

Back in November 2015, Andrew tweeted to ask who would sign reviews of manuscripts, when reporting on a presentation by the Chief Editor of Nature Chemistry, Stuart Cantrill. I replied on Twitter by asking whether such openness would make reviewers take more time over their reviews (thereby slowing peer review down). I wondered whether openness would make reviewers less direct, and therefore possibly less helpful because more open to interpretation. I also wondered whether such open criticism would drive authors to engage in even more “pre-submission”, informal peer reviewing.

Andrew tells me that, at the original event “a show of hands and brief discussion in the room revealed that PIs or those who peer reviewed manuscripts regularly, declared themselves happy to reveal their identity whereas PhD students or less experienced researchers felt either unsure or uncomfortable in doing so.”

Our next chat was kick-started when Andrew pointed me to a news article from Nature that highlighted a new tool for annotating web pages, Hypothes.is. In our Twitter chat that ensued we considered:

  1. Are such annotations a kind of post-publication peer review? I think that they can work alongside traditional peer review, but as Andrew pointed out, they lack structure so they’re certainly no substitute.
  2. Attribution of such comments is important so that readers would know whose comments they are reading, and also possibly enable tracking of such activity, so that the work could be measured. Integration with ORCID would be a good way to attribute comments. (This is already planned, it seems: Dan Whaley picked up on our chat here!)
  3. Andrew wondered whether tracking of such comments could be done for altmetrics. Altmetric.com responded. Comments on Hypothes.is could signal scholarly attention for the work which they comment on, or indeed attract attention themselves (a small sketch of counting such annotations via the public API follows this list). It takes a certain body of work before measuring comments from such a source becomes valuable, but does measuring itself incentivise researchers to comment? I’m really interested in the latter point: motivation cropped up in an earlier blogpost of mine on peer review. I suspect that researchers will say that measurement does not affect them, but I’m also sure that some of those are well aware of, eg, their ResearchGate score!
  4. Such a tool offers a function similar to marginalia and scrawls in library books. Some are helpful shortcuts (left by altruists, or just those who wanted to help their future selves?!), some are rubbish (amusing at their best), and sometimes you recognise the handwriting of an individual who makes useful comments, hence the importance of attribution.
  5. There are also some similarities with social bookmarking and other collaboration tools online, where you can also publish reviews or leave comments on documents and publications.
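
As promised in point 3: if you wanted to count (and attribute) such annotations yourself, Hypothes.is exposes a public search API. Here is a minimal sketch, assuming the api.hypothes.is/api/search endpoint and its uri/rows/total response shape; check the current API documentation before building anything on top of this.

```python
import json
import urllib.parse
import urllib.request


def count_annotations(document_url):
    """Count public Hypothes.is annotations on a given web page.

    The endpoint and response fields ("total", "rows", "user") are
    assumptions based on the public search API.
    """
    query = urllib.parse.urlencode({"uri": document_url, "limit": 20})
    api_url = f"https://api.hypothes.is/api/search?{query}"
    with urllib.request.urlopen(api_url, timeout=10) as response:
        data = json.loads(response.read().decode("utf-8"))
    total = data.get("total", 0)
    # Each row carries the annotator's username, which is useful for
    # attribution -- though linking that identity to an ORCID iD is a
    # separate step.
    annotators = {row.get("user") for row in data.get("rows", [])}
    return total, annotators


if __name__ == "__main__":
    # Any article URL will do; this one is purely illustrative.
    print(count_annotations("https://www.nature.com/articles/nature.2015.17063"))
```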

And who thought that you couldn’t have meaningful conversations on Twitter?! You can also read responses on Twitter to eLife‘s tweet about its piece on the need for open peer review.

The best part of this conversation between Andrew and me on Twitter was the ability to bring in others, by incorporating their Twitter handles. We also picked up on what others were saying, like this tweet about journal citation distributions from Stephen Curry. The worst parts were trying to be succinct when making a point (while wanting to develop some points), feeling a need to collate the many points raised, and sometimes forgetting to flag people.

Just as well you can also blog about these things, then!

 

12 reasons scholars might cite: citation motivations

I’m sure I once read something about this, but I couldn’t find it again recently… so here is my quick list of reasons why researchers might cite. It includes “good” and “bad” motivations, and might be useful when considering bibliometric indicators. Feel free to comment on this post and suggest more possible motivations, or indeed any good sources!

  1. Set own work in context
  2. Pay homage to experts
  3. Give credit to peers
  4. Criticise/correct previous work (own or others)
  5. Signpost under-noticed work
  6. Provide further background reading
  7. Lend weight to own claims
  8. Self-cite to boost own bibliometric scores and/or signpost own work
  9. Boost citations of others as part of an agreement
  10. Gain favour with journal editor or possible peer reviewers by citing their work
  11. Gain favour by citing other papers in the journal of choice for publication
  12. Demonstrate own wide reading/knowledge

Is this research article any good? Clues when crossing disciplines and asking new contacts.

As a reader, you know whether a journal article is good or not by any number of signs. Within your own field of expertise, you know quality research when you see it: you know, because you have done research yourself and you have read & learnt lots about others’ research. But what about when it’s not in your field of expertise?

Perhaps the most reliable marker of quality is if the article has been recommended to you by an expert in the field. But if you find something intriguing for yourself that is outside of your usual discipline, how do you know if it’s any good? It’s a good idea to ask someone for advice, and if you already know someone, then great; but if not, there’s a lot you can do for yourself before you reach out for help, to ensure that you make a good impression on a new contact.

Librarians teach information skills and we might suggest that you look for such clues as:

  1. relevance: skim the article: is it something that meets your need? – WHAT
  2. the author(s): do you know the name: is it someone whose work you value? If not, what can you quickly find out about them, eg other publications in their name or who funds their work: is there a likely bias to watch out for? – WHO & WHY 
  3. the journal title/publisher: do you already know that they usually publish high quality work? Is it peer reviewed and if so, how rigorously? What about the editorial board: any known names here? Does the journal have an impact factor? Where is it indexed: is it in the place(s) that you perform searches yourself? – WHERE 
  4. date of publication: is it something timely to your need? – WHEN
  5. references/citations: follow some: are they accurate and appropriate? When you skim read the item, is work from others properly attributed & referenced? – WHAT
  6. quality of presentation: is it well written/illustrated? Of course, absolute rubbish can be eloquently presented, and quality research badly written up. But if the creators deemed the output of high enough value for a polished effort, then maybe that’s a clue. – HOW
  7. metrics: has it been cited by an expert? Or by many people? Are many reading & downloading it? Have many tweeted or written about it (altmetrics tools can tell you this)? But you don’t always follow the crowd, do you? If you do, then you might miss a real gem, and isn’t your research a unique contribution?! – WHO

I usually quote Rudyard Kipling at this point:

I keep six honest serving-men
(They taught me all I knew);
Their names are What and Why and When
And How and Where and Who.

So far, so Library school 101. But how do you know if the research within is truly of high quality? If most published research findings are false, as John Ioannidis describes, then how do you separate the good research from the bad?

An understanding of the discipline would undoubtedly help, and speed up your evaluation. But you can help yourself further, partly in the way you read the paper. There are some great pieces out there about how to read a scientific paper, eg from Natalia Rodriguez.

As I read something for the first time, I look at whether the article sets itself in the context of existing literature and research: can you track and understand the connections? The second thing I would look at is the methodology/methods: have the right ones been used? This may be especially hard to tell if you’re not an expert in the field, so you have to get familiar with the methodology used in the study and think about how it applies to the problem being researched. Maybe coming from outside of the discipline will give you a fresh perspective. You could also consider the other methodologies that might have applied (a part of peer review, for many journals). I like the recommendation from Phil Davis in the Scholarly Kitchen that the methodology chosen for the study should be appropriate or persuasive.

If the chosen methodology just doesn’t make sense to you, then this is a good time to seek out someone with expertise in the discipline, for a further explanation. By now you will have an intelligent question to ask such a contact, and you will be able to demonstrate the depth of your own interest. How do you find a new contact in another discipline? I’ll plug Piirus here, whose blog I manage: it is designed to quickly help researchers find collaborators, so you could seek contacts & reading recommendations through Piirus. And just maybe, one day your fresh perspective and their expertise could lead to a really fruitful collaboration!

How do you assess the quality of recommendations?

I wrote here last year about the marvellous Fishscale of academicness, as a great way to teach students information literacy skills by starting with how to evaluate what they’ve found. I’m currently teaching information ethics to Masters students at Humboldt Uni, and this week’s theme is “Trust”: it touches on all sorts of interesting topics in this area, including recommendation systems, also known as recommendation engines.

An example of such a recommendation system in action would be the customer star ratings for products on Amazon, which are averaged out and may be used, amongst other information, as a way to suggest further purchases to customers. Or reviews for hotels/cafes on Tripadvisor, film suggestions on Netflix, etc. Recommendations are everywhere these days: Facebook recommends apps you might like and will suggest “people you may know”; LinkedIn and Twitter work in similar ways.

For me, these recommendations raise certain questions, which also turn up in debates about privacy and about altmetrics, such as:

How much information do you have to give them about yourself, do you trust them with it, and how good are their recommendations anyway? Are you happy to be influenced by what others have done/said online?

Recommendation systems use “relevance” algorithms, which are similar to those used when you perform a search. They might combine a number of factors, including:

  • Items you’ve already interacted with (i.e. suggesting similar items, called an item-to-item approach)
  • User-to-user: it finds people who are similar to you, eg they have displayed similar choices to you already, and suggests things based on their choices
  • Popularity of items (eg Facebook recommends apps to you depending on how much use they’ve had). Note that this may have to be balanced against novelty: new items will necessarily not have achieved high popularity.
  • Ratings from other users/customers (here, they might weight certain users’ scores more heavily, average star ratings, or simply prefer items that have a review)
  • Information that they already have about you, against a profile of what such a person might like (eg information gleaned from tracking you online through your browser or on your user profile on their site, or that you have given them in some way)

The sophistication of the algorithm used and the size of the data pool drawn on (or lack thereof) might also depend on the need for speed of the system.
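
To make the item-to-item idea above concrete, here is a toy sketch of how such a suggestion might be computed: cosine similarity between items, based on which users interacted with them. Real systems are far more sophisticated (weighting ratings, popularity, novelty and profile data as listed above), so treat this purely as an illustration; all the names and numbers in it are made up.

```python
from math import sqrt


def cosine_similarity(users_a, users_b):
    """Similarity between two items, based on the sets of users who chose them."""
    if not users_a or not users_b:
        return 0.0
    overlap = len(users_a & users_b)
    return overlap / (sqrt(len(users_a)) * sqrt(len(users_b)))


def recommend(target_item, interactions, top_n=3):
    """Suggest the items most similar to one the user already interacted with.

    `interactions` maps item -> set of user ids who chose it (toy data model).
    """
    scores = {
        item: cosine_similarity(interactions[target_item], users)
        for item, users in interactions.items()
        if item != target_item
    }
    return sorted(scores, key=scores.get, reverse=True)[:top_n]


if __name__ == "__main__":
    # Purely illustrative interaction data.
    interactions = {
        "article_A": {"u1", "u2", "u3"},
        "article_B": {"u2", "u3", "u4"},
        "article_C": {"u4", "u5"},
        "article_D": {"u1", "u3"},
    }
    print(recommend("article_A", interactions))
```

A user-to-user approach works the same way with the roles swapped: find the users whose choice sets most resemble yours, then surface what they chose and you haven’t yet.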

Naturally, those working on recommendation engines have given quite a bit of consideration to how they might evaluate the recommendations given, as this paper from Microsoft discusses in a relatively accessible way. It introduces many relevant concepts, such as the notion that recommending things that the system knows you’ve already seen will increase your trust in the recommendations, although it is very difficult to measure trust in a test situation.

We see that human evaluation of these recommendation systems is important, as “click-through rate” (CTR, simply clicks divided by impressions: 30 clicks on 1,000 displayed recommendations gives a CTR of 3%) is so easily manipulated and is inadequate as a measure of the usefulness of recommendations, as described and illustrated in this blog post by Edwin Chen.

Which recommendations do you value, and why? I also came across a review of movie recommendation sites from 2009, which explains why certain sites were preferred, which gives plenty of food for thought. From my reading and experience, I’d start my list of the kind of things that I’d like from recommendation systems with:

  • It doesn’t take information about me without asking me first (lots of sites now have to tell you about cookies, as the Cookie collective explain)
  • It uses a minimal amount of information that I’ve given it (and doesn’t link with other sites/services I’ve used, to either pull in or push out data about me, unless I tell it that it can!)
  • Suggestions are relevant to my original interest, but with the odd curveball thrown in, to support a more serendipitous discovery and to help me break out of the “filter bubble”
  • Suggestions feature a review that was written by a person (in a language that I speak), so more than just a star rating
  • Suggestions are linked in a way that allows me to surf and explore further, eg filtering for items that match one particular characteristic that I like from the recommendation
  • I don’t want the suggestions to be too creepily accurate: I like to think I’ve made a discovery for myself, and I doubt the trustworthiness of a company that knows too much about me!

I’m sure there’s more, but I’m equally sure that we all want something slightly different from recommendation systems! My correspondence with Alke Groeppel-Wegener suggests that her students are very keen on relevance and not so interested in serendipity. For me, if that relevance comes at the expense of my privacy, so that I have to give the system lots of information about myself, then I definitely don’t want it. What about you?

Alerts are really helpful and (alt)metrics are interesting but academic communities are key to building new knowledge.

Some time ago, I set Google Scholar to alert me if anyone cited one of the papers I’ve authored. I recommend that academic authors do this on Scopus and Web of Science too. I forgot all about it until yesterday, when an alert duly popped into my e-mail.

It is gratifying to see that someone has cited you (& perhaps an occasional reminder to update the h-index on your CV), but more importantly, it alerts you to papers in your area of interest. This is the paper I was alerted to:

José Luis Ortega (2015) Relationship between altmetric and bibliometric indicators across academic social sites: The case of CSIC’s members. Journal of Informetrics, Volume 9, Issue 1, Pages 39-49 doi: 10.1016/j.joi.2014.11.004

I don’t have a subscription to ScienceDirect so I couldn’t read it in full there, but it does tell me the research highlights:

  • Usage and social indicators depend on their own social sites.
  • Bibliometric indices are independent and therefore more stable across services.
  • Correlations between social and usage metrics regarding bibliometric ones are poor.
  • Altmetrics could not be a proxy for research evaluation, but for social impact of science.

and of course, the abstract:

This study explores the connections between social and usage metrics (altmetrics) and bibliometric indicators at the author level. It studies to what extent these indicators, gained from academic sites, can provide a proxy for research impact. Close to 10,000 author profiles belonging to the Spanish National Research Council were extracted from the principal scholarly social sites: ResearchGate, Academia.edu and Mendeley and academic search engines: Microsoft Academic Search and Google Scholar Citations. Results describe little overlapping between sites because most of the researchers only manage one profile (72%). Correlations point out that there is scant relationship between altmetric and bibliometric indicators at author level. This is due to the almetric ones are site-dependent, while the bibliometric ones are more stable across web sites. It is concluded that altmetrics could reflect an alternative dimension of the research performance, close, perhaps, to science popularization and networking abilities, but far from citation impact.

I found a fuller version of the paper on Academia.edu and it is indeed an interesting read. I’ve read other papers that look specifically at altmetric and bibliometric scores for one particular journal’s articles, or articles from within one discipline. I like the larger scale of this study, and the conclusions make sense to me.
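
Out of curiosity, this is the kind of analysis such studies run: rank correlations between altmetric and bibliometric indicators collected for the same authors. A minimal sketch using SciPy’s spearmanr is below; the function and input names are mine, and the numbers in the example are purely illustrative placeholders, not data from Ortega (2015).

```python
from scipy.stats import spearmanr


def indicator_correlation(altmetric_scores, bibliometric_scores):
    """Spearman rank correlation between two indicator lists for the same
    authors (e.g. Mendeley readership vs. citation counts).

    Placeholder inputs only -- this does not reproduce any published result.
    """
    rho, p_value = spearmanr(altmetric_scores, bibliometric_scores)
    return rho, p_value


if __name__ == "__main__":
    # Purely illustrative numbers for five hypothetical authors.
    readership = [12, 45, 3, 60, 27]   # e.g. Mendeley readers per author
    citations = [5, 30, 1, 8, 22]      # e.g. citation counts per author
    print(indicator_correlation(readership, citations))
```

A weak correlation from this kind of test is what underpins the paper’s conclusion that altmetric indicators are site-dependent and measure something other than citation impact.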

And my paper that it cites? A co-authored one that Brian Kelly presented at the Open Repositories 2012 conference.

Kelly, B., & Delasalle, J. (2012). Can LinkedIn and Academia.edu Enhance Access to Open Repositories? In: OR2012: the 7th International Conference on Open Repositories, Edinburgh, Scotland.

It is also a paper that is on Academia.edu. I wonder if that’s partly why it was discovered and cited? The alt- and biblio-metrics for that paper are not likely to be high (I think of it as embryonic work, for others to build on), but participation in an online community is still a way to spread the word about what you’ve investigated and found, just like attending a conference.

Hence the title of this blog post. I find the alert useful to keep my knowledge up to date, and the citation gives me a sense of being part of the academic community, which is why I find metrics so interesting. What they tell the authors themselves is of value, beyond any performance measurement or quality signalling aspects.

Attention metrics for academic articles: are they any use?

Why do bibliometrics and altmetrics matter? They are sometimes considered to be measures of attention (see a great post on the Scholarly Kitchen about this), and they attract plenty of attention themselves in the academic world, especially amongst scholarly publishers and academic libraries.

Bibliometrics are mostly about tracking and measuring citations between journal articles or scholarly publications, so they are essentially all about attention from the academic community. There are things that an author can do in order to attract more attention and citations: not just “gaming the system” (see a paper on arXiv about such possibilities), but reaching as many people as possible, in a way that speaks to them as being relevant to their research and thus worthy of a citation.

Citation, research, writing and publishing practices are evolving: journal articles seem to cite more papers these days (well, according to a Nature news item, that’s the way to get cited more: it’s a cycle), researchers are publishing more journal articles (Wikipedia has collated some stats), and they are engaging in more collaborative projects (see this Chronicle of Higher Ed article). If researchers want to stay in their “business”, then they will need to adapt to current practices, or to shape them. That’s not easy when it comes to metrics about scholarly outputs, because the ground is shifting beneath their feet. What are the spaces to watch?

How many outputs a researcher produces, and in which journal titles or venues, matters in the UK because of the RAE and REF exercises, and the way university research is funded there.

Bibliometrics matter to universities because of university rankings. Perhaps such rankings should not matter, but they do, and the IoE London blog has an excellent article on the topic. So researchers need to either court each other’s attention and citations, or else create authoritative rankings that don’t use bibliometrics.

Altmetrics represent new ways of measuring attention, but they are like shape-shifting clouds in comparison with bibliometrics. We’re yet to ascertain which measures of which kinds of attention, in which kinds of objects, can tell us what exactly. My own take on altmetrics is that context is the key to using them. Many people are working to understand altmetrics as measures and what they can tell us.

Attention is not a signifier of quality (as researchers well know: Carol Tenopir has done a lot of research on researchers’ reading choices and habits). Work can attract attention for good or bad reasons. Attention can come from many different sources, and can mean different things: by measuring attention exchanges, we can take account of trends within different disciplines and timeframes, and the effect of any “gaming” practices.

Attention from outside of the academic community has potential as “impact”. Of course, context is important again, and for research to achieve “impact” you’ll need to define exactly what kind of impact you intend to achieve. If you want to reach millions of people for two seconds, or to engage with just one person whose life will be hugely enriched or who will have influence over others’ lives, then what you do to achieve impact, and how you measure your success, will be different. But social media and the media can play a part in some definitions of impact, and so altmetrics can help to demonstrate success, since they track attention for your article from these channels.

Next week I’ll be sharing two simple, effective stories of Twitter use, and of reporting on that use.