Use Twitter well, as a researcher, and report on your success

Here are two examples I’ve come across:

1. Tweet directly

I heard a great story from a researcher who tweeted event information directly at 72 of her contacts, and then they re-tweeted her message to a potential audience of around 50,000.

Note that she used Twitter to tweet directly at people who could help her to promote her work. The “potential audience” of 50,000 were all the followers of the people who tweeted or re-tweeted about her work. I really like this story, as a way to impress line managers with your effective use of social media. It’s simple, it’s got numbers (line managers like those!) and it demonstrates that you go beyond just tweeting into the void at whoever is following you. It’s using your network contacts properly!

2. Monitor your Twitter high-hitters & report on media attention

I also noticed that the JISC headlines which land in my inbox feature a section for “Our media coverage” and a section on “Our social media activity”. It’s a very nice example of an e-mail newsletter altogether, but the “Our social media activity” section attracted my attention, because of the way it presents tweets. They look something like this:


Retweeted by 94 people with a potential reach of 84.7k
‘Forget 24-hour drinking; students want 24-hour libraries’ (via @BBCNews)


(from JISC’s May 2014 headlines)

I like this because these measures are easy to find in Twitter analytics, so that any researcher can see his/her tweets with the widest reach. JISC presumably highlight these tweets in their newsletter because others may also find them interesting: they are using Twitter as a filter & highlighter for you.
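As a minimal sketch of where such a “potential reach” figure comes from: it is just the sum of the retweeters’ follower counts. The account names and follower numbers below are invented for illustration; Twitter analytics reports the real figures for you.

```python
# Illustrative sketch: "potential reach" as the sum of retweeters'
# follower counts. All names and numbers here are invented.
retweeter_followers = {
    "@account_a": 12_400,
    "@account_b": 3_150,
    "@account_c": 69_200,
}

potential_reach = sum(retweeter_followers.values())
print(f"Retweeted by {len(retweeter_followers)} people "
      f"with a potential reach of {potential_reach / 1000:.1f}k")
```

Note that a real figure calculated this way double-counts anyone who follows more than one of the retweeters, which is another reason to treat “potential reach” as an upper bound.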

By monitoring your own high-hitting tweets in this way you will soon learn what your audience is interested in re-tweeting. Have you got the right audience? (If not, start following people who you would like to have follow you!) Can you tailor your message to attract their attention? (If indeed, attention is one of your goals on Twitter.)

You could also look out for “faves” and replies on Twitter, but I note that JISC is not doing so in this context.

Such a record of high-hitting tweets & of media attention might be something of interest to other team members, to line managers & possibly even research project funders if it’s part of your impact strategy to reach a broad audience.

Of course, I follow a lot of twitterers and only see a fraction of what they tweet, so I know that the “potential reach” is just that, and the actual reach is likely to be considerably lower. Still, with a wider potential reach, you ought to have a wider actual reach, and those who have re-tweeted have considered your tweet to some degree, although it is very easy to re-tweet without investigating, so I still wouldn’t claim too much without more context.

How do you get more context?

In the story of the researcher with her direct message tweets, these were about an event. So she will have a lot more information about the success of that event, I imagine.

The number of hits on the link(s) in your tweets could also indicate a more participative Twitter audience, but if it’s your website you’re promoting, then I’d rather look at the number of visitors to that site in total, as a success story to report to line managers (bigger numbers!). You could check how many visitors came there via Twitter, to see if your efforts on Twitter are paying off; note that the number who clicked the links you tweeted yourself will be smaller than that figure, since people might also “MT” or “via” your tweet with their own shortened links.

A journal article that you’re promoting will have altmetrics: if your publisher doesn’t collate these, your institutional repository might, or you can use ORCID and ImpactStory to do it yourself.

You could possibly do some kind of calculation that for x tweets in the course of a year, your ROI (return on investment) has been y visitors from Twitter to your website/blog/article(s), although this is less simple, and it’s the simplicity of these examples that I like.
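Such a calculation might look like the sketch below. All of the figures are invented; your web analytics tool would supply the real referral numbers.

```python
# Illustrative back-of-envelope "return on investment" for a year of
# tweeting. Every figure here is invented.
tweets_sent = 250           # tweets over the year
twitter_referrals = 1_800   # site visitors who arrived via Twitter
total_visitors = 12_000     # all site visitors in the same year

visitors_per_tweet = twitter_referrals / tweets_sent
twitter_share = twitter_referrals / total_visitors

print(f"{visitors_per_tweet:.1f} visitors per tweet; "
      f"Twitter drove {twitter_share:.0%} of traffic")
```

Even a crude ratio like this gives you a simple, repeatable number to report year on year.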

Attention metrics for academic articles: are they any use?

Why do bibliometrics and altmetrics matter? They are sometimes considered to be measures of attention (see a great post on the Scholarly Kitchen about this), and they attract plenty of attention themselves in the academic world, especially amongst scholarly publishers and academic libraries.

Bibliometrics are mostly about tracking and measuring citations between journal articles or scholarly publications, so they are essentially all about attention from the academic community. There are things that an author can do in order to attract more attention and citations: not just “gaming the system” (see a paper on arXiv about such possibilities) but reaching as many people as possible, in a way that speaks to them as being relevant to their research and thus worthy of a citation.

Citation, research, writing and publishing practices are evolving: journal articles seem to cite more papers these days (well, according to a Nature news item, that’s the way to get cited more: it’s a cycle), and researchers are publishing more journal articles (Wikipedia has collated some stats) and engaging in collaborative projects (see this Chronicle of Higher Ed article). If researchers want to stay in their “business” then they will need to adapt to current practices, or to shape them. That’s not easy when it comes to metrics about scholarly outputs, because the ground is shifting beneath their feet. What are the spaces to watch?

How many outputs a researcher produces, and in which journal titles or venues, matter in the UK because of the RAE and REF exercises, and the way University research is funded there.

Bibliometrics matter to Universities because of University rankings. Perhaps such rankings should not matter, but they do, and the IoE London blog has an excellent article on the topic. So researchers need to either court each other’s attention and citations, or else create authoritative rankings that don’t use bibliometrics.

Altmetrics represent new ways of measuring attention, but they are like shape-shifting clouds in comparison with bibliometrics. We’re yet to ascertain which measures of which kinds of attention, in which kinds of objects, can tell us what exactly. My own take on altmetrics is that context is the key to using them. Many people are working to understand altmetrics as measures and what they can tell us.

Attention is not a signifier of quality (as researchers well know: Carol Tenopir has done a lot of research on researchers’ reading choices and habits). Work which merits attention can do so for good or bad reasons. Attention can come from many different sources, and can mean different things: by measuring attention exchanges, we can take account of trends within different disciplines and timeframes, and the effect of any “gaming” practices.

Attention from outside of the academic community has potential as “impact”. Of course, context is important again: for research to achieve “impact”, you’ll need to define exactly what kind of impact you intend to achieve. If you want to reach millions of people for two seconds, or engage with just one person whose life will be hugely enriched or who will have influence over others’ lives, then what you do to achieve impact, and how you measure your success, will be different. But social media and the media can play a part in some definitions of impact, and so altmetrics can help to demonstrate success, since they track attention for your article from these channels.

Next week I’ll be sharing two simple, effective stories of using Twitter and reporting on that use.

Challenges for a Brit, living in Germany

I bought a new computer recently, which provided me with plenty of challenges!

My new computer is one of those old-fashioned ones that comes with separate parts: my old laptop is not ideal for working full-time at home, so I decided to get the sort of thing I had in the office, in the past.

I could have tried to buy a hard drive in the UK and have it shipped here: the Internet is great at that. But I like a challenge, so I went to a local shop and picked up a perfectly good hard drive with Windows 7 installed already. I thought it’d be just a case of buying a new language pack to convert it from German to English. I mean, why would Microsoft miss the opportunity to sell me something?! But no, it seems that since they’re now pushing Windows 8, I can’t upgrade to the version of Windows 7 that has language packs… hmm. Just as well I took all of those German lessons last year!

Well, it’s not so bad: I mean, I do live in Germany and I really ought to learn the language, so having my operating system in German will just help me to learn! Well, yes but it slows me down: although I understand most of what the computer is saying to me, sometimes I have to get the dictionary out, and even when I already know the words, my brain processes the information more slowly!

I did order a QWERTY keyboard from the UK, and an ergonomic one at that: I do a lot of writing and I touch-type. German keyboards are nearly the same: they are QWERTZ, and it’s really annoying for me, because I don’t notice until I try to type a “y”! Of course, because my OS is in German, I have to keep reminding my computer that my keyboard is English. It automatically tries to switch it to German for me, from time to time. Helpful if you’re truly bi-lingual, no doubt. And I do sometimes try to write stuff in German, which further confuses my computer!

Because a lot of my work involves MS Word and Excel files from clients, I also chose to purchase MS Office. I couldn’t do it on the Internet, because the Microsoft website kept recognising that my OS was in German and no matter what I did, the German language version landed in my basket. Fortunately, the telephone sales people were able to help me out (in English!) and I got the English software: phew. No doubt I should have been braver and tried to speak German, but I find computer things quite stressful already!

Every time I download software now, I have to tell it that I want English language. Which means that I have to navigate through the menus in German to find the hidden depths where they let me select English. I’ve actually left MS Explorer in German, and find it interesting/educational to get news headlines in German on the MSN homepage: it is of course, also news about Germany. You realise how much of a bubble the news media create for each country, once you’ve escaped one of them!

So that is a few computer and language challenges, but there are a few other big differences that I noticed from the beginning:

Look left, then right!
I don’t drive here (public transport is excellent in Berlin), so I don’t have to remember which side of the road to be on: the other day I was a passenger in a car and it felt weird to be on the wrong side (in my eyes). I still take longer to cross roads: I look about 5 times, just to be sure that I’ve looked properly. Actually, I do that when I visit home too now, in case I’ve become accustomed to continental ways and I’ve not noticed!

The currency is different, of course, and it took me ages to get used to the coins, and not fumble for ages to pay the exact right amount at the till. My tactic of just handing over whole euro amounts to cover the cost did not work: people here are always asking for the exact 2 euro 38 cents, or whatever!

I also had to get used to carrying cash all the time: even though I do have a euro bank card (which took me a while), it’s not so common to pay by card here, to the point that you might not be able to pay (i.e. can’t buy what you need) if you don’t have cash.

Drink & Food (priority order!)

[Image: a mug of tea next to a box of Yorkshire tea bags]
Anyone who knows me won’t be surprised that I import Yorkshire tea bags!

I miss brown sauce (though it is available from specialist shops here), bacon (no, black forest ham is not the same!) and I also found that I’m having to adapt all my favourite recipes. For instance I now use honey in place of golden syrup, although there are other syrups I might try out: I love the bio shops here with all their variety of foods, and no doubt I’m becoming healthier, as I adapt.

I love waking up to sunshine almost every morning. I really was taken by surprise at how much sun there is here. I loved the snowy winters: Berliners are great at clearing it out of the way and getting on with life, although by the end of the winter you do get the feeling that we’re all essentially hibernating until the warmer temperatures arrive!

What took adaptation for me, was that the sun actually made my rosacea really bad. It turns out that I have a sensitivity to sun that I never knew about when living in cloudy England! I have taken to wearing a hat all the time, so that the shade protects my nose (how English I look!). And factor 50 sun cream, and antibiotic gel at night… now my skin looks normal again. Phew!

The one and only time I’ve really felt homesick, was the sight of a hill-farm country scene on my packet of Yorkshire tea. I didn’t even live in Yorkshire, I don’t understand cricket and I lived in a town all my life, but somehow that scene is quintessentially English and it got to me… Awww!

These are just a handful of the things that use up my emotional energy and brain power! Experiencing such challenges does help one to appreciate what many Library users face, such as all those international students and academics from other countries. It makes life richer, but also slower and sometimes a bit daunting or frustrating!

Openness, replication, validation and the mark of quality in research… and knitting!

Whilst knitting (so I make no claim to have comprehensive notes), I watched a great talk on YouTube by the guy who wrote this paper:

Why most published research findings are false
John P A Ioannidis in PLoS Med (2005)

- Hedge fund managers don’t trust science: how do we know which science can be trusted?

- Looks like replication is an important aspect of science, for us to recognise quality

- Negative results should also be shared and lead to acknowledgement of contribution: there is a particular bias towards reporting of positive results in some disciplines. I think he said: “the analysis planned is different to the analysis published about half of the time” amongst the 60 or so research teams who responded to the author by sending requested protocols. And those who responded must presumably be amongst the most conscientious of researchers: the implication is that those who didn’t respond might publish analysis that is not what was originally planned, in more cases.

- Published articles should have published protocols associated with them, and a number of top journals have now agreed that, as a condition of publication for articles about randomised controlled trials, those trials must be registered in advance.

- Journals might have policies (I think: is this a sign of quality for authors choosing where to publish?), but are they always being adhered to? Not necessarily!

- When small studies’ results are published, the sensible thing is to wait for a larger study to confirm the findings.

- Transparency of data is important too. It sounded like he summarised a study where some top researchers tried to re-do the analysis in 18 papers from a top journal, and they could only replicate the results properly in two articles. There were various problems with the others, which ranged from a lack of availability of the data, through use of home-made and unavailable software, to an un-interpretable description of the methods.

- There are five levels for making research more open and more replicable (and thus more validatable?):

  1. Registration of data
  2. Registration of protocols
  3. Registration of analysis plan
  4. Registration of analysis plan and raw data together
  5. Open live streaming

My reflection on it all was that my very act of knitting is a metaphor or even example for all of these themes, as my knitting is a form of replication. The knitting pattern was available for download for free on the knitters’ community site Ravelry, which is like open access publication, although you can buy individual patterns there too, and there’s frustrating, out of print stuff from books and magazines, too! Also, on Ravelry you can see pictures and notes from others’ projects that use the designs. This is partly replication, but also open, post-publication peer review, as the project notes sometimes point out errors in the instructions. Sometimes, designers then admit to errors and release new versions. It’s also apparent that some designers have already engaged test knitters to try to avoid such a post-publication revision (pre-publication peer review). Some test-knitters might be paid, some are doing a favour for a friend, and some seem to do it for the wool!

I had difficulty interpreting my pattern in one or two places (perhaps because I was watching a fascinating video at the same time!), and had to fall back on my experience/expertise/creativity.*  But finally, I was able to produce a very nice little top, and is that not a form of replication that indicates the quality of the original designer’s work?

* I was using a lace yarn for a top that was designed for worsted yarn, and my gauge with 5mm needles was close but not perfect, so I was destined for a few modifications. I think that this is somewhat akin to data adjustment! And if it was a really negative result, I could list it on Ravelry as an “Ugh”, so maybe I should suggest that Nature and Science start publishing “Ugh”s, asap!

Here’s a picture of what I knitted:


SAGE Publications busts “peer review and citation ring,” 60 papers retracted


Important news from the Retraction Watch blog!

Originally posted on Retraction Watch:

This one deserves a “wjvcow.”

SAGE Publishers is retracting 60 articles from the Journal of Vibration and Control after an investigation revealed a “peer review and citation ring” involving a professor in Taiwan.

Here’s the beginning of a statement from SAGE:


Open Access Publication Gains Acceptance With Authors, Licenses Still Problematic


Always great content at Scholarly Kitchen: very interesting to see what authors want from journal publishers, and that attitudes to altmetrics are fluctuating.

Originally posted on The Scholarly Kitchen:

[Image: Taylor and Francis Author Survey Q6]
A recently published survey of scholarly authors reveals a growing acceptance of the benefits of open access publication, yet authors are still wary of unfettered and commercial reuse of their work.

The 2014 Taylor & Francis Open Access Survey updates and expands upon their 2013 study. Juxtaposing the results of both surveys allowed the researchers to identify potential trends over time. We should reserve a little caution with interpreting some of the changes since there may be evidence of sampling or response bias between the two surveys.

For example, while the 2014 survey received nearly 8,000 responses (a 9% response rate), it was nearly half of the response size of the 2013 survey, which reported a 19% response rate. The demographics of the two response groups also appears to be somewhat dissimilar. Compared to 2013, 2014 respondents were measurably younger (median age 43 versus 46), included more women (39% versus 35%), far fewer full professors (20% versus…


Curating online content and recording information sources: tools I’ve used

I’ve mentioned in an earlier blog-post that the tool I value most for this at the moment, is Evernote. But there are some other tools I’ve had a good look at:

ScoopIt is also a pretty good curation tool, and if you use it often to discover content and link it up with Twitter (there’s bound to be an IFTTT recipe), you can look more original on Twitter at the same time as creating something more visually attractive and useful for yourself than you could do with Twitter alone. The problem I’ve discovered is that your ScoopIt stories look out of date pretty quickly if it’s not a primary tool for you, and I can’t vouch for it being the best place to discover content: a better way to use it might be to investigate the bookmarklet tool.

I’m aware of another such tool largely because one particular user picks up on my tweets, reports on them there and tweets at me to alert/acknowledge them, which is a pretty nice, social way to curate/collate content and report on it.

I used Storify for collecting tweets relating to the Finch report on open access, and I still refer to the collection from time to time. I think Storify is particularly good at collecting tweets about a particular theme, but you can also use it to collect websites and material from other sources. Apparently, Storify also has a bookmarklet tool which I would use if I intended to invest more in Storify.

I also created a collection (or two) of academic papers on EndNote when I was at Warwick, and I exported and then imported the bibliographic data into Mendeley, for future reference. The reason I don’t use either Mendeley or EndNote so much these days is really that I’m not using so much scholarly content. If I were, I’d also want to investigate Zotero as an alternative: it’s a long time since I investigated it but it has a good reputation amongst researchers I’ve met. I note that EndNote’s Desktop version seems to remain the best at re-formatting your bibliographic data into the various styles for journal publication.

I used to use Delicious for website bookmarks but when it lost some features that I valued, I migrated my bookmark collection over to Diigo. Both of these tools, like Evernote, have handy content-adding tools for my browser toolbar (bookmarklets). My Diigo collection is nicely tagged but not maintained so much these days, because I prefer the way Evernote copies the content of sites. I once spent some considerable time weeding out dead links from my bookmarks, so it seems to me better to have a copy of content for future reference, in case the original webpage is moved/removed: apparently, Pocket can also do this.

Overall, the convenience of Evernote prevails, for me. It’s apparently a “productivity” tool and not only for content curation, although that’s how I use it at present: I know it’s more powerful. (I’m sensing that “productivity” is a keyword for folks at companies who provide these tools, especially Mendeley in their recent webinar for Librarians.)

Brian Kelly’s blog post on Evernote from Jan 2014 compares it to Simplenote, explaining why he’s sticking with Evernote. And if you want to explore productivity tools further, you could do worse than looking at the Libguide from the University of Minnesota on “Digital Academic Workflow tools”.