My favourite social media “rules”

If you’re thinking of creating your own social media strategy (or updating an existing one), then you could do worse than read through these 80 “rules”. The list seems aimed at companies using social media for financial gain, and some of the advice seems suitable for those building social media tools. A lot of it is focussed on the role of the audience or tool users, and much of it is just good advice for us all. Here are excerpts from a few of my favourites:

  • No. 9. “Go wherever your audience is”: So, choose Twitter, Facebook or Google+, according to the people you want to reach.
  • No. 12. “Update your page or delete it”: Easier said than done, but definitely good advice!
  • No. 23. “Just because you can measure everything doesn’t mean that you should”. They also develop the point to say that “likes and mentions look good on a report, but will not keep you in a job”. They suggest that ROI (Return on Investment) or NPS (Net Promoter Score) will, but perhaps your own job will depend on other criteria!
  • No. 24. “Social media is not cheap or easy”. (Rule 74 later explains that Gangnam Style was a carefully planned success, rather than a viral one!)
  • No. 42. “If fans start publishing and sharing your content without permission, offer to help”.

Finally, lots of these rules seem to say that it’s all about speed, not perfection, and that you should have a higher purpose. To paraphrase: get stuff out there, and make sure it’s going to make lives easier, happier, or more rewarding!


A webinar called “Mastering motivation: the neuroscience of engagement and collaboration”

I watched a recorded webinar over lunch the other day, and it became an extended lunch as I took notes for this blogpost. The speaker is Michael Bungay Stanier and he seems to be a leadership coach or consultant to companies. I found the webinar title interesting: researchers are often sceptical of management training, but advice that is based on scientific research must surely appeal!

I’d have liked more links and references to neuroscience research, but the webinar isn’t really about that. It’s about four factors that can influence our brain’s degree of comfort and thereby increase our engagement and collaboration with each other. Much of the webinar is about how we can take control of those factors, and those tips don’t seem to come from neuroscience: they are common sense, and familiar to me from other management training that I’ve taken part in. So it’s good, but not what I expected.

Here is my summary of the webinar:

Neuroscience is the study of how the brain works. It tells us how people’s brains are reacting to questions or tests, and we can draw some conclusions from that.

Neuroscience tells us that the human brain needs to feel that things are safe: we aren’t aware of it at a conscious level, but the brain is running a programme in the background that is constantly checking safety, and it will lead you away from risky and dangerous things. So it is important that we make our environment feel safe, to reassure our “lizard” or primitive brains. (Entrepreneurs may be able to review situations and see them as less risky than others do.)

Michael identifies four factors that we can influence to make the brain feel safe (a nice acronym: TERA).

Tribe – In the company of others, your brain is asking: “Are you with me or against me?” So we can try to increase this sense of belonging to the same tribe: tips include smiling, laughing together, small talk at a virtual meeting (ask people to share their high point of the last week) and other tactics for achieving rapport and empathy. He suggests defining a common goal or a common enemy!

Expectations – Your brain is asking: “Do I know what’s happening? Can I predict what will happen?” If it’s really obvious what will happen, then the brain feels more comfortable, but if it’s too comfortable then you will get bored and distracted. Setting an agenda is important for a meeting. Be clear about timing and outcomes when talking about things: eg “let’s talk about this for five minutes, and in that time we’ll try to come up with x, y and z”. An agenda doesn’t have to be standard, or set before the meeting. We should start a meeting by setting the agenda together: “What are the key decisions we need to make?” Ask a different question at the start of each meeting, to keep things fresh.

Rank – People feel more comfortable if they have high status, and more threatened if they feel they are of lower rank. The sense of rank can be influenced.

  • If you are of lower rank and want to increase it: stand up to face the rest of the meeting when speaking. If you have a question and want to seek help: consider trying to answer it yourself first. (See below for the way to answer a question with other questions!)
  • If you are of higher rank and want to make others feel more comfortable: talk at the same level as others, and perhaps sit at 90 degrees to them rather than directly opposite. Praise people. Learn and use names. Listen to each other! Let others go first. If someone asks you a question and you just give your advice or answer, then you increase your own status, but if you respond by saying “that’s a great question, what ideas do you already have?”, then you can increase their status. Then ask them, “what else?” Beware of sounding patronising: tone is important, so be genuinely interested in the other person’s answers.

Autonomy – What are the small decisions you can get others to make, rather than making them yourself? Increase your reports’ sense of autonomy, and give yourself a break from working so hard! Decide the agenda together.

At the beginning, Michael asks you to think of someone who you are trying to manage, lead or collaborate with, and apply this theory. What’s very important to you in this setting, and what’s least important? And what is important to the other person? At the end, he asks whether what is important is the same for both parties. 71% of the people who responded to the poll in the live webinar said that no, it wasn’t the same. Being aware of this might make you do things differently. He asks: what two things will you do differently, now that you know this?

My two things:

  • Try not to automatically answer every question that is asked of me.
  • Start meetings a little more slowly: I’m always eager to get stuck in!

Amongst the discussion at the end, there are lots of tips on how to handle lateness at meetings. And another key phrase I picked up on is that sometimes we have to “pick our battles”. So true!

Old-fashioned but active online groups: e-mail lists!

As a librarian, I find that many e-mail lists look interesting. In the past, I’ve been an active member of such lists, answering questions from peers and indeed asking questions of them. At their best, they’re more than a place to watch for information: they’re an active forum for discussion and sharing of expertise and good practice.

There are three main sources that I’ve identified: take a look and see if you find something interesting. My tip is to look at the archives of a list that seems good, to see how much activity it has: is that the level of activity you are looking for?

Are there other lists of e-mail lists that you recommend for librarians?

Further thoughts on Peer Review & speeding up traditional journal publication

Back in January, I wrote about Peer Review. It’s a big topic! Here are some more reflections, following on from my last blog post about it.

Speeding things up in journal article publication (on “Peer review takes a very long time”)


I wrote that peer review “takes a very long time” because many scholars want to get their work out there to be read as soon as possible. Of course, this is a loose concept and “a very long time” is relative. Some might think that I am criticising publishers for being slow, but I’m not pointing the finger of blame! I know that publishers have been addressing the issue and peer review has sped up in recent times, especially since there is now software that can help track it: SPARC has a handy round-up of manuscript submission software. However, the peer reviewers themselves must respond, and they are under a lot of pressure. The system can only be as fast as the slowest reviewer, and there are all sorts of (entirely understandable) circumstances that might slow an individual down.

I should take a look at some of the developments that have helped to speed up traditional scholarly communication, though:

Scholarly publishers have invested in initiatives like Sage’s OnlineFirst to help peer-reviewed research articles reach audiences before journal issues are complete, thus cutting publication waiting periods.

Some publishers have also introduced mega journals with cascading peer review systems, which are often based on Gold Open Access. Impact Story’s blog has a great post about how authors can make the most of these types of journal. These speed up an article’s time to publication because, after a peer review that led to rejection from one title, your paper can be fast-tracked through to publication in the next “tier” title at the same publisher, without the need to submit again and start the process from the very beginning.

And of course, as a librarian I should mention the sophisticated alerting services that help researchers to find out about each other’s papers as soon as possible: researchers are no longer dependent on the print copy landing on their desk, and finding the time to browse through the table of contents!

Putting it online yourself is quicker: why not try that?

Some research repositories might take non-peer-reviewed content, and in theory, authors could always put a copy of their work on a personal web-page before peer review if they’re confident in it and just want it out there. There are disciplinary differences in authors’ reactions to this idea. This article in PLOS Biology makes the case for the biology community following in the footsteps of physics, in using pre-print servers to share such early versions. Its authors point out that there are benefits to doing this, including:

Posting manuscripts as preprints also has the potential to improve the quality of science by allowing prepublication feedback from a large pool of reviewers.

Many authors would not share their early manuscripts in this way, because they value peer review as a process of polishing their work. I think this is a reason for peer review to take place in the open, because then it becomes apparent just how important a contribution a peer reviewer might have made to a paper. As I said in my previous post, peer reviewers should get credit for their work, but perhaps I should have made it clear that I’m not talking about it looking good on their CV, or their peer review activity going down well with their Head of Department!


Even authors who are happy to share un-polished pre-peer-review versions of their work (aka pre-prints, aka manuscripts) might be wary if it is not the norm in their discipline, because it might prejudice their chances of publication in the big-name journals of their field. Authors will likely have to agree to clauses stating that the work has not previously been published elsewhere. When I worked at the University of Warwick, in the early days of their institutional repository we surveyed a number of big publishers to ask if they would consider repository deposit to constitute prior publication, and thus a breach of this kind of clause in their authors’ agreement. Some said yes, some said no.

This is not such a clear area for authors, and for many it’s not worth the time of enquiring, or the risk of finding out the hard way, i.e. through rejection of their article because plagiarism detection software identifies it as previously published online. Researchers need the quality “badge” that a journal gives them, for their CV and their institution’s performance review processes: publishing articles is not only about communication to other researchers, but also about kudos.


For some authors, therefore (I would guess most), the earliest version they might share would be a post-peer-review version (sometimes called a post-print, sometimes called an author’s final version), which, if there are no embargo periods from the publisher, would become available at the same time as their article became available through an OnlineFirst scheme.


Post peer review: commentary and altmetrics

I mentioned post-publication peer review in my previous post: I thought about it as an alternative to peer review then, but perhaps I should think about it more as something that is complementary to peer review. Perhaps peer review doesn’t need to be either traditional or post-publication: perhaps it is already a process that doesn’t end with publication.


There are many ways that researchers are sharing and commenting on each other’s work after it has been published, and therefore after the peer review process for traditional articles. We can track these interactions on sites like ResearchGate and Mendeley, and through altmetrics software that collates data on such interactions… but altmetrics and its role is a subject I’ve looked at separately already, and it’s one I’m likely to return to again later!

How to spend 30 effective minutes on social media

I came across a great blog post by Kevan Lee on Buffer that outlines all the kinds of activities you could be doing on social media, and provides different types of plan for how to spend 30 minutes there. (There’s quite a bit of good advice over on Buffer, if you’ve got time to read around.)

This particular post helped me to reflect on my social media mini-strategy, which I wrote about in May last year, along with the work I’m now doing for Piirus, managing their blog. I recognised that what I do personally with social media is rather different to what I do for Piirus. The kinds of activities that I focus on for myself, from the list in the Buffer blog post, are: Curating, Crafting and Experimenting. I keep wishing that I was more social, but I can’t do everything! I focus on my online profile, and on learning.

However, when I’m working for Piirus, the way I’d spend that 30 minutes is to follow this recipe from Kevan’s blog post:

How to spend the 30 minutes:

  • 5 minutes rescheduling popular content
  • 15 minutes queueing content from your go-to sources
  • 10 minutes responding to mentions on social media

This is pretty much a daily activity for me, on behalf of Piirus, although some days I take less than 30 minutes. Other days, I spend more time, and take a look at some analytics (so that I know what is popular content) or I look for events and ways to engage.

Which “recipe” for 30 minutes might you use, and which activities do you invest most time in on social media? Reflecting on this blog post might help you to identify the strategy you are already following, or the one you might wish to follow.

Peer review of journal articles: how good is it really? A librarian evaluates an evaluation system for scholarly information sources.

Peer review is a signifier of quality in the scholarly world: it’s what librarians (like me) teach students to look out for, when evaluating information sources. In this blog post, I explore some of the uses, criticisms and new developments in the arena of scholarly peer reviewing and filtering for quality. My evaluation of this evaluation system is fairly informal, but I’ve provided lots of useful links.

What is peer review?

It varies from one process to the next, but ideally, scholarly journal articles are chosen and polished for publication by a number of other scholars or peers, in a process known as peer review (sometimes called refereeing). Sometimes only two reviewers are used per article, sometimes three; plus, of course, the journal editor and editorial board have roles in shaping what sort of content is accepted in the journal.

Sometimes the process is “double-blind”, in that the reviewers don’t know who the author(s) are and the authors don’t know who the reviewers are; sometimes it is only “blind”, in that the author(s) don’t know who the reviewers are. In this way, the reviewers can be critical without fearing that they might suffer negative career consequences.

However, one problem with peer review worth noting here (although not explored below) is that peer reviewers’ criticisms can often be brutal, because they are made under the protection of anonymity. I also think that time pressures mean that peer reviewers don’t phrase their thoughts “nicely”, because doing so simply takes too long and they don’t have that time to invest.

Double-blind reviewing is not always possible: it can be difficult to disguise the authors’ identities, since the research described in the paper might be known to peers, for example when only one or two labs have the specialist equipment used.

There’s more information on peer review over on the PhD Life blog, which explains what reviewers might be looking for and the possible outcomes of peer review. It also explains some of the other quality-related processes associated with scholarly journal publishing, such as corrections and retractions.

Peer review happens in other contexts too, such as the UK’s REF, which has been heavily criticised as not being the “gold standard” that it should be, because reviews of outputs were carried out only by British scholars, and a paper might be read by only one reviewer in this process.

Another frequent peer review process is when research funding bids are reviewed and grants are awarded: panels are often made up of peers. I’ve done this and it’s a valuable experience that helps you to hit the right note in your own future funding applications, but it is also hard work to read all the bids and try to do them all justice.

It sounds good, so why ask how good it is?

Journal publishing is always growing, and peer review is under pressure. A recent scam involving researchers peer reviewing their own papers, and its discovery, is described by the Ottawa Citizen. Every year I read about papers that have been published in spite of journals’ quality filters. The Retraction Watch website highlights stories of published scholarly articles that journals have retracted, i.e. where the research findings described are not reliable.

Here are some of the flaws of the peer review process, in relation to journal articles.

1) It takes a very long time

I sense frustration about long journal turnaround times, and peer review takes up quite a lot of that turnaround time. When you think about how much pressure there is on academics to write and to publish, how they get little recognition and no financial compensation for participating in the peer review process, how important it is to be seen to be the first to publish on something, and how scholarly work can be built upon sooner when it is published more quickly, it is no surprise to me that review times are not so fast.

2) It’s not efficient

If you submit to one journal and are peer reviewed and then rejected, you can then submit to another journal, which might also put your article forward for peer review. Some people might call this redundant reviewing (since the work has already been done!) and it does add to the time lag before research can be published and shared. In response, there have been attempts to share reviewed papers, such as when your paper is rejected from one journal but you are invited to submit it to another journal title from the same publisher instead.

3) Peers themselves get no credit or compensation for their work

There is a service called Rubriq that tries to address this criticism, and all of my points above. They offer authors a service of having their papers independently reviewed, for a fee. They track reviewers’ work in a way that allows the reviewers to demonstrate their contribution to the field through reviewing, and they also pay the reviewers a fee, although this can be waived by reviewers who are not able to earn in this way, and it is not thought to be the full value of the input supplied by reviewers.

Authors often suggest appropriate reviewers anyway, so if they supply an already-reviewed paper to a journal, perhaps the editor might accept the process carried out by this independent company. Rubriq have a network of journals that they work with.

4) Some articles don’t even reach peer review

A recent piece in Nature News summarises findings of research indicating that whilst journals are good at filtering out poor-quality articles through peer review, they are not so good at identifying the papers that will be most highly cited in the long term. 12 out of the 15 most cited papers involved in the study were rejected at first, before finally making it to publication. Perhaps this is because, after rejection by peer review, articles were improved and re-submitted, so the system is working, although I think that the peer reviewers in such instances deserve credit for their contribution. However, this assumes that the more highly cited articles are in fact of higher quality, which is not necessarily the case. (See below for a brief consideration of citations and bibliometrics.)

Rejection after peer review is one scenario. The other is often called “desk rejection”, where an editor chooses which articles are rejected straight away, and which are sent to peer review. Editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines, and not always on the quality of the research.

The message that I take from this is that authors whose papers are rejected can take heart, keep improving their paper and keep trying to get accepted for publication; but in trying to please editors and peer reviewers, we are potentially reinforcing biases.

5) Negative results are not published and not shared

This is another case of biases being perpetuated. There are concerns about the loss to scientific knowledge of negative findings, when a hypothesis was tested but not found to be proven. Such findings rarely make it into publication, because what journal editors and peer reviewers seek to publish is research that makes a high impact on scientific knowledge. And yet, if negative results are not reported then there is a risk that other researchers will explore the same avenue and thus waste resources. Also, if research is replicated but the original findings are not confirmed, this is potentially valuable to science, because it could be that the already published work needs correcting. But the odds are stacked in favour of the original publication (it was already peer reviewed and accepted, after all), such that the replication might not be published. Science needs to be able to accommodate corrections, as the article I’ve linked to explains, and one response has been the emergence of journals of negative results.

What are the alternatives to traditional peer review?

I don’t suppose that my list is comprehensive, but it highlights things that I’ve come across recently and frequently, in this context.

John Ioannidis has written that most published research findings are false, and one answer could be replication. A measure based on replication could be useful to indicate the quality of research. But who wants to reproduce others’ research when all the glory (citations, research funding, stable employment) is in making new discoveries? And it’s not simple to replicate others’ studies: we’re often talking about years of work and investigation, using expensive and sophisticated machinery, and quite often there will be different variables involved, so for some research it can never be quite an exact replication.

Post-publication peer review is another possible way to mark research out as high quality. I really like what F1000 are doing, and they explain more about the different ways that articles can be peer reviewed after having been published. I’m not sure that I want to rely on anonymous comments fields, although of course they can bring concerns to light, and this is only one kind of “peer review”. I use quotation marks because, if the comments are anonymous, how do you know that they are from peers? But if the peer reviewers and their work are attributed, then I find this to be a really interesting way forward, because one of the pressures on peer review is the lack of acknowledgement, and the removal of anonymity is one way to address this.

I like the concept of articles being recommended into the F1000Prime collection: this is almost like creating a library, except that it’s not a librarian who is a filter but a scholarly community. In fact, many librarians’ selections come from suggestions by scholars anyway, so this is part way to a digital library. (Although I believe quite firmly that it is not a library, not least because access to the recommendations is restricted to paying members.) Anyway, a recommendation from a trusted source is another way to filter for quality. The issue then becomes, which sources do you trust? I blogged recently about recommendation systems that are used in more commercial settings.

I have to mention metrics! I’ll start with bibliometrics, which usually means measuring or scoring based on citations between journal articles or papers. For many, this is a controversial measure, because there are many reasons why a paper might be cited, and not all of those reasons mean that the paper itself is of high quality. And indeed, there are many high quality papers which might not be highly cited, because their time has not yet come or because their contribution is to a field in which article publication and citation are not such common practice. The enormous growth in scholarly publication has also meant that citation indices might be criticised for too narrow a coverage.

In general, in the lead-up to REF2014, researchers in the UK were keen not to be measured by bibliometrics, preferring to trust in peer review panels as a better way to evaluate their research. Yet citation indices allow you to order your search results by “most highly cited”. Would they do this if there were no interest in it as a measure of quality? Carol Tenopir has done some really interesting work in this area.

If you think that bibliometrics are controversial, then altmetrics have attracted some of the juiciest criticisms of all, being described as attention metrics. Yes, altmetrics as a “score” can be easily gamed. No, I don’t think that we should take the number of Facebook “likes” (or worse, a score based upon those and/or other such measures, calculated in a mysterious way) as an indicator of the quality of someone’s research. But I think that reactions and responses to a published research article, as tracked by altmetric tools, can be enormously useful to the authors themselves. I’ve written about this already. Altmetrics require appropriate human interpretation: pay the scores too much attention and you will miss the real treasures that other people have also missed.

So how good is peer review, really?

It is a gold standard. It is what publishers do when time and resources allow. But it is not perfect and it is under pressure, and I’m really intrigued and impressed by all the innovative ways to ensure and indicate quality that are being explored. Of all the alternatives that I’ve discussed here, I’m most keen on the notion of open peer review, where review is not anonymous but attributed. This might be post-publication or pre-publication, but I’m keen that we should be able to follow peer reviewers’ and editors’ work.

A lot of these changes to scholarly publishing in the digital era seem to me to mean that the librarian’s role as a filter of information is pretty much at an end. But our role as a guide to sources and instructor in information literacy is ever more important. I would still teach budding researchers to consider peer-reviewed works to be more likely to be of high quality, but I would also say that they should apply their subject knowledge when reading a paper, and they should look out for other signs of quality or lack thereof. Peer review (and how rigorous it is) is one of a number of clues, and in that sense nothing much has changed for librarians teaching information literacy, but we do have some interesting new clues to tell our students to watch out for.

How do you assess the quality of recommendations?

I wrote here last year about the marvellous Fishscale of academicness, a great way to teach students information literacy skills by starting with how to evaluate what they’ve found. I’m currently teaching information ethics to Masters students at Humboldt Uni, and this week’s theme is “Trust”: it touches on all sorts of interesting topics in this area, including recommendation systems, also known as recommendation engines.

An example of such a recommendation system in action would be the customer star ratings for products on Amazon, which are averaged out and may be used, amongst other information, as a way to suggest further purchases to customers. Or reviews for hotels and cafes on Tripadvisor, film suggestions on Netflix, etc. Recommendations are everywhere these days: Facebook recommends apps you might like and will suggest “people you may know”; LinkedIn and Twitter work in similar ways.

For me, these recommendations raise certain questions, which also turn up in debates about privacy and about altmetrics, such as:

How much information do you have to give them about yourself, do you trust them with it, and how good are their recommendations anyway? Are you happy to be influenced by what others have done/said online?

Recommendation systems use “relevance” algorithms, which are similar to those used when you perform a search. They might combine a number of factors, including:

  • Items you’ve already interacted with (i.e. suggesting similar items, called an item-to-item approach)
  • User-to-user: it finds people who are similar to you, eg they have displayed similar choices to you already, and suggests things based on their choices
  • Popularity of items (eg Facebook recommends apps to you depending on how much use they’ve had). Note that this may have to be balanced against novelty: new items will necessarily not have achieved high popularity.
  • Ratings from other users/customers (here, they might weight certain users’ scores more heavily, or average star ratings, or just preference items with a review)
  • Information that they already have about you, against a profile of what such a person might like (eg information gleaned from tracking you online through your browser or on your user profile on their site, or that you have given them in some way)

The sophistication of the algorithm used and the size of the data pool drawn on (or lack thereof) might also depend on how quickly the system needs to respond.
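To make the item-to-item and user-to-user ideas in the list above a little more concrete, here is a minimal sketch in Python. The ratings data, function names and similarity measure are invented for illustration, and real systems blend many more of the factors listed above, so treat this as a toy rather than a description of how Amazon or Netflix actually do it.

```python
# A toy item-to-item recommender. Ratings are a dict of user -> {item: score};
# real systems blend many more signals (popularity, profiles, novelty, speed).
from math import sqrt

ratings = {
    "ann":  {"book_a": 5, "book_b": 4, "film_x": 1},
    "bob":  {"book_a": 4, "book_b": 5},
    "cara": {"film_x": 5, "film_y": 4, "book_b": 1},
}

def item_vectors(ratings):
    """Re-arrange the data so each item maps to {user: score}."""
    items = {}
    for user, scores in ratings.items():
        for item, score in scores.items():
            items.setdefault(item, {})[user] = score
    return items

def cosine(a, b):
    """Cosine similarity between two sparse {user: score} vectors."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[u] * b[u] for u in shared)
    norm_a = sqrt(sum(v * v for v in a.values()))
    norm_b = sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b)

def recommend(user, ratings, top_n=2):
    """Suggest unseen items most similar to the items this user rated highly."""
    items = item_vectors(ratings)
    seen = ratings[user]
    scores = {}
    for liked, liked_score in seen.items():
        for candidate, vec in items.items():
            if candidate not in seen:
                sim = cosine(items[liked], vec)
                scores[candidate] = scores.get(candidate, 0.0) + sim * liked_score
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("bob", ratings))  # ['film_x', 'film_y'] for this toy data
```

The idea is simply that two items are “similar” if similar sets of users rated them in similar ways, and a user is then offered the unseen items closest to the ones they already rated highly.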

Naturally, those working on recommendation engines have given quite a bit of consideration to how they might evaluate the recommendations given, as this paper from Microsoft discusses, in a relatively accessible way. It introduces many relevant concepts, such as the notion that recommending things that it knows you’ve already seen will increase your trust in the recommendations, although it is very difficult to measure trust in a test situation.

We see that human evaluation of these recommendation systems is important, as click-through rate (CTR) is easily manipulated and inadequate as a measure of the usefulness of recommendations, as described and illustrated in this blog post by Edwin Chen.
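One standard offline measure that is often used alongside click-through rates and human judgement is precision@k: of the top k items recommended, what proportion did the user actually go on to find relevant? This is a general, textbook-style measure rather than something taken from the posts linked above, and the data in this sketch is invented:

```python
# Precision@k with invented data: of the first k recommendations, what fraction
# did the user actually go on to find relevant?
def precision_at_k(recommended, relevant, k=5):
    top_k = recommended[:k]
    if not top_k:
        return 0.0
    hits = sum(1 for item in top_k if item in relevant)
    return hits / len(top_k)

recommended = ["film_x", "book_c", "film_y", "book_a", "film_z"]  # system's ranked list
relevant = {"film_x", "film_y", "book_d"}                         # what the user later rated highly

print(precision_at_k(recommended, relevant, k=5))  # 0.4
```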

Which recommendations do you value, and why? I also came across a review of movie recommendation sites from 2009, which explains why certain sites were preferred and gives plenty of food for thought. From my reading and experience, I’d start my list of the kind of things that I’d like from recommendation systems with:

  • It doesn’t take information about me without asking me first (lots of sites now have to tell you about cookies, as the Cookie collective explain)
  • It uses a minimal amount of information that I’ve given it (and doesn’t link with other sites/services I’ve used, to either pull in or push out data about me, unless I tell it that it can!)
  • Suggestions are relevant to my original interest, but with the odd curveball thrown in, to support more serendipitous discovery and to help me break out of the “filter bubble”
  • Suggestions feature a review that was written by a person (in a language that I speak), so more than just a star rating
  • Suggestions are linked in a way that allows me to surf and explore further, eg filtering for items that match one particular characteristic that I like from the recommendation
  • I don’t want the suggestions to be too creepily accurate: I like to think I’ve made a discovery for myself, and I doubt the trustworthiness of a company that knows too much about me!

I’m sure there’s more, but I’m equally sure that we all want something slightly different from recommendation systems! My correspondence with Alke Groeppel-Wegener suggests that her students are very keen on relevance and not so interested in serendipity. For me, if that relevance comes at the expense of my privacy, so that I have to give the system lots of information about myself, then I definitely don’t want it. What about you?

What use is social media to a researcher? Find out at a Google Hangout event

I’m very pleased to be taking part as a panellist in an online Q&A session called “How to be a successful digital academic to boost your career.” It takes place on 27th Jan at 12 noon, GMT and is hosted by none other than the Thesis Whisperer, Dr Inger Mewburn!

We’ll be exploring the theme of social media and its usefulness to academics. Do you think social media is useful, or do you wonder how you could possibly make use of it, as a researcher? I’m sure that the expert panel will have some ideas of interest to you! Themes of online engagement through blogs, as well as writing for online audiences are bound to emerge, in addition to digital networking.

I was invited in my capacity as editor of the Piirus blog, and I’m sure I’ll explain a little bit about how Piirus differs from other online tools. It’s more of an online dating or introductions agency, and it’s extremely light touch. Its purpose is to help researchers make connections beyond their disciplines and beyond national borders. It also comes from the academic community itself, and is based at the University of Warwick alongside jobs.ac.uk, the hosts of the Google Hangout event.

If you’ve never attended such an event online before, well they are something like a webinar, and something like a live conference panel session. You get to type in questions to the host, who will pass them on to the panellists. You can even send in questions in advance. During the event, you can sign in and then see and hear the panellists discussing the questions. If you can’t attend the event live, well no worries: it will be recorded so that you can watch it later.

There is a lot more information about it, over on the event page on Google+. I hope you find it valuable!

A quick way to save time online: a browser for privacy

Here is my tip for 2015: install a browser that is specialised for privacy, such as the Epic Privacy Browser. I find it much quicker than editing settings in other browsers, and it really does lead to faster webpage loading times. Tracking information is not passed on to the websites that you view and adverts are not personalised, and that’s why it’s quicker, I believe.

Happy New Year to one and all!


Alerts are really helpful and (alt)metrics are interesting, but academic communities are key to building new knowledge.

Some time ago, I set Google Scholar to alert me if anyone cited one of the papers I’ve authored. I recommend that academic authors do this on Scopus and Web of Science too. I forgot all about it until yesterday, when an alert duly popped into my e-mail.

It is gratifying to see that someone has cited you (and perhaps an occasional reminder to update the h-index on your CV), but more importantly, it alerts you to papers in your area of interest. This is the paper I was alerted to:

José Luis Ortega (2015) Relationship between altmetric and bibliometric indicators across academic social sites: The case of CSIC’s members. Journal of Informetrics, Volume 9, Issue 1, Pages 39-49 doi: 10.1016/j.joi.2014.11.004

I don’t have a subscription to ScienceDirect so I couldn’t read it in full there, but it does tell me the research highlights:

  • Usage and social indicators depend on their own social sites.
  • Bibliometric indices are independent and therefore more stable across services.
  • Correlations between social and usage metrics regarding bibliometric ones are poor.
  • Altmetrics could not be a proxy for research evaluation, but for social impact of science.

and of course, the abstract:

This study explores the connections between social and usage metrics (altmetrics) and bibliometric indicators at the author level. It studies to what extent these indicators, gained from academic sites, can provide a proxy for research impact. Close to 10,000 author profiles belonging to the Spanish National Research Council were extracted from the principal scholarly social sites: ResearchGate, Academia.edu and Mendeley and academic search engines: Microsoft Academic Search and Google Scholar Citations. Results describe little overlapping between sites because most of the researchers only manage one profile (72%). Correlations point out that there is scant relationship between altmetric and bibliometric indicators at author level. This is due to the almetric ones are site-dependent, while the bibliometric ones are more stable across web sites. It is concluded that altmetrics could reflect an alternative dimension of the research performance, close, perhaps, to science popularization and networking abilities, but far from citation impact.

I found a fuller version of the paper on Academia.edu and it is indeed an interesting read. I’ve read other papers that look specifically at altmetric and bibliometric scores for one particular journal’s articles, or articles from within one discipline. I like the larger scale of this study, and the conclusions make sense to me.
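For anyone wondering what a “correlation between altmetric and bibliometric indicators at the author level” looks like in practice, here is a minimal sketch of that kind of calculation. The numbers are invented purely for illustration and are not taken from Ortega’s dataset, which was harvested from the real author profiles listed in the abstract.

```python
# An illustrative calculation of the kind of correlation Ortega reports:
# one altmetric-style indicator and one bibliometric indicator per author.
# These numbers are invented, purely to show the method.
from scipy.stats import spearmanr

social_scores   = [120, 15, 300, 8, 45, 60, 2, 210]  # e.g. followers/readers on a social site
citation_counts = [35, 40, 12, 3, 80, 25, 10, 5]     # e.g. citations per author

rho, p_value = spearmanr(social_scores, citation_counts)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
# A rho near zero would support the paper's conclusion that the two kinds of
# indicator capture rather different things.
```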

And my paper that it cites? A co-authored one that Brian Kelly presented at the Open Repositories 2012 conference.

Kelly, B., & Delasalle, J. (2012). Can LinkedIn and Academia.edu Enhance Access to Open Repositories? In: OR2012: the 7th International Conference on Open Repositories, Edinburgh, Scotland.

It is also a paper that is on Academia.edu. I wonder if that’s partly why it was discovered and cited? The alt- and biblio-metrics for that paper are not likely to be high (I think of it as embryonic work, for others to build on), but participation in an online community is still a way to spread the word about what you’ve investigated and found, just like attending a conference.

Hence the title of this blog post. I find the alert useful to keep my knowledge up to date, and the citation gives me a sense of being part of the academic community, which is why I find metrics so interesting. What they tell the authors themselves is of value, beyond any performance measurement or quality signalling aspects.