Catching up and concentrating on clients: the life of a freelancer

I’m looong overdue a blogpost: last time I wrote here it was about an Elsevier event in November! I’ve not been quiet exactly, just busy working for my clients. So I thought I’d share a quick look at some of my recent/current work.

Next week, I’m excited to be going to the OCLC EMEA Regional meeting. It happens to be in Berlin, and OCLC are a relatively new client. I’m going along to listen out for soundbites from the speakers, and we’ll see what I write about, based on the inspiring talks in the programme.

I’m always busy working on the social media output for the Piirus blog, based at the University of Warwick alongside the more famous PhD Life blog. We’ve got a great team of correspondents who share the role of “tweeter” each day, and our blogposts cover a lot of themes relevant to the early career researcher, from time management and networking to career paths and academic consultancy. We’ve got more good stuff in the pipeline, so if you’re looking for a blog to read, then keep an eye out over there.

A hand with a pen, writing on paper at a desk

Since I became freelance, I’ve been regularly writing book back covers and unique selling points for SpringerNature: this is fascinating work as the books come from across the disciplines, from maths and philosophy to physics and health sciences. I get a little insight into some of the excellent research that is being published and learn something new with every book that I write for.

And I’ve been teaching the Information Ethics & Legal Aspects module at Humboldt University again, this last semester. I really like teaching it as I can see how it gives the students food for thought. And it’s so topical: we always find stories in the news to illustrate our themes. The students had their oral exam last week and I’m pleased to say that they all passed!

Well, those have been the main, big projects recently. Those and the tax reporting 😉

Image credit: CC0, via Pixabay

How to close your blog gracefully

I wrote this a while ago but it went live at a very busy time so only now am I really getting around to promoting and sharing it. I am very privileged to have featured as a guest blogger on the Thesis Whisperer blog: it’s a blog that I often like to read! Anyway, read on for my collated experience and observations about closing blogs…

The Thesis Whisperer

This post is by Jenny Delasalle, a blogger and freelance blog manager for the Piirus blog, amongst many roles past and present. Piirus is an online research collaboration matching service, provided to the international research community by the University of Warwick, UK, and it aims to support researchers through its blog, as well as by introducing them to each other. Here, Jenny looks into a theme which she confesses she’s sometimes got wrong herself: some ways to quit blogging!

There are lots of great reasons to blog, but there are also sometimes reasons to stop. You might not be getting benefits from your blog any more, or your interests might change. Maybe you’ve ‘inherited’ a blog along with a new job, but blogging isn’t your style. Blogging is potentially an endless commitment, so choosing how and when to stop is difficult and there’s not much advice out…

View original post 918 more words

A useful tool for librarians: metrics knowledge in bite-sized pieces, by Jenny Delasalle

Here is a guest blogpost that I wrote for the new, very interesting Bibliomagician blog.

the Bibliomagician

Having worked in UK academic libraries for 15 years before becoming freelance, I saw the rise and rise of citation counting (although, as Geoffrey Bilder points out, it should rightly be called reference counting). Such counting, I learnt, was called “bibliometrics”. The very name sounds like something that librarians should be interested in, if not expert at, and so I delved into what these metrics were and how they might help me and also the users of academic libraries. It began with the need to select which journals to subscribe to, and it became a filter for readers to select which papers to read. Somewhere along the road, it became a measurement of individual researchers and a component of university rankings: such metrics were gaining attention.

Then along came altmetrics, offering tantalising glimpses of something more than the numbers: real stories of impact that could be found through online tracking. Context…

View original post 880 more words

Two events this week: one in Berlin, one on Twitter for #ECRchat

Busy times here, as term is underway at Humboldt University. As well as teaching on Wednesdays, today is the day that I present with my co-tutor at Humboldt’s School of Library & Information Science, as part of the BBK series, about how we teach our Information Ethics module and why Berlin is a suitable place for our topic.

And Thursday is the day of a long-awaited #ECRchat on networking and opportunities in the third and public sector, at 11am UK time. ECRchat is an event/chat on Twitter itself, using the hashtag #ECRchat. If you’re not already used to hashtag events on Twitter, then the easiest way to follow the event would be to look at the Piirus blogpost that I linked to above, at the time of the chat. Or to wait until a Storify summary is announced on the #ECRchat channel.

I am also full of inspiration from last week’s Frankfurt book fair, but you’ll have to wait for me to blog about it, because I obviously have a lot of things on at the moment!

Is the “Data-Librarian” the Future of Library Science?

Next week I’ll be at the Frankfurt book fair! I’m going to be on a panel at an event with this title. If you’ll also be at the fair and fancy hearing me speak, then here are the details:

Thursday, October 15th, 10-10:30 a.m., on the Hot Spot Professional & Scientific Information stage in hall 4.2 (Frankfurt Book Fair)

It’s on the perennial theme of the changing role of the librarian, this time looking at the difference that data makes. I’ll be drawing on my experience of working in libraries in the UK, and of course of training information professionals of the future at Humboldt University. Without giving the plot away too much, my perspective is that librarians have always performed many different roles, but it’s our professional training, our self-identification with the profession, and our use of all its experience in matters like ethics and customer service that make us librarians, and thus part of a profession. The “data librarian” will just be one of many different flavours of librarian in the future. I myself am a peculiar “flavour”: a librarian without a library 😉

I’m looking forward to meeting my fellow panellists and to discussing in more detail how data might affect the future role of the librarian. And I hope to see you there!

The Connect-the-Dots Revelation – Revealing Hidden Academic Practice

I’m sharing this post by Alke Groppel-Wegener, who is also the author of the Fishscale of Academicness that I like so much. Please have a think about supporting her book on Kickstarter: if you’ve ever met a visual-thinking student who struggles to write essays, then you will know how helpful this book could be!

Tactile Academia

As you might know, I am currently putting together a workbook for students that collects some of the visual analogies I have been using in my teaching. I have been getting some questions about what is meant by ‘visual analogies’ and how that would translate into a book on academic writing as part of my Kickstarter campaign to raise funds to print some copies (and until the 7th May 2015 you can support this by pledging for your very own copy here). So in order to give people a better idea, here is the introduction (I will add a picture of my layout soon):

Here’s the trouble with writing academic essays at degree level: if you haven’t been to university before, you probably haven’t done it before. You will have written all sorts of things:

  • emails,
  • letters,
  • short stories,
  • social media updates,
  • blog posts,
  • txts,
  • reports
  • and much…

View original post 613 more words

A webinar called “Mastering motivation: the neuroscience of engagement and collaboration”

I watched a recorded webinar over lunch the other day, and it became an extended lunch as I took notes for this blogpost. The speaker is Michael Bungay Stanier and he seems to be a leadership coach or consultant to companies. I found the webinar title interesting: researchers are often sceptical of management training, but advice that is based on scientific research must surely appeal!

I’d have liked more linking and references to neuroscience research but it isn’t really about that. It’s about four factors that can influence our brain’s degree of comfort and thereby increase our engagement and collaboration with each other. Much of the webinar is about how we can take control of those factors, and those tips don’t seem to come from neuroscience but are common sense, and familiar to me from other management training that I’ve taken part in. So it’s good, but not what I expected.

Here is my summary of the webinar:

Neuroscience is the study of how the brain works. It tells us how people’s brains are reacting to questions or tests, and we can draw some conclusions from that.

Neuroscience tells us that the human brain needs to feel that things are safe: we aren’t aware of it at a conscious level but the brain is running a programme in the background that is constantly checking safety, and it will lead you away from risky and dangerous things. So it is important that we make our environment feel safe, to reassure our “lizard” or primitive brains. (Entrepreneurs may be able to review situations and see them as less risky than others.)

Michael identifies four factors that we can influence to make the brain feel safe (a nice acronym: TERA).

Tribe – In the company of others, your brain is asking: “Are you with me or against me?” So we can try to increase the sense of belonging to the same tribe: tips include smiling, laughing together, small talk at a virtual meeting (ask people to share their high point of the last week), and other tactics for achieving rapport and empathy. He suggests defining a common goal, or a common enemy!

Expectations – Your brain is asking: “Do I know what’s happening? Can I predict what will happen?” If it’s really obvious what will happen, then the brain feels more comfortable, but if it’s too comfortable then you will get bored and distracted. Setting an agenda is important for a meeting. Be clear about timing and outcomes when talking about things: e.g. “let’s talk about this for five minutes, and in that time we’ll try to come up with x, y and z”. An agenda doesn’t have to be standard, or set before the meeting. We should start a meeting by setting the agenda together: “What are the key decisions we need to make?” Ask a different question at the start of each meeting, to keep things fresh.

Rank – People feel more comfortable if they have high status, and more threatened if they feel of lower rank. The sense of rank can be influenced.

  • If you are of lower rank and want to increase it: stand up to face the rest of the meeting when speaking. If you have a question and want to seek help: consider asking yourself first. (See below for the way to answer a question with other questions!)
  • If you are of higher rank and want to make others feel more comfortable: talk at the same level as others, and perhaps sit at 90 degrees to them rather than directly opposite. Praise people. Learn and use names. Listen to each other! Let others go first. If someone asks a question of you and you just give your advice or answer, then you increase your own status, but if you respond by saying “that’s a great question: what ideas do you already have?”, then you can increase their status. Then ask them: “what else?” Beware of sounding patronising: tone is important, so be genuinely interested in the other person’s answers.

Autonomy – What are the small decisions that you can get others to make, rather than making them yourself? Increase your reports’ sense of autonomy, and give yourself a break from working so hard! Decide the agenda together.

At the beginning, Michael asks you to think of someone who you are trying to manage, lead or collaborate with, and to apply this theory. What’s very important to you in this setting, and what’s least important? And what is important to the other person? At the end, he asks whether what is important is the same for both parties. 71% of the people who responded to the poll in the live webinar said that no, it wasn’t the same. Being aware of this might make you do things differently. He asks: what two things will you do differently, now that you know this?

My two things:

  • Try not to automatically answer every question that is asked of me.
  • Start meetings a little more slowly: I’m always eager to get stuck in!

Amongst the discussion at the end, there are lots of tips on how to handle lateness at meetings. And another key phrase I picked up on is that sometimes we have to “pick our battles”. So true!

Peer review of journal articles: how good is it really? A librarian evaluates an evaluation system for scholarly information sources.

Peer review is a signifier of quality in the scholarly world: it’s what librarians (like me) teach students to look out for, when evaluating information sources. In this blog post, I explore some of the uses, criticisms and new developments in the arena of scholarly peer reviewing and filtering for quality. My evaluation of this evaluation system is fairly informal, but I’ve provided lots of useful links.

What is peer review?

It varies from one process to the next, but ideally, scholarly journal articles are chosen and polished for publication by a number of other scholars or peers in a process known as peer review, or sometimes called refereeing. Sometimes only two reviewers are used per article, sometimes three are used, plus of course the journal editor and editorial board have roles in shaping what sort of content is accepted in the journal.

Sometimes the process is “double-blind”, in that the reviewers don’t know who the author(s) are, nor do the authors know who the reviewers are; and sometimes it is only “blind”, in that the author(s) don’t know who the reviewers are. In this way, the reviewers can be critical without fearing that they might suffer negative career consequences.

However, one problem with peer review worth noting here (although not explored below) is that peer reviewers’ criticisms can often be brutal, because they are made under the protection of anonymity. I also think that time pressures mean that peer reviewers don’t phrase their thoughts “nicely”, because that simply takes too long and they don’t have such time to invest.

Double-blind reviewing is not always possible: it can be difficult to disguise authors’ identity since the research described in the paper might be known to peers, for example when only one or two labs have the specialist equipment used.

There’s more information on peer review over on the PhD Life blog, which explains what reviewers might be looking for and the possible outcomes of peer review. It also explains some of the other quality-related processes associated with scholarly journal publishing, such as corrections and retractions.

Peer review happens in other contexts too, such as the UK’s REF, which has been heavily criticised as not the “gold standard” that it should be, because reviews of outputs were carried out by only British scholars, and because a paper might be read by only one reviewer in this process.

Another frequent peer review process occurs when research funding bids are reviewed and grants are awarded: panels are often made up of peers. I’ve done this and it’s a valuable experience that helps you to hit the right note in your own future funding applications, but it is also hard work to read all the bids and try to do them all justice.

It sounds good, so why ask how good it is?

Journal publishing is always growing, and peer review is under pressure. A recent scam involving researchers peer reviewing their own papers, and its discovery, is described by the Ottawa Citizen. Every year I read about papers that have been published in spite of journals’ quality filters. The Retraction Watch website highlights stories of published scholarly articles that journals have retracted, i.e. where the research findings described are not reliable.

Here are some of the flaws of the peer review process, in relation to journal articles.

1) It takes a very long time

I sense frustration about long journal turnaround times, and peer review takes up quite a lot of that turnaround time. When you think about how much pressure there is on academics to write and to publish, how they get little recognition and no financial compensation for participating in the peer review process, how important it is to be seen to be the first to publish on something, and how scholarly work can be built upon sooner when it is published more quickly, it is no surprise to me that review times are not so fast.

2) It’s not efficient

If you submit to one journal and are peer reviewed and then rejected, you can then submit to another journal which might also put your article forward for peer review. Some people might call this redundant reviewing (since the work has already been done!) and it does add to the time-lag before research can be published and shared. As a response, there have been attempts to share reviewed papers, such as when your paper is rejected from one journal but it is suggested that you submit to another journal title by the same publisher instead.

3) Peers themselves get no credit or compensation for their work

There is a service called Rubriq that tries to address this criticism, and all of my points above. For a fee, they offer authors a service of having their papers independently reviewed. They track the reviewers’ work in a way that allows reviewers to demonstrate their contribution to the field through reviewing, and they also pay reviewers a fee, although this can be waived by reviewers who can’t earn this way, and it is not thought to be the full value of the input that reviewers supply.

Authors often suggest appropriate reviewers anyway, so if they supply an already reviewed paper to a journal, perhaps the editor might accept the process from this independent company. Rubriq have a network of journals that they work with.

4) Some articles don’t even reach peer review

A recent piece in Nature News summarises findings of research indicating that whilst journals are good at filtering out poor quality articles through peer review, they were not so good at identifying the papers that would become the most highly cited in the long term. 12 out of the 15 most cited papers involved in the study were rejected at first, before finally making it to publication. Perhaps this is because, after rejection by peer review, articles were improved and re-submitted, so the system is working, although I think that the peer reviewers in such instances deserve credit for their contribution. However, this assumes that the more highly cited articles are in fact of higher quality, which is not necessarily the case. (See below for a brief consideration of citations and bibliometrics.)

Rejection after peer review is one scenario. The other is often called “desk rejection”, where an editor chooses which articles are rejected straight away and which are sent to peer review. Editors might be basing their decisions on criteria like relevance to the journal’s readership, or compliance with the journal’s guidelines, and not always on the quality of the research.

The message that I take from this is that authors whose papers are rejected can take heart, and keep improving their paper, and keep trying to get accepted for publication, but in trying to please editors and peer reviewers, we are potentially reinforcing biases.

5) Negative results are not published and not shared

This is another case of biases being perpetuated. There are concerns about the loss to scientific knowledge of negative findings, when a hypothesis was tested but not found to be proven. Such findings rarely make it into publication, because what journal editors and peer reviewers seek to publish is research which makes a high impact on scientific knowledge. And yet, if negative results are not reported then there is a risk that other researchers will explore in the same way and thus waste resources. Also, if research is replicated but not proven, this is potentially valuable to science because it could be that the already published work needs correcting. But the odds are stacked in favour of the original publication (it was already peer reviewed and accepted, after all), such that the replication might not be published. Science needs to be able to accommodate corrections, as the article I’ve linked to explains, and one response has been the emergence of journals of negative results.

What are the alternatives to traditional peer review?

I don’t suppose that my list is comprehensive, but it highlights things that I’ve come across recently and frequently, in this context.

John Ioannidis has written that most published research findings are false, and one answer could be replication. A measure based on replication could be useful to indicate the quality of research. But who wants to reproduce others’ research when all the glory (citations, research funding, stable employment) is in making new discoveries? And it’s not simple to replicate others’ studies: we’re often talking about years of work and investigation, using expensive and sophisticated machinery, and quite often there will be different variables involved, so for some research it can never be quite an exact replication.

Post-publication peer review is another possible way to mark research out as high quality. I really like what F1000 are doing, and they explain more about the different ways that articles can be peer reviewed after having been published. I’m not sure that I want to rely on anonymous comments fields, although of course they can bring concerns to light, and this is only one kind of “peer review”. I use quotation marks because, if the comments are anonymous, how do you know that they are from peers? But if the peer reviewers and their work are attributed, then I find this to be a really interesting way forward, because one of the pressures on peer review is the lack of acknowledgement, and the removal of anonymity is one way to provide it.

I like the concept of articles being recommended into the F1000Prime collection: this is almost like creating a library, except that it’s not a librarian who is a filter but a scholarly community. In fact, many librarians’ selections come from suggestions by scholars anyway, so this is part way to a digital library. (Although I believe quite firmly that it is not a library, not least because access to the recommendations is restricted to paying members.) Anyway, a recommendation from a trusted source is another way to filter for quality. The issue then becomes, which sources do you trust? I blogged recently about recommendation systems that are used in more commercial settings.

I have to mention metrics! I’ll start with bibliometrics, which is usually measuring or scoring that relates to citations between journal articles or papers. For many, this is a controversial measure, because there are many reasons why a paper might be cited, and not all of those reasons mean that the paper itself is of high quality. And indeed, there are many high quality papers which might not be highly cited, because their time has not yet come or because their contribution is to a field in which article publication and citation are not such common practice. The enormous growth in scholarly publication has meant that citation indices might also be criticised for too narrow a coverage.

In general, in the lead up to REF2014, researchers in the UK were keen not to be measured by bibliometrics, preferring to trust in peer review panels as a better way to evaluate their research. Yet citation indices allow you to order your search results by “most highly cited”. Would they do this if there was no interest in it as a measure of quality? Carol Tenopir has done some really interesting work in this area.

If you think that bibliometrics are controversial then altmetrics have provided some of the juiciest criticisms of all, being described as attention metrics. Yes, altmetrics as a “score” can be easily gamed. No, I don’t think that we should take the number of Facebook “likes” (or worse, a score based upon those and/or other such measures which is calculated in a mysterious way) to be an indicator of the quality of someone’s research. But, I think that reactions and responses to a published research article, as tracked by altmetric tools, can be enormously useful to the authors themselves. I’ve written about this already. Altmetrics require appropriate human interpretation: pay the scores too much attention and you will miss the real treasures that other people have also missed.

So how good is peer review, really?

It is a gold standard. It is what publishers do when time and resources allow. But it is not perfect and it is under pressure, and I’m really intrigued and impressed by all the innovative ways to ensure and indicate quality that are being explored. Of all the alternatives that I’ve discussed here, I’m most keen on the notion of open peer review, where it is not anonymous but accredited. This might be post publication or pre publication, but I’m keen that we should be able to follow peer reviewers’ and editors’ work.

A lot of these changes to scholarly publishing in the digital era seem to me to mean that the librarian’s role as a filter of information is pretty much at an end. But our role as a guide to sources and instructor of information literacy is ever more important. I would still teach budding researchers to consider peer reviewed works to be more likely to be high quality, but I would also say that they should apply their subject knowledge when reading the paper, and they should look out for other signs of quality or lack thereof. Peer review (and how rigorous it is) is one of a number of clues, and in that sense, nothing much has changed for librarians teaching information literacy, but we do have some interesting new clues to tell our students to watch out for.

Making research collaborations: building relationships

I’m going to be editing and writing for Piirus in the very near future: it’ll be good to work with blog correspondents like Ian, who wrote this great piece on building research relationships.

— Piirus Blog

In last week’s blog we focused on how to make initial research connections. This week we look at both why and how to expand on these initial contacts.

Why would I want to put time and effort into building these research relationships?

  • These research relationships may open up opportunities such as future employment, extra research funding and additional journal article authorships.
  • They may also lead to an increased presence of your research on an international stage by association.
  • The collaborator and associated groups are more likely to cite your previous journal articles, again raising your standing in the field.
  • By building the relationship knowledge can be transferred in both directions, and so can staff and students!
  • It also gives the opportunity to reflect on your own research strengths and what areas you wish to explore next.

View original post 453 more words

7 Reasons why I like working from home

Of course there are both pluses and minuses and sometimes these are two sides of the same coin, but this post is a quick and simple list of the positives that I’ve found.

  1. Zero time spent commuting: I can get stuck straight in!
  2. It’s mostly peaceful, so I can really concentrate.
  3. Less money spent on food & drink ‘cos I make it for myself.
  4. I get a proper cup of tea: in Germany, it’s impossible for me to get a British cuppa whilst out and about!
  5. It’s flexible: I choose when to work and when to break, throughout the whole day and not just 9-5.
  6. I can accomplish household chores in my breaks. Although I can’t put the washing machine on if I’m expecting a call or I need to concentrate!
  7. I can work in my sloppiest clothes if I want to…