Open Access (OA) and all that jazz!

Next week I’m due to visit Humboldt University’s IBI in order to participate in a students’ seminar about Open Access. I’m very much looking forward to it and thought I’d do a bit of reading to keep me up to speed on the OA themes that are trending at the moment.

A recently published article has come to my attention through a LinkedIn group I belong to: “Opening Doors”, by Rob Virkar-Yates.

It describes some of the technical issues that need to be solved in order to support OA, both at the “upstream” end, where articles are processed for publication, and the “downstream” end, where articles are discovered and read by researchers. Here are a few of the issues raised in the article, along with my comments and thoughts!

1) Direct author-publisher transactions are not part of existing submission processes.
I noticed this when working at the University of Warwick Library: we had to chase both authors and publishers to get Gold OA Article Processing Charges (APCs) paid in time to spend the money allocated to Warwick by HEFCE. The authors and departmental administrators found the processes so frustrating and onerous that at least one author decided not to bother with Gold OA.

The article states that “the majority of academic institutions remain unclear as to how to integrate APCs into their workflows” and I’m sure that many institutions are still working it out: classic issues would be whether to handle OA financial transactions centrally or in departments, whether to use an intermediary service (see the recent RIN report on that topic), and how to ensure a fair and effective distribution of the money.

2) “Open Access is driving some exceptionally contentious changes to the peer review process.”
Virkar-Yates gives eLife and F1000 as examples of OA journals that are innovating in peer review by bringing more transparency to the way an article has been reviewed. I’m interested in the possibility that peer review might evolve as access to content is opened up, but if peer review in its traditional guise is working for academia, then it can work for OA journals just as easily as for subscription ones. That seems to be the conclusion of the Open Library of Humanities (OLH) project in a recently published UKSG e-news article. I’m wary of worrying academics that the peer review system is under threat: a switch towards OA is contentious enough in itself, without the added worry that existing, established methods for ensuring quality are about to be abandoned by publishers!

OA publication models do tend to favour bulk publishing, and in a scenario where there are more articles and more journals out there, researchers will need ways to differentiate amongst all the articles, to find the highest quality: they need to do this fairly quickly and efficiently as their time is limited. I think that the existing signs of quality, such as journal impact factor, the prestige of the editor and authors, peer review practices, an established position in the discipline and so on, are likely to remain important for the time being at least. Even PLoS publishes journals that are tailored to disciplines, have lower acceptance rates and achieve higher impact factors than its bulk, cross-disciplinary journal, PLoS One, and OLH seems to be proposing overlay journals on top of its bulk of content.

I can see why Virkar-Yates included this aspect, though: publishers of OA journals may find that there are opportunities to develop other aspects of their journals alongside the move to OA, and if you know that quality filters are important in an OA world, then you might want to find ways to add those in, instead of or as well as traditional peer review.

3) Different formats for content.
The article says “It is now not uncommon for articles to be published with their associated data sets (or links to the data held in OA data repositories), supporting video, animation and other textual resources.” The electronic age has long since allowed publishers to experiment with the format of the journal, or the journal article. Indeed this has been happening with some titles I’ve bookmarked on Diigo, and there have long been disciplinary differences in journal article length, referencing styles and the like: the electronic journal has the capacity to be very different from the traditional print one, but the issue, as Virkar-Yates points out, is how to support the different types of output, file formats and so on, on the same platform.

I wonder if the answer is to offer more specialised types of publication for different disciplines. I’m a big fan of the e-Crystals repository, and I’ve often wondered what we might do with data repositories, because they seem to me to be the most discipline-specific types of output, and most useful when they have metadata schemas designed around a specialist type of data and data need. I believe that, in a world of vast amounts of free content, it will be the way that researchers are enabled to handle that content that makes a product worth paying for, and I think this could require an element of specialisation. It’s an interesting space to watch: in Virkar-Yates’ own explanation of Green OA he points out that “Forty-one percent of all repository usage is through the University of Cambridge’s DSpace@Cambridge platform” and I know that it’s a repository that has long had a policy of taking all kinds of content, across all kinds of disciplines: is this a model for publishers to follow, or should they concentrate on offering something different from repositories?

4) Lack of authentication when access is open
“A signed-in user is a known user, so publishers need to get more consumer-savvy and work out ways to incentivise registration under OA.” Good point, but I think that a lot of publishers have got this covered with their alerting services, saved lists of references and saved search history options that researchers need to sign in for. Joining this sign-in process together with other social media authentication would probably be better for researchers than signing in through institutional logins, and in any case, on many platforms the publishers don’t know much about researchers after authentication, beyond which institution they belong to. But perhaps that is precisely what they need to know, so that they can tell libraries what an invaluable product they are subscribing to!

5) Optimisation for Google by removal of paywalls
Well, this makes sense to me, even though I am a Librarian. I don’t think we’ve been burying our heads in the sand, as the author claims that we have: we’ve simply been trying to point out to researchers that Google doesn’t access all the content that they need, and that there are more powerful ways of searching than the simple keyword that Google uses, when it comes to scholarly content. That doesn’t mean that we would be against Google indexing that scholarly content, if it did it well. In fact, Librarians have also been trying to teach researchers how to get the most out of Google and Google Scholar.
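Indexing scholarly content “well” is partly a metadata problem: Google Scholar’s inclusion guidelines, for instance, ask publishers to embed bibliographic meta tags (in the Highwire Press style) in each article’s HTML page, so that the crawler can reliably extract titles, authors and full-text links. A hedged sketch of what such tags look like, with entirely invented article details:

```html
<!-- Highwire Press-style meta tags, as described in Google Scholar's
     inclusion guidelines. Article details below are invented for
     illustration only. -->
<meta name="citation_title" content="An Example Article on Open Access">
<meta name="citation_author" content="Smith, Jane">
<meta name="citation_publication_date" content="2013/05/21">
<meta name="citation_journal_title" content="Journal of Examples">
<meta name="citation_pdf_url" content="http://example.org/article.pdf">
```

Removing the paywall is only half the job, in other words: the page also has to describe itself in a way a crawler can parse.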

6) Multiple & portable devices
“…all content platforms, and particularly Open Access platforms, need to face up to the very real and pressing technical challenge of how to seamlessly deliver content across multiple untethered devices.” Says it all, for me!

7) Hybrid journals where some content is OA, some is behind a paywall
I’ve never been a fan of hybrid journals as an OA solution, because there isn’t a way for our researchers to know when an article is available to them as an OA one if their institution doesn’t subscribe to that particular journal. One of the things I used to tell researchers to do when they wanted an article was to search Google for an OA version. It’s one of the things that I used to have to check document supply request forms for, and frequently found, even some years ago. Hybrid is better than no OA at all, but as Virkar-Yates points out, there is a real issue around the metadata at article level, to make sure that open access content is in fact accessible!
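One concrete way this article-level metadata problem can be addressed is Crossref’s “access indicators” metadata, which lets a publisher attach a licence URL to an individual article’s DOI record, so that machines can tell an OA article in a hybrid journal apart from its paywalled neighbours. A sketch of what such a deposit fragment might look like (the start date and licence choice here are illustrative, not taken from any real record):

```xml
<!-- Sketch of a Crossref "AccessIndicators" fragment attaching a
     licence to one article's metadata; values are illustrative. -->
<ai:program xmlns:ai="http://www.crossref.org/AccessIndicators.xsd"
            name="AccessIndicators">
  <ai:license_ref start_date="2013-05-01">
    http://creativecommons.org/licenses/by/3.0/
  </ai:license_ref>
</ai:program>
```

With something like this in place, a discovery service can answer “is this particular article open?” without needing to know anything about the journal’s overall business model.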

Virkar-Yates’ article prompts much thought and touches on some very important issues, but there are more that I’d like to consider:

a) Monographs
This topic is suggested in Virkar-Yates’ article, when he discusses output format variety, but monographs seem to me to be a specific issue. OLH are investigating this topic over the next few years, Open Book Publishers have just won an award and the Wellcome Trust have just announced plans to extend their OA policy to include monographs and book chapters, according to this Times Higher Education article, although I note that this extension does not include the CC-BY (Creative Commons Attribution) requirement that exists for journal articles.

b) Copyright
One of the hurdles for OA is to differentiate between access by a reader and access that allows further copying: five years ago, when I was establishing Warwick’s repository, WRAP, it seemed clear to me that the priority was to allow readers to have access. Every item in WRAP had a cover sheet explaining that the copyright remained with the publisher or author and that copying of the repository item was not granted by the repository. Allowing Creative Commons licences to be attached to items was a development that I would have liked to add (and I know that Loughborough University’s repository has always asked for one), but I knew that there were already a lot of hurdles to deposit and that, frankly, a requirement to add a licence that the author had never seen before and quite often did not understand would be one hurdle too many.

I expected that WRAP could overcome it in time and indeed I can see amongst the latest additions to WRAP that some do have cover sheets explaining that a CC licence applies. The RCUK OA policy expects the copyright issue to be addressed, as they have followed the Wellcome Trust in making requirements for not only OA, but also CC licences. A large national body like the RCUK has a way of reaching and influencing researchers that a new repository manager does not have!

c) Platinum OA
This was described in an Information Research article from 2007, and it’s essentially where researchers publish OA journals for themselves. It doesn’t quite fit the remit of Virkar-Yates’ article, in the sense that most researchers won’t be able to do this and be at the cutting edge of technology in publishing practice! But alongside the rise of OA, there has been a rise in the number of OA journal titles (as evidenced by the reported titles listed by the DOAJ, which Virkar-Yates refers to), many of which originate from the research community.

My final thought is that I should read the recent JISC/RLUK survey report, on the attitudes and behaviours of researchers, which apparently reveals their reliance on open access… but that’s too much for one sitting!

Writing a book chapter for an “unbook”.

“Only Connect: Discovery pathways, library explorations and the information adventure” is the title of a book that I’m very pleased to be contributing to. I was thrilled to have my chapter proposal accepted earlier this year, and this week I’ve been getting on with the actual writing, since the deadline is 1 June! Just a final proofread due today, though, so all under control.

My chapter’s working title is “An Educational Researcher’s Journey” and it’s co-authored with a researcher at the University of Warwick, Mairi Ann Cullen, whose story of a literature search is a great example. We’re re-purposing some blog posts that Mairi Ann wrote for Warwick’s ResearcherLife and I’ve extracted 26 interdisciplinary literature searching principles from her journey, which together make a kind of A-Z guide for a literature search journey.

I really do like the metaphor of a journey for literature searching: unless you are a seasoned researcher, you should think of it as a pedestrian journey in a foreign city, where you only just grasp the language and you have no notion of the scale of the city or how far away your destination is. When searching, you can get sidetracked by interesting distractions or find short cuts, and there are different modes of transport available to you (i.e. search platforms).

The more you prepare for your journey, writing down directions and looking at maps and guides, the quicker and more stress-free the journey will be. However, perhaps our minds are better programmed to remember actual journeys than literature searches, so for a literature search you need good record keeping, too…

Of course, this metaphor appeals to me since I’ve been in Berlin for less than two weeks, and am constantly looking at street maps and underground plans and Google Earth pictures of places that I need to navigate, before I set out!

I’m really looking forward to the launch of the book, as it has its own blog where you can read about some of the truly innovative content that is due to be published in autumn this year.

Colloquium at IBI, Humboldt University: “It’s all about Data” with Peter Schirmbacher, Maxi Kindling and Elena Simukovic (Tues 21 May 2013)


This week I attended this event about data management in Berlin, or “digitalen Forschungsdatenmanagement” (digital research data management): it was all in German! I understood more than I thought I would, probably because I recognised a lot of the issues that the presenters raised. And because the speakers used helpful slides.

An interesting aside: at the end of the talks, I waited for the usual applause but was taken by surprise when everyone knocked on the tables instead: apparently, that’s the German way. Also, Humboldt Uni is very close to Museum Island in Berlin and pretty easy to find!

So, this blog post has the highlights that I understood: I’m sure there were more significant aspects, but I still need to learn more German to get the most from such events.

Professor Schirmbacher had a great slide describing the organisation of information as the large bubble containing smaller concepts, including knowledge production, communication and distribution. He spoke about various aspects of those three bubbles and I picked up on the communication one where he mentioned the issues around quality and reputation measurement. Probably because these are issues I’ve always been interested in, or perhaps because “Qualität” is an easy word to pick up!

Here is a lovely sounding German word that I didn’t know before: “Nachnutzung”. It means re-use and it reminded me of the phrase that Simon Hodson opened Warwick’s recent event on data management with: “the first person to re-use your data is your future self”. Evidently it’s an issue in Germany as well, but I didn’t pick up on all the issues. The seminar touched on the legal aspects associated with information, but again, the detail eluded me.

I made a note of the examples of best practice from the UK that Maxi Kindling mentioned: University of Cambridge, University of Glasgow and Imperial College. Other examples from the USA and the rest of Europe were also examined as part of Maxi’s work, and I note in the booklet that I took away that the University of Edinburgh also provided a useful example.

Elena Simukovic presented details of a survey: they used Lime Survey (https://www.limesurvey.org/) to run it, in German and in English. The survey was open for six weeks and there were 490+ respondents.

Results from the survey were broken down by discipline and/or by career stage in the presentation. Researchers were asked 24 questions about things such as:

  • The characteristics of their data gathering, whether they might be observations, experiments, simulations, etc.
  • Media of data held, eg pictures, audio, video, text, etc.
  • Specific types of data, eg sequential measurements, topological information, satellite pictures, etc.
  • How the data is archived/stored.
  • Would their data still be available in 10 years’ time?

There was quite a bit of discussion about the longer term availability of data: apparently a principle has long been in place that data should remain available for more than ten years, but 20% of respondents didn’t know about it, and more professors than lecturers knew of the expectation.

When asked whether they would deposit data, 60% answered positively to some degree, and of the data that they would deposit, 50% would be under 100GB in size. I wanted to ask a follow up question to that one, about how often they would deposit such files: it may well have been asked but I didn’t pick up on it.

Researchers were also asked about the support they might like, and it seems that there’s not so much call for help with data management plans (DMPs) in Germany as for other aspects, but then it seems that there’s less imperative here for researchers to write them. In the UK, the Research Councils examine DMPs at the grant application stage but that’s not the case with German Research Councils. Even so, 125 respondents did tick that option, and the most popular support chosen (275) was for “Speicherplatz”, which I take to mean storage space.

The next stage in the research at Humboldt will be interviews with researchers. It occurred to me that another approach to researching this topic could be an examination of data management plans.

This event was part of a series called BBK, or “Berliner Bibliothekswissenschaftliches Kolloquium” (http://www.ibi.hu-berlin.de/bbk). Gosh, I hope I spelt it right: it’s easy to get lost in these compound German words!

Finally, my confession & thanks: I spoke to Maxi Kindling after the event, in English. My German still has a long way to go! Many thanks to Amber Thomas, now at the University of Warwick, for putting me in touch with Maxi Kindling in the first place, and to Maxi for inviting me along.

A new start, in social media steps.

I wanted to record some of the social media related steps I’ve needed to take, since leaving the University of Warwick:

1) Start a new blog here on WordPress because I was using Warwick’s own blogging platform. I’ve got a lot to learn about how WordPress works and I’m yet to add information “about me” and to learn about its features: as I do learn, I shall blog about it. (Aside: I find it amusing that the spell-checker here can’t recognise “blog” or “blogging” or even “WordPress”!)

2) Edit Twitterfeed to stop picking up on my old blog and start picking up on this one. Twitterfeed is one account that I forgot to update with my personal e-mail address before I left Warwick, but fortunately I remembered the password. Phew!

3) Install the Evernote web clipper onto my Chrome browser on my home computer.

4) Install the Diigo bookmarklet on my home computer.

5) Add Hootsuite as a homepage tab on my browser. (I also need to revise the settings for e-mails and texts from both Twitter and Hootsuite, and to generally clean this account up!)

6) Change my job title status on LinkedIn, and I need to edit the text all about me there, too.

So there is a lot still to do!

Preparing to leave

I finished work on Tuesday, and here is a picture of the cake from my colleagues. Thanks to everyone at Warwick for an amazing send off, lovely presents and all the good wishes in my card.

I have been surprisingly busy, the rest of this week. I have some freelance work already lined up, so have been writing back cover copy for a publisher. I have also been to see a friend of a friend who checks facts in educational books as a freelancer, and who might be able to put some work my way that can be done from sunny Berlin, which is where I am headed. It’s interesting to see how the information world works from the publisher’s side!