Last weekend was BibCamp Hannover, a “BarCamp for librarians and other hackers”. If you understand German, you can read about the sessions, discussions and people in the Blog, Wiki, and FriendFeed Room. And Steffi Suhr wrote a nice post about The most beautiful library in the world in her Nature Network blog.
The BarCamp format worked very well for us – we used traditional pen and paper for session planning:
Me, helping with session planning last Friday. Picture by Martin Kramer.
I had suggested a session about Institutional Bibliographies. We combined this with two related suggestions and had a very interesting session about communication between scientists and librarians. A recurring theme in the discussion was that the scientific workflow seems to be broken at specific steps, for example:
Deposition of post-prints in institutional repositories
Most journals allow authors to make the final version of an accepted manuscript (i.e. after peer review) publicly available via an institutional repository. Most librarians would be happy to help their authors with this step, but unfortunately have no good tools to track the papers published at their institution, and never see the final version of an accepted manuscript (many journals don't allow publisher-generated PDFs in repositories).
Re-submission of rejected manuscripts
Rejected manuscripts are usually resubmitted to a different journal (most manuscripts will eventually be published somewhere). Unfortunately the next journal will most likely use a different manuscript format, different citation styles and a different manuscript submission system. Some journals provide a manuscript transfer service, but the comments made in the peer review are usually not available to the editors and reviewers of the next journal.
Connecting scientists to publications
This is a topic that I have written about before (ORCID or how to build a unique identifier for scientists in 10 easy steps). Until we have unique author identifiers, it is difficult or impossible to reliably find the papers published by a particular person (a good number of papers by Fenner M in PubMed are not written by me).
Distributing papers for a journal club
Another topic I have written about before (Recipe: Distributing papers for a journal club). The problem is not only that email is really bad for sending PDF files to a group of people, but that most journals don't allow redistribution of their content, even within an institution with an institutional subscription.
Counting citations

The number of times a paper is cited is often used as a proxy for the importance of the science in that paper. Citation counts (e.g. in the form of the Impact Factor or H-Index) are often used to evaluate researchers. There are many problems with this approach, because citation counts are influenced by many other factors (e.g. time, popularity of the subject area, self-citations). But the biggest problem is the fact that there is no general agreement on how to count citations, and no database that makes this information freely available.
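The H-Index mentioned above at least has a simple, well-defined formula: the largest number h such that a researcher has h papers with at least h citations each. As a minimal sketch (the citation counts here are made-up example numbers, not real data):

```python
def h_index(citation_counts):
    """Return the h-index: the largest h such that there are
    at least h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4
print(h_index([10, 8, 5, 4, 3]))  # → 4
```

Of course, the calculation is only as good as its input: two databases that disagree on the citation counts will produce two different H-Indexes for the same researcher, which is exactly the problem described above.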
It might make sense to make a list of these broken steps, and then estimate the effort that would be required to fix each of them. Some broken steps are more important than others, and some fixes are easier than others, so this exercise would yield a good list of action points.