The end of the summer and beginning of fall (for academics) is the busiest time of the year - I'm swamped! A second reason for my silence is that I've been thinking more deeply about some other issues within the academy. Recently I became part of a team appointed to look at our internal assessment activities in the libraries and also to determine the scope, depth, and impact of our organization for the campus administration and beyond. In a word - ROI. We must show it and we must prove it to others.
Now I know a lot of academics think assessment and ROI are dirty words, rife with assumptions that curricula and teaching pedagogy will be micromanaged and misinterpreted by soulless bureaucrats, and perhaps even altered at whim to meet fictitious benchmarks, much like Winston Smith in 1984. I'm not denying this can and does happen. Assessment is a popular buzzword today in the academy, and as long as the recession lasts I suspect it will stay high on the radar screen.
In libraries assessment has generally been a numbing and jumbled mixture of circulation transactions, budget and financial data on products, usability studies with patrons on technology interfaces, products and services, and the occasional collections analysis or technology inventory. In short, a lot of things are measured but not necessarily considered holistically, or even evaluated to determine whether each measure is worth the time and effort to maintain. Coupled with this phenomenon is the requirement that libraries provide statistics to the member organizations to which they belong, such as ARL, ACRL, and the like. In many cases the reporting requirements change yearly and libraries must anticipate how best to answer these questions. Inherent in all of this reporting, of course, is the desire to have the numbers show your library in a positive light: a high rank on the ARL list, high ILL lending to other institutions, or any other measure. So this assignment I've been given is going to be very interesting, because it will go to the heart of many of the things we do and how much of our time, staff, and money we spend on them.
This development dovetails nicely with the recent discussion on blogs and in the science 2.0 community about the future of, purpose of, and access to supplemental journal materials, and the decision by the Journal of Neuroscience to stop accepting journal supplemental materials. Martin Fenner has a great blog post summarizing recent activity on the subject. I agree with him that this decision brings up larger questions, like what is the concept of a scientific paper in the 2.0 (and 3.0) web environment? What should a paper contain? What is most important? What do researchers need to duplicate (or ignore) in their competing and complementary work? How should it be made available? And how will it be preserved, and changes to it tracked, to ensure proper attribution and recognition? This concept of the scientific article, just like the current state of assessment workflow and methodology I describe for libraries, will need to be examined to determine what researchers need most and whether the material is successful in meeting those needs or in generating funding or other support.
This is a big challenge indeed, given the current state of disagreement among science disciplines about the use and deposition of materials in preprint repositories, practices and expectations for openly shared data sets, the lack of well-defined standards for describing data, and the need to archivally preserve materials for future generations. Plus, as we look more broadly, will this new(er) concept of the research paper, assuming there's consensus on the result, work outside the sciences in the social sciences and humanities? It's well known that there's already a difference of opinion today in how each of these areas creates, disseminates, and evaluates peer-reviewed content. Can a new model be created that works for all disciplines?
So in my opinion supplemental data is only the tip of the iceberg, just like the circulation or other operational data collected in a library. There's a lot more to consider, and no easy answers at this stage of the game. Other questions are likely to emerge. What will the basic unit of research become? Will funded research continue to assume greater status than unfunded work? Can collaborative work be assessed fairly and accurately to determine the contribution and effort of each member? These questions will need to be answered.
My library perspective tells me that ROI will not go away, and is likely to take on an even larger role in decision-making as technology becomes more data-driven and academia continues to require direct evidence of meeting specific measures to show the success of programs and curricula. We may all become our own Winston Smiths someday.