Archive for: February, 2010

Tidbits, 26 February 2010

Feb 26 2010 Published under Tidbits

It's Friday! Snack on some tidbits.

As always, tag a Delicious link with "trogool" or leave a comment here if you have something tidbit-worthy. Thanks!

No responses yet

OASPA: act now or lose credibility forever

Feb 26 2010 Published under Open Access

So the backstory of the truly horrific murders at the University of Alabama at Huntsville has taken an open-access turn: the perpetrator (not being a journalist, I don't think I need to say "alleged") got a rather dubious-looking article published in an open-access journal.

Further investigation into the journal only heightens concern; while we're not quite talking about Bentham or SJI here, we're definitely in that ballpark. I won't rehash details, because Richard Poynder has it covered with admirably succinct directness. I believe what he's recounted, and I agree with his analysis in its entirety.

Let's talk for a moment about the credibility of open-access journal publishing in general, and OASPA in particular.

As Peter Suber is fond of pointing out, open-access and toll-access journals are more similar than they are different. Both need to cover costs some way or other. Both apply peer review or don't, in roughly equal proportions. A subset of both charges author fees (yet somehow the "vanity publishing" brush only seems to tar OA, go figure). Both struggle to establish themselves when new. Both rely on free authoring, reviewing, editorial, and sometimes production labor. Et cetera.

It should seem natural, then, that both open-access and toll-access journals contain bad seeds, suffer scandal. For every Bentham, there's an Australasian Journal; for every SJI, there's an El Naschie. (Well, actually, I would guess there are quite a few more Australasians than Benthams lurking out there, because the toll-access slice of the journal pie is still so much larger, but you take my point.)

It seems to me there's one important asymmetry in a journal scandal or failure, however: its transparency. A toll-access journal belonging to a large publisher can disappear practically without a trace, especially if most hints of its existence are limited to membership in yet another gigantic Big Deal bundle. Its editors quietly sidle away; its web page quietly vanishes into pixeldust. Nobody is particularly tarnished by the failure (not that anyone necessarily should be, of course).

It's not quite that simple for an open-access journal. Most of the ones I know of that have folded never had a coherent shutdown plan. The website can't (or at least shouldn't) just vanish unless the actual content has been handed off, so it just sits there on the open web and moulders, its failure obvious to anyone who borrows a bit of Google's or the Internet Archive's all-seeing eyes.

In my more cynical moments, I wonder whether the DOAJ's journal-archiving plan was developed partly in response to this problem, in hopes of being able to shut down dead OA journals halfway gracefully. If it was, good on 'em.

Toll-access journals also have an easier time concealing scandal, not least because they are not subject to the gaze of Google's pitiless eye. Sure, once in a while they get found out and some corporate type has a few sleepless nights, but how much dross slips through the system because the system is too big and too closed to monitor after the fact? You could ask China, I suppose.

(If your knee-jerk answer to the previous paragraph is "peer review!" please take your dunce cap and go sit in the corner. Peer review is a leaky heuristic at best. It fails, often, and that's when it's done in good faith to begin with! We should in fact expect it to fail—and, given the stakes, to be gamed. Necessary? Maybe. Sufficient? Not by a long shot.)

It's not so simple for open-access journals, for reasons I hope are obvious from what I just said about toll-access ones. It doesn't help that (please pardon my bluntness) a fair few OA journals, particularly of the shoestring-budget variety, haven't really thought through such ugly scenarios as plagiarism, fraud, innocent but nonetheless major errors, and suchlike phenomena requiring article retractions. (Neither have institutional repositories. They should. I have, though I'm still stuck on how best to find out that something in the IR I run needs retraction.)

Moreover, because open-access journals are new and academia is conservative, OA-journal scandals are more likely to run into scrutiny and opprobrium. Unlike the neverending stream of FUD coming from the for-profit toll-access journal industry and its quislings, this isn't a bad thing; in fact, it's a good one! We want bad actors to be discovered and removed from the system! It's only bad insofar as it unfairly colors academia's perception of open-access journals generally—which it unfortunately does. (I dimly sense a pattern emerging in public academic discourse of PLoS/BMC/Hindawi as "okay OA" versus "all that other vanity-publishing dreck." It's not a fair or an accurate characterization, but I keep finding traces of it.)

Now we come to OASPA. When OASPA formed, I speculated about whether it would take on basic journal quality-control duties. I hoped it would, because the more OA-journal scandals can be prevented and punished, the better for OA journals' credibility. Indeed, I wondered whether transparency-led freedom from scandal could turn out to be good for open access:

I think an OASPA certification program represents a tremendous opportunity for the OA community. Gold OA is still small. It’s much easier to put meaningful quality regulation in place over a small, emerging, prestige-hungry industry. If gold OA manages to do that, then it suddenly has another competitive advantage over toll-access, which hasn’t done so and (given its extent and decentralization) very likely can’t.

Later, OASPA said outright that making decisions about quality was indeed within its scope: "OASPA aims to become the stamp of quality for open access publishing." I rejoiced.

I am not rejoicing now.

Here's the thing, OASPA: being a stamp of quality means stamping out bad practices where and when you find them. Yes, even when doing so is awkward and uncomfortable. In the case of Dove Medical Press and the International Journal of General Medicine, you have conspicuously failed to do that. Your comment to Richard Poynder regarding Dr. Bishop misses the point entirely: nobody is asking you to opine about Dr. Bishop or her record; they're asking you to investigate the practices of Dove Medical Press because of what looks on its face like an extremely dubious (and now notorious) publishing decision.

You should have jumped on that, tragic circumstances be damned. Because you didn't, your "stamp of quality" has been tarnished. It's even worse that Dove is an OASPA member; I certainly hope you're not cutting sweetheart deals for membership fees, but I'm afraid that's how it looks from my worm's-eye viewpoint. And because you've planted your flag in stamp-of-quality territory, your first-mover advantage means you will be hard to supplant if you go rogue—not to mention that if you do turn out to be corrupt, OA suffers a major and possibly unhealable black eye, because you're all the stamp-of-quality heuristic there is.

This gaffe can be recovered from, OASPA, but I urge you to act fast. Apologize. Own the mistake. Start an investigation of Dove now, explaining clearly and publicly what you're looking for and what you'll do should you find that Dove has erred. It's probably not too damaging that you don't yet have a standard procedure for such investigations, given how young you are, but another thing you need to say clearly and publicly—and with a due date—is that such procedures are under active development. A screening procedure for OASPA applicants is a good idea as well.

Referring screenings and investigations to a well-chosen quorum of disinterested but appropriately critical third parties—dare I suggest "academic librarians"?—might work out well for you. Please consider it.

From the bottom of my heart, OASPA, I beg you: do not compound this error. We OA advocates need a responsible steward and monitor much too badly for you to go and squander all the initial goodwill you garnered.

This seems an opportune time to remind people of Book of Trogool's comment policy. I will enforce it if I need to. I'd rather not need to, please.

8 responses so far


Feb 23 2010 Published under Metablogging

I've been interviewed by Bora Zivkovic, apropos of many things. Click over if you've a mind.

2 responses so far

Librarians: down with the impact factor!

Feb 22 2010 Published under Praxis

The journal impact factor is a sham and a crock and a delusion, let's just take that as read. (If you don't care to take that as read, which is a healthy and sane attitude—take no one's word as gospel, especially not mine!—start here or perhaps here and keep going.) Using it to judge individual researchers' output, never mind the researchers themselves, verges on the criminal, is my strong belief. I'm not against heuristics, but some heuristics are plain broken, and the journal impact factor is one of those.

So it really hurts my heart to see librarians giving this flawed number credence. Librarians! We who call ourselves information experts!

I won't link-and-shame, despite temptation. I'll just say that in the last week, I've come across one library blog posting a list of "here are high-impact journals in X discipline" and another library doing a workshop on "use the impact factor to choose where you publish!"

We are better and wiser than this. I hope we are better and wiser than this.

Of course we can't ignore the impact factor. That's a very long way from saying that we ought to celebrate, support, or draw positive attention to it. When we mention it, we should wrinkle our librarianly noses in disdain. When we teach workshops on it, our attitude should be "look, this system is bad and wrong and I'll happily show you why, but we're stuck with it until everyone wises up, so here's how to game it as best you can." When we look over serials subscriptions, we should frankly ignore it.

It doesn't hurt that by breaking the back of the impact factor, we're reducing the influence of many of the very journals whose inflated prices are breaking our backs.

We have authority and power. This is one very serious situation in which we owe it to our researchers and ourselves to use it wisely.

5 responses so far

A shift in focus

Feb 19 2010 Published under Tactics

I've altered the tagline on this blog slightly, to reflect where it seems to be going. (I am not in control here; I am merely the author-function! Sorry, sorry, lit-crit joke.)

At the same time, I've been thinking a lot about library collections, what's in them and how it gets there. (I'm teaching a graduate course in collection development at the moment, which has of course bent my thoughts in that direction.)

Here's where I'm sitting, and my commenters (who are smarter than I am) are welcome to challenge me. When collection development came into its own in academic libraries, forty years or so ago, it became a key locus of competition among libraries. The library that dies with the most books wins!

Of course, it's not quite that simple. Individual collections of particular excellence count as well; research libraries do have their specialties. Special collections is its own locus of competition, and so are various forms of digital collection-building. Still, when it comes down to it, the measurement we reach for most often to characterize ourselves is collection size. (The second most common measurement is probably collection budget, which itself is a proxy for size.)

Some notable problems have arisen with this siloed collection method. Perhaps the largest is that it's no longer affordable to build a sufficient collection, never mind a specialized one, on an individual-institution basis, what with the serials crisis and the immense growth in publications of all kinds.

Another problem coming to light is the considerable cross-institutional overlap in collections. It turns out that when you leave a lot of individually smart people to prioritize collections in a particular area on limited budgets, they mostly collect the same things. That leaves a substantial pool of material held in such low quantities that a natural disaster, or an ordinary in-the-course-of-business book loss, can mean hardly any copies (or even none) are left. This is, of course, a threat to the scholarly record. What isn't collected by research libraries with a serious commitment to preservation often doesn't survive.

Rare-materials surveys are ongoing, so we don't understand the full scope of the problem yet, but already it's becoming clear that quite a few print materials, some too fragile to be saved by such initiatives as Google Books, are held in so few libraries that their survival is in serious doubt. Moreover, print runs of scholarly monographs continue to decline, and even today's meagre runs don't get bought. I once heard a literature scholar say she was pleased that her monograph, representing three years' work, would receive a 250-copy print run from its press.

Not a few blogs have more readers than that… but I digress. The problem from a scholarly perspective is that this monograph is in serious danger of permanent, irrevocable destruction because it will likely not be collected, held, and preserved by enough libraries.

All this bother, ultimately deriving from an emphasis on local collection practices: collect from the world for your local patrons, and if that myopia causes systemic problems, too bad.

Well, what's the alternative, then?

Shortly after I started this post, Barbara Fister's lovely, fiery essay on Liberation Bibliography came out. She has since published another suggesting that libraries need to look up from their locales, acknowledge their part in the current difficulties, and move decisively toward open access. Unsurprisingly, I completely agree.

What Barbara envisions, I think, is a shift in the focus of collection development. Rather than collecting from the wider world for the local patron base, collection developers will collect from the local patron base, everything from datasets to postprints, in order to make it all available to the world, in the short- and the long-term.

Collection developers are now demanding of me, "But what about the winnowing function of collection development? If we don't limit our collection by our well-honed instincts about what our particular patron base needs and can best use, they'll be swamped!"

To which I respond, "How ya gonna keep 'em down on the farm now that they've seen Paree?" Like it or not, the information-discovery universe has gone global, Internet-wide. Filtering is still important, heaven knows, but we can't do it via collection development any longer. We'll have to find other ways.

And how will we judge our own quality, if the easy numbers are taken away from us? I believe that metrics will shift from what we buy to what we contribute to the commons. HathiTrust is a good beginning, but only a beginning; there's much more we can do. Under such a regime, supporting DOAJ and SCOAP3 and PLoS and arXiv isn't a dubious burden, threatening precious collection-development dollars; it's the heart of the mission, the most important arbiter of research-library quality. Under such a regime, the institutional repository isn't a careless afterthought; it's where the library magnifies the institution's value.

This shift won't happen overnight. It's not happening at all in most libraries, as best I can tell. Perhaps it won't.

Still, I think it should. I'd add "Liberation Bibliographer" to my business card, if I dared.

No responses yet

Tidbits, 18 February 2010

Feb 18 2010 Published under Tidbits

I'm home sick today, and not precisely looking forward to giving my class tonight because I really do feel wiped out. Fortunately, tidbits posts are easy…

As always, leave a comment here or tag something "trogool" on Delicious if you think it belongs in a tidbits post.

No responses yet

Turf... wars?

Feb 16 2010 Published under Tactics

I have a very lengthy post in pickle that is taking me some time to work through. Forgive me; sometimes that's what blogging is for, though it's tough on the posting rate.

In the meantime, a small thought about improving interaction patterns between scientists and librarians, something I still very much think is necessary for both groups.

Cameron Neylon notes in his quick review of the new FriendFeed-based ScienceFeed that the name is not ideal:

Finally there is the problem of the name. I was very careful at the top of this post to be inclusive in the scope of people who I think can benefit from Friendfeed. One of the great strengths of Friendfeed is that it has promoted conversations across boundaries that are traditionally very hard to bridge. The ongoing collision between the library and scientific communities on Friendfeed may rank one day as its most important achievement, at least in the research space. I wonder whether the conversations that have sparked there would have happened at all without the open scope that allowed communities to form without prejudice as to where they came from and then to find each other and mingle. There is nothing in ScienceFeed that precludes anyone from joining as far as I can see, but the name is potentially exclusionary, and I think unfortunate.

I wish I disagreed with this… but I don't. I myself would feel a bit leery of signing onto something called ScienceFeed; it took me some time before I went to claim myself a ResearcherID, even! (Yes, part of that was general dislike of Thomson Reuters, but what with ORCID waiting in the wings I rather felt I needed to sign up finally.) It's not that I'm terribly afraid of scientists, because I'm not; I wouldn't blog here if I were. It's that I try (though I sometimes fail) to be a moderately polite person, and gatecrashing somebody else's party isn't polite.

Something called "ScienceFeed" feels like somebody else's turf.

Two lessons from that, I think, or perhaps three. One is Cameron's: to foster cross-campus, inter-institutional, and interdisciplinary connections and collaborations, it helps to find or make neutral turf. Making neutral turf attractive is not necessarily easy, because neutral turf means everybody has to leave a little of their comfort zone behind, and nobody likes to do that. Still, FriendFeed shows it's at least possible.

(Zotero? Mendeley? Are you listening? I think you are ideal candidates for neutral-turf social/professional encounters.)

Another lesson, predictably, is that we librarians need to get over our fear of gatecrashing research gatherings, from labs to conferences to online venues. Like it or not, we're on the low end of this particular power continuum; that means we're the ones who have to move into their spaces to interact with them, because they don't need to touch ours and therefore won't.

A third lesson, perhaps slightly subtler, is that it may well be easier to gatecrash online venues—Facebook and LinkedIn groups, FriendFeed, Twitter hashtags—than in-person ones, at least to start. If I'm right, it means that science librarians who don't play those "silly Web 2.0 games" (yes, that's a direct quote, and no, I won't identify its originator) are harming not only themselves but potentially our profession.

I tell you what, though, online gatecrashing surely seems to work as an outreach tactic. I'm not sure Cameron would have been thinking about librarians before FriendFeed. Now he knows Christina and John and me, among others—and he's telling his peers about us.

5 responses so far

Academic samizdat

Feb 09 2010 Published under Open Access

Since early days indeed, it's been possible to bypass journal publishers and libraries in a quest for a particular article by going directly to the author. Some publishers have even facilitated this limited variety of samizdat by offering authors a few ready-made offprints.

I've even had publishers give me e-offprints (which to me, preprint disseminator that I am, just feels weird). The repository software ePrints can place an "ask the author" button on items that are withheld from public view for whatever reason. As best I can tell, just about everyone involved in scholarly communication thinks this form of mildly extra-legal distribution is all right.

Times are changing, however, and academic samizdat is taking on new forms. Exactly what the responses will be remains to be seen, but early indicators exist.

Consider website distribution of typeset publisher PDFs, or clumsy scans of typeset articles. The practice is legion (see Wren 2005 for details), and as any competent institutional-repository manager will tell you, it is often not legal. If you, researcher, signed over your copyright to your publisher, and your publisher did not in turn grant back a license to use the work in this fashion, you have violated the publisher's copyright, and the publisher can sue you for it.

Yes. Really. Planning to read your next publishing agreement now? Good. You gladden this librarian's wizened heart.

Now, suing the producers of your stock in trade is fairly suicidal business practice. Publishers aren't stupid; they know this. They've turned a blind eye to this practice, treating it as an extension of ordinary academic samizdat… for the most part. If you check SHERPA/RoMEO, you find fairly quickly that publishers restrict other forms of samizdat that they find more threatening to their businesses: reposting in disciplinary repositories, notably, and you'll even find a few who object to institutional repositories.

There's a troublesome sign, however: the lawsuit by Oxford University Press, Cambridge University Press, and SAGE Publications against Georgia State University. This was reported in the library press as being about practices in library-managed electronic reserves, but as I understand the matter, the presses also objected to articles posted on faculty websites and in university courseware. Twists and turns abound (this seems to be the latest situation), but the case hasn't yet been settled, so there could be a ruling that disallows faculty posting of articles they authored on their own websites.

That would be an interesting day indeed.

Of course, the articles posted might not have been the faculty members' own articles, which opens another can of worms. As access diminishes, both through library-budget impasses and through (perceived or actual) difficulty in getting at even what a library has paid for, samizdat flourishes. Through web bulletin boards, through "journal clubs," through chains of friends across institutions, even through designated sites, samizdat flourishes. As journal-publisher profits are squeezed, this practice will no doubt invite scrutiny.

What will the publishers' response be? Again, suing faculty directly is distinctly unwise, so I don't expect an RIAA-style rathunt with its associated individual lawsuits. (One or two individual suits against particularly egregious examples may turn up pour encourager les autres.) Intermediaries between publisher, author, and reader, however, may be fair game; that's how Georgia State ended up in the crosshairs. I wouldn't want to be the site I linked in the paragraph up above, either.

Some of this samizdat activity is happening on social-networking sites (I won't say which; I'll just note that I've personally seen it). They may become lawsuit targets as well, though the really big ones may well be immune.

Another possibly litigable intermediary is the humble citation manager, which is managing entire PDF libraries these days. Zotero implemented its online sharing very carefully indeed: citations, links, and DOIs are shareable, but PDFs are not. They should stay out of trouble. Mendeley, however, appears to have some direct file-sharing features, and may therefore be vulnerable.

And finally we have just plain stupidity, such as that displayed by the people behind OpenThesis. There's been quite a dustup on the ETD-L list over their practice of harvesting thesis metadata and associated content files through OAI-PMH and some custom programming (because OAI-PMH does not enable file exchange) for display and dissemination on their own website.
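
To make the protocol point concrete, here is a rough sketch of what an OAI-PMH ListRecords response actually hands over: descriptive metadata, not content files. The endpoint URL, the sample record, and the helper function below are all invented for illustration; only the protocol structure and XML namespaces are the real ones from the OAI-PMH and Dublin Core specifications.

```python
# A real harvester would fetch XML like this from a repository endpoint,
# e.g. https://example.edu/oai?verb=ListRecords&metadataPrefix=oai_dc
# (hypothetical URL). Here we just parse a canned response.
import xml.etree.ElementTree as ET

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <oai_dc:dc xmlns:oai_dc="http://www.openarchives.org/OAI/2.0/oai_dc/"
                   xmlns:dc="http://purl.org/dc/elements/1.1/">
          <dc:title>A Hypothetical Dissertation</dc:title>
          <dc:identifier>http://example.edu/etd/123</dc:identifier>
        </oai_dc:dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

NS = {
    "oai": "http://www.openarchives.org/OAI/2.0/",
    "dc": "http://purl.org/dc/elements/1.1/",
}

def list_titles_and_links(xml_text):
    """Return (title, identifier) pairs from a ListRecords response."""
    root = ET.fromstring(xml_text)
    out = []
    for record in root.findall(".//oai:record", NS):
        title = record.findtext(".//dc:title", namespaces=NS)
        ident = record.findtext(".//dc:identifier", namespaces=NS)
        out.append((title, ident))
    return out

print(list_titles_and_links(SAMPLE_RESPONSE))
```

Note that there is no PDF anywhere in the response; to redistribute the actual files, a harvester has to follow those identifier links and download content separately. That separate step is the "custom programming" at issue.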

Let me count the ways in which this is idiocy:

  1. Dissertations are copyrighted to their authors; many of these authors take a lively interest in their dissemination. Anyone who's started an electronic thesis and dissertation program can attest to that. Journal-article authors (though not publishers) are laissez-faire by comparison. In fact, it was a thesis author who raised questions about OpenThesis on ETD-L.
  2. Dissertations made available through institutional-library websites invariably involve some sort of license granted by the author to the institution. We're careful about that kind of thing in libraries. These licenses are not transitive; third parties do not have the same license that the library has by virtue of downloading a copy of the dissertation from the library's site.
  3. Because librarians have trodden many an eggshell to achieve viable ETD programs, elephant-footed behavior like that of OpenThesis threatens to tar us with a very bad brush. This perhaps helps explain the very cool reception OpenThesis received on ETD-L. Pro tip: angering librarians is bad business for a would-be content purveyor.
  4. When challenged, OpenThesis claimed (my paraphrase) that anything they found on the open web was fair game, so they meant to go on doing what they were doing. Do these people even have an IP lawyer on retainer? In the US, at least, wilful copyright infringement invites rather heavier penalties than the innocent kind.

But these are theses, not articles, Dorothea! What's the relevance to academic samizdat of articles?

I invite you all to consider Scientific Commons. Now there is an outfit ripe for a lawsuit. Interesting times...


Wren, J. D. (2005). Open access and openly accessible: a study of scientific publications shared via the internet. BMJ, 330, 1128.

2 responses so far

Looking somewhere other than under the streetlamp

Feb 02 2010 Published under Open Access, Tactics

Perhaps shockingly, I don't plan to so much as try to wade through all seven-hundred-odd pages of this report on scholarly-publishing practices. It's thorough, it's well-documented, it's decently written… and based on the executive summary (itself weighing in at a hefty 20 pages), it won't tell me a thing I don't already know.

Academia is conservative. Academia thinks its current scholarly-production system is just fine and dandy, thank you. Academia has a love-hate relationship with peer review. Academia wants to outsource its tenure and promotion decisions any way that is convenient and looks just barely irreproachable enough.

None of this is news. It's dispiriting, but it's not news.

I invite you, however, to take a look at the survey population: the sample was drawn from "45, mostly elite, research institutions" (p. i). Just on the face of it—if we're looking for change in scholarly communication, especially disruptive change, elite researchers in well-established disciplines at elite institutions are the wrong place to look.

Of course such researchers don't want the hill disturbed—they're king of it, aren't they? They're the people for whom "sustaining innovation" is designed, in Clayton Christensen's parlance. They're the very tippy-top of the academic prestige market; they are the last to notice, much less use, a disruptive innovation.

For similar reasons, we don't want to look at the big, established journals and publishers for disruptive innovation. Sustaining innovation, yes, plenty of it. But once again, the king of the hill doesn't allow mining underneath him when he can prevent it.

"But there's better light over here under the streetlamp!" goes the old joke. So where might we look instead, despite the darkness? Well, I have some ideas.

Interdisciplinary, inchoate novelties like the "digital humanities." Young, impecunious disciplines. New journals—what is the proportion of OA to TA journal launches these days, and how is that ratio changing? Disciplines where data need a place to live and thrive. Disruptive innovations start where there's a need that the existing market can't or simply won't address.

That's where the action is likely to be—and to be blunt, most of the reason I'm not wading through that Berkeley report is that it doesn't tell me a thing about where I believe the action is.

Still, there are some good bits about data in there, so the executive summary is worth a skim.

One response so far

Tidbits, 1 February 2010

Feb 01 2010 Published under Tidbits

Happy Groundhog's Day Eve! Or something.

If you've got a link that belongs in a Trogool tidbits roundup, drop me a comment or tag it "trogool" on Delicious. Thanks!

No responses yet