This year’s been a total bummer. Hopefully 2007 will be better.
Blogging resumes the week of 1/8/07.
There has been a great deal of discussion of late about the impact of self-archiving on library journal subscriptions. Obviously, this is of great interest to journal publishers, who do not want to wake up one morning, rub the sleep from their eyes, and find out over their first cup of coffee at work that libraries have canceled subscriptions en masse because a "tipping point" has been reached. Likewise, open access advocates do not want journal publishers to panic at the prospect of cancellations and try to turn back the clock on liberal self-archiving policies. So, this is not a scenario that anyone wants, except those who would like to simply scrap the existing journal publishing system and start over with a digital tabula rasa.
So, deep breath: Is the end near?
This question hinges on another: Will libraries accept any substitute for a journal that does not provide access to the full, edited, and peer-reviewed contents of that journal?
If the answer is "yes," publishers better get out their survival kits and hunker down for the digital nuclear winter or else change business practices to embrace the new reality. Attempts to fight back by rolling back the clock may just make the situation worse: the genie is out of the bottle.
If the answer is "no," preprints pose no threat, but postprints may under some difficult to attain circumstances.
It is unlikely that a critical mass of author-created postprints (i.e., the author makes the preprint look like the postprint) will ever emerge. Authors would have to be extremely motivated for this to occur. If you don’t believe me, take a Word file that you submitted to a publisher and make it look exactly like the published article (don’t forget the pagination, because that might be a sticking point for libraries). That leaves publisher postprints (generally PDF files).
For the worst to happen, every author of every paper published in a journal would have to self-archive the final publisher PDF file (or the publishers themselves would have to do it for the authors under mandates).
But would that be enough? Wouldn’t the permanence and stability of the digital repositories housing these postprints be of significant concern to libraries? If such repositories could not be trusted, then libraries would have to attempt to archive the postprints in question themselves; however, since postprints are not by default under copyright terms that would allow this to happen (e.g., they are not under Creative Commons Licenses), libraries may be barred from doing so. There are other issues as well: journal and issue browsing capabilities, the value-added services of indexing and abstracting services, and so on. For now, let’s wave our hands briskly and say that these are all tractable issues.
If the above problems were overcome, a significant one remains: publishers add value in many ways to scholarly articles. Would libraries let the existing system of journal publishing collapse because of self-archiving without a viable substitute for these value-added functions being in place?
There have been proposals for and experiments with overlay journals for some time, as well as other ideas for new quality control strategies, but, to date, none have caught fire. Old-fashioned peer review, copy editing and fact checking, and publisher-based journal design and production still reign, even among the vast majority of e-journals that are not published by conventional publishers. In the Internet age, nothing technological stops tens of thousands of new e-journals using open source journal management software from blooming, but they haven’t so far, have they? Rather, if you use a liberal definition of open access, there are about 2,500 OA journals—a significant achievement; however, there are questions about the longevity of such journals if they are published by small, non-conventional publishers such as groups of scholars (e.g., see "Free Electronic Refereed Journals: Getting Past the Arc of Enthusiasm"). Let’s face it—producing a journal is a lot of work, even a small journal that publishes fewer than a hundred papers a year.
Bottom line: a perfect storm is not impossible, but it is unlikely.
The Public Library of Science has released a beta version of its innovative PLoS ONE journal.
Why innovative? First, it’s a multidisciplinary scientific journal, with published articles covering subjects that range from Biochemistry to Virology. Second, it’s a participative journal that allows registered users to annotate and initiate discussions about articles. Open commentary and peer review have been previously implemented in some e-journals (e.g., see JIME: An Interactive Journal for Interactive Media), but PLoS ONE is the most visible of these efforts, and, given PLoS’s reputation for excellence, it lends credibility to a concept that has yet to catch fire in the journal publishing world. A nice feature is the “Most Annotated” tab on the home page, which highlights articles that have garnered reader commentary. Third, it’s an open access journal in the full sense of the term, with all articles under the least restrictive Creative Commons license, the Creative Commons Attribution License.
The beta site is a bit slow, probably due to significant interest, so expect some initial browsing delays.
Congratulations to PLoS on PLoS ONE. It’s a journal worth keeping an eye on.
The Electronic Publishing Working Group of the Deutsche Initiative für Netzwerkinformation (DINI) has released an English draft of its DINI-Certificate Document and Publication Services 2007.
It outlines criteria for repository author support; indexing; legal aspects; long-term availability; logs and statistics; policies; security, authenticity and data integrity; and service visibility. It also provides examples.
CrossRef has made a DOI finding tool publicly available. It’s called Simple-Text Query. You can get the details in Barbara Quint’s article "Linking Up Bibliographies: DOI Harvesting Tool Launched by CrossRef."
What caught my eye in Quint’s article was this: "Users can enter whole bibliographies with citations in almost any bibliographic format and receive back the matching Digital Object Identifiers (DOIs) for these references to insert into their final bibliographies."
Well, not exactly. I cut and pasted just the "9 Repositories, E-Prints, and OAI" section of the Scholarly Electronic Publishing Bibliography into Simple-Text Query. Result: an error message. I had exceeded the 15,360-character limit. So, suggestion one: put the limit on the Simple-Text Query page.
So then I counted out 15,360 characters of the section and pasted that. Just kidding. I pasted the first six references. The results?
Alexander, Martha Latika, and J. N. Gautam. “Institutional Repositories for Scholarly Communication: Indian Initiatives.” Serials: The Journal for the Serials Community 19, no. 3 (2006): 195-201.
No doi match found

Allard, Suzie, Thura R. Mack, and Melanie Feltner-Reichert. “The Librarian’s Role in Institutional Repositories: A Content Analysis of the Literature.” Reference Services Review 33, no. 3 (2005): 325-336.
doi:10.1108/00907320510611357 (http://dx.doi.org/10.1108/00907320510611357)

Allen, James. “Interdisciplinary Differences in Attitudes towards Deposit in Institutional Repositories.” Manchester Metropolitan University, 2005. http://eprints.rclis.org/archive/00005180/
Reference not parsed

Allinson, Julie, and Roddy MacLeod. “Building an Information Infrastructure in the UK.” Research Information (October/November 2006). http://www.researchinformation.info/rioctnov06digital.html
Reference not parsed

Anderson, Greg, Rebecca Lasher, and Vicky Reich. “The Computer Science Technical Report (CS-TR) Project: A Pioneering Digital Library Project Viewed from a Library Perspective.” The Public-Access Computer Systems Review 7, no. 2 (1996): 6-26. http://epress.lib.uh.edu/pr/v7/n2/ande7n2.html
Reference not parsed

Andreoni, Antonella, Maria Bruna Baldacci, Stefania Biagioni, Carlo Carlesi, Donatella Castelli, Pasquale Pagano, Carol Peters, and Serena Pisani. “The ERCIM Technical Reference Digital Library: Meeting the Requirements of a European Community within an International Federation.” D-Lib Magazine 5 (December 1999). http://www.dlib.org/dlib/december99/peters/12peters.html
Reference not parsed
Hmmm. According to Quint’s article:
I asked Brand if CrossRef could reach open access material. She assured me it could, but it clearly did not give the free and sometimes underdefined material any preference.
Looks like the open access capabilities may need some fine tuning. D-Lib Magazine and The Public-Access Computer Systems Review are not exactly obscure e-journals. Since my references are formatted in the Chicago style by EndNote, I don’t think that the reference format is the issue. In fact, Quint’s article says: "The Simple-Text Query can retrieve DOIs for journal articles, books, and chapters in any reference citation style, although it works best with standard styles."
Conclusion: I’ll play with it some more, but Simple-Text Query may work best for conventional, mainstream journal references.
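Simple-Text Query itself is a Web form, but the same kind of lookup can be scripted. The sketch below is a minimal illustration that uses CrossRef's public REST API (api.crossref.org), which is a separate service from Simple-Text Query, to ask for the best-matching DOI for a free-text citation. Treat it as a hedged sketch of the general idea, not a description of CrossRef's own tool.

```python
# A minimal sketch (not the Simple-Text Query service itself): look up the
# best-matching DOI for a free-text citation via CrossRef's public REST API.
import requests

def find_doi(citation):
    """Return the best-matching DOI for a free-text citation, or None."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": citation, "rows": 1},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return items[0]["DOI"] if items else None

if __name__ == "__main__":
    citation = (
        "Allard, Suzie, Thura R. Mack, and Melanie Feltner-Reichert. "
        "\"The Librarian's Role in Institutional Repositories: A Content "
        "Analysis of the Literature.\" Reference Services Review 33, "
        "no. 3 (2005): 325-336."
    )
    # Simple-Text Query matched this reference to doi:10.1108/00907320510611357 above.
    print(find_doi(citation))
```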
Using a new purchase service, individuals will be able to purchase JSTOR articles for modest fees (currently $5.00 to $14.00) from publishers that participate in this service. JSTOR describes the service as follows:
An extension of JSTOR’s efforts to better serve scholars is a new article purchase service. This service is an opt-in program for JSTOR participating publishers and will enable them to sell single articles for immediate download. Researchers following direct links to articles will be presented with an option to purchase an article from the publisher if the publisher chooses to participate in this program and if the specific content requested is available through the program. The purchase option will only be presented if the user does not have access to the article. Prior to completing an article purchase users are prompted to first check the availability of the article through a local library or an institutional affiliation with JSTOR.
The International Network for the Availability of Scientific Publications (INASP) has announced that its journals will be included in the CrossRef linking service.
In an INASP press release, Pippa Smart, INASP’s Head of Publishing Initiatives, said:
For journals that are largely invisible to most of the scientific community the importance of linking cannot be overstressed. We are therefore delighted to be working with CrossRef to promote discovery of journals published in the less developed countries. We believe that an integrated discovery mechanism which includes journals from all parts of the world is vital to global research—not only benefiting the editors and publishers with whom we work.
The MOIMS-Repository Audit and Certification BOF mailing list (Moims-rac) has been established to foster the development of an ISO standard for the audit and certification of digital information repositories.
The Wall Street Journal reports that Attributor Corp "has begun testing a system to scan the billions of pages on the Web for clients’ audio, video, images and text—potentially making it easier for owners to request that Web sites take content down or provide payment for its use."
The company will use specialized digital fingerprinting technology in its copy detection service, which will become available in the first quarter of 2007. By the end of December, it will have about 10 billion Web pages in its detection index.
An existing competing service, Copyscape, offers both free and paid copy detection.
Source: Delaney, Kevin J. "Copyright Tool Will Scan Web For Violations." The Wall Street Journal, 18 December 2006, B1.
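Attributor has not disclosed how its fingerprinting works, but the general technique behind text copy detection is well understood. Below is a toy sketch of one common approach, w-shingling: hash every overlapping run of words in a document and estimate the overlap between two documents by comparing their shingle sets. It illustrates the idea only; it is not Attributor's or Copyscape's actual system, which must index billions of pages.

```python
# Toy w-shingling sketch for detecting copied text. Real copy-detection
# services index billions of pages; this only compares two strings.
import hashlib
import re

def shingles(text, w=5):
    """Hash every overlapping run of w words into a set of fingerprints."""
    words = re.findall(r"\w+", text.lower())
    grams = (" ".join(words[i:i + w]) for i in range(len(words) - w + 1))
    return {int(hashlib.md5(g.encode()).hexdigest(), 16) for g in grams}

def resemblance(a, b, w=5):
    """Jaccard similarity of the two documents' shingle sets (0.0-1.0)."""
    sa, sb = shingles(a, w), shingles(b, w)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

if __name__ == "__main__":
    original = "The quick brown fox jumps over the lazy dog near the river bank."
    suspect = "The quick brown fox jumps over the lazy dog near the old mill."
    print(f"resemblance: {resemblance(original, suspect):.2f}")
```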
Matt Pasiewicz and CNI have made available digital audio interviews with a number of prominent attendees at the 2006 Fall CNI Task Force Meeting. Selected interviews are below. More are available on Pasiewicz’s blog.
Version 66 of the Scholarly Electronic Publishing Bibliography is now available. This selective bibliography presents over 2,830 articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet.
The SEPB URL has changed:
http://sepb.digital-scholarship.org/
or http://www.digital-scholarship.org/sepb/sepb.html
There is a mirror site at:
http://www.digital-scholarship.com/sepb/sepb.html
The Scholarly Electronic Publishing Weblog URL has also changed:
http://sepw.digital-scholarship.org/
or http://www.digital-scholarship.org/sepb/sepw/sepw.htm
There is a mirror site at:
http://www.digital-scholarship.com/sepb/sepw/sepw.htm
The SEPW RSS feed is unaffected.
Changes in This Version
The bibliography has the following sections (revised sections are marked with an asterisk):
Table of Contents
1 Economic Issues*
2 Electronic Books and Texts
2.1 Case Studies and History*
2.2 General Works*
2.3 Library Issues*
3 Electronic Serials
3.1 Case Studies and History*
3.2 Critiques
3.3 Electronic Distribution of Printed Journals*
3.4 General Works*
3.5 Library Issues*
3.6 Research*
4 General Works*
5 Legal Issues
5.1 Intellectual Property Rights*
5.2 License Agreements*
6 Library Issues
6.1 Cataloging, Identifiers, Linking, and Metadata*
6.2 Digital Libraries*
6.3 General Works*
6.4 Information Integrity and Preservation*
7 New Publishing Models*
8 Publisher Issues*
8.1 Digital Rights Management*
9 Repositories, E-Prints, and OAI*
Appendix A. Related Bibliographies
Appendix B. About the Author*
Appendix C. SEPB Use Statistics
Scholarly Electronic Publishing Resources includes the following sections:
Cataloging, Identifiers, Linking, and Metadata
Digital Libraries*
Electronic Books and Texts*
Electronic Serials
General Electronic Publishing
Images
Legal
Preservation
Publishers
Repositories, E-Prints, and OAI*
SGML and Related Standards
Further Information about SEPB
The HTML version of SEPB is designed for interactive use. Each major section is a separate file. There are links to sources that are freely available on the Internet. It can be searched using a Google Search Engine. Whether the search results are current depends on Google’s indexing frequency.
In addition to the bibliography, the HTML document includes:
(1) Scholarly Electronic Publishing Weblog (biweekly list of new resources; also available by e-mail—see second URL—and RSS Feed—see third URL)
http://sepw.digital-scholarship.org/
http://www.feedburner.com/fb/a/emailverifySubmit?feedId=51756
http://feeds.feedburner.com/ScholarlyElectronicPublishingWeblogrss
(2) Scholarly Electronic Publishing Resources (directory of over 270 related Web sites)
http://sepr.digital-scholarship.org/
(3) Archive (prior versions of the bibliography)
http://www.digital-scholarship.org/sepb/archive/sepa.htm
The 2005 annual PDF file is designed for printing. The printed bibliography is over 210 pages long. The PDF file is over 560 KB.
http://www.digital-scholarship.org/sepb/archive/60/sepb.pdf
Related Article
An article about the bibliography has been published in The Journal of Electronic Publishing:
The latest update of the Scholarly Electronic Publishing Weblog (SEPW) is now available. It provides information about new scholarly literature and resources related to scholarly electronic publishing, such as books, journal articles, magazine articles, newsletters, technical reports, and white papers. Especially interesting are: The Complete Copyright Liability Handbook for Librarians and Educators, "Copyright Concerns in Online Education: What Students Need to Know," Digital Archiving: From Fragmentation to Collaboration, "Fixing Fair Use," "Mass Digitization of Books," MLA Task Force on Evaluating Scholarship for Tenure and Promotion, "Open Access: Why Should We Have It?," "Predictions for 2007," "Readers’ Attitudes to Self-Archiving in the UK," "The Rejection of D-Space: Selecting Theses Database Software at the University of Calgary Archives," "Taming the Digital Beast," and Understanding Knowledge as a Commons: From Theory to Practice.
The SEPW URL has changed. Use:
http://sepw.digital-scholarship.org/
or http://www.digital-scholarship.org/sepb/sepw/sepw.htm
There is a mirror site at:
http://www.digital-scholarship.com/sepb/sepw/sepw.htm
The RSS feed is unaffected.
For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.
Lawrence Lessig’s Code: Version 2.0 is out. This update of the now classic Code and Other Laws of Cyberspace was written using a Wiki, with Lessig editing and refining that digital text.
The resulting book is under a Creative Commons Attribution-ShareAlike 2.5 License.
It can be freely downloaded in PDF form. Later, the final version of the book will be available on a second Wiki.
What is Collex? The project’s About page describes it in part as follows:
Collex is a set of tools designed to aid students and scholars working in networked archives and federated repositories of humanities materials: a sophisticated COLLections and EXhibits mechanism for the semantic web.
Collex allows users to collect, annotate, and tag online objects and to repurpose them in illustrated, interlinked essays or exhibits. It functions within any modern web browser without recourse to plugins or downloads and is fully networked as a server-side application. By saving information about user activity (the construction of annotated collections and exhibits) as ‘remixable’ metadata, the Collex system writes current practice into the scholarly record and permits knowledge discovery based not only on the characteristics or ‘facets’ of digital objects, but also on the contexts in which they are placed by a community of scholars.
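To make the idea of "remixable" facets and user context a bit more concrete, here is a small hypothetical sketch. It is not Collex's actual data model or API, just an illustration of discovery driven both by an object's inherent facets and by the tags a community of users adds to it.

```python
# Hypothetical sketch of faceted discovery over user-tagged objects.
# This is not Collex's data model; it only illustrates the idea that an
# object's own facets and community-added tags can both drive discovery.
from dataclasses import dataclass, field

@dataclass
class DigitalObject:
    title: str
    facets: dict                      # e.g. {"genre": "poetry", "decade": "1860s"}
    user_tags: set = field(default_factory=set)  # community annotations

def discover(objects, facet=None, value=None, tag=None):
    """Return objects matching an inherent facet and/or a community tag."""
    hits = []
    for obj in objects:
        if facet and obj.facets.get(facet) != value:
            continue
        if tag and tag not in obj.user_tags:
            continue
        hits.append(obj)
    return hits

if __name__ == "__main__":
    corpus = [
        DigitalObject("Goblin Market", {"genre": "poetry", "decade": "1860s"},
                      {"exhibit:victorian-fantasy"}),
        DigitalObject("Middlemarch", {"genre": "novel", "decade": "1870s"}),
    ]
    for obj in discover(corpus, facet="genre", value="poetry",
                        tag="exhibit:victorian-fantasy"):
        print(obj.title)
```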
A detailed description of the project is available in "COLLEX: Semantic Collections & Exhibits for the Remixable Web."
You can see Collex in action at the NINES (a Networked Interface for Nineteenth-Century Electronic Scholarship) project, which also uses IVANHOE ("a shared, online playspace for readers interested in exploring how acts of interpretation get made and reflecting on what those acts mean or might mean") and Juxta ("a cross-platform tool for collating and analyzing any kind or number of textual objects").
The About 9s page identifies key objectives of the NINES project as follows:
- It will create a robust framework to support the authority of digital scholarship and its relevance in tenure and other scholarly assessment procedures.
- It will help to establish a real, practical publishing alternative to the paper-based academic publishing system, which is in an accelerating state of crisis.
- It will address in a coordinated and practical way the question of how to sustain scholarly and educational projects that have been built in digital forms.
- It will establish a base for promoting new modes of criticism and scholarship promised by digital tools.
A message by Liddy Nevile on DC-General has spawned an interesting thread about the need to have a metadata scheme that describes people. Other participants note related efforts, such as BIO, the FOAF Vocabulary Specification, GEDCOM, the North Carolina Encoded Archival Context (EAC) Project, and the XHTML Friends Network.
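Of the vocabularies mentioned, FOAF is probably the most widely deployed. As a rough, hypothetical illustration (not drawn from the DC-General thread), the sketch below uses the Python rdflib library and its bundled FOAF namespace to describe a person and a relationship in RDF; the names and addresses are made up.

```python
# Minimal sketch: describing a person with the FOAF vocabulary via rdflib.
# The person, mailbox, and acquaintance below are invented for illustration.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/people/")

g = Graph()
g.bind("foaf", FOAF)

alice = EX.alice
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice Example")))
g.add((alice, FOAF.mbox, URIRef("mailto:alice@example.org")))
g.add((alice, FOAF.knows, EX.bob))

print(g.serialize(format="turtle"))
```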
The MLA Task Force on Evaluating Scholarship for Tenure and Promotion has issued an important report. (The MLA is the Modern Language Association of America.)
Here’s some background on the report from its Executive Summary:
In 2004 the Executive Council of the Modern Language Association of America created a task force to examine current standards and emerging trends in publication requirements for tenure and promotion in English and foreign language departments in the United States. The council’s action came in response to widespread anxiety in the profession about ever-rising demands for research productivity and shrinking humanities lists by academic publishers, worries that forms of scholarship other than single-authored books were not being properly recognized, and fears that a generation of junior scholars would have a significantly reduced chance of being tenured. The task force was charged with investigating the factual basis behind such concerns and making recommendations to address the changing environment in which scholarship is being evaluated in tenure and promotion decisions.
The task force made 20 key recommendations, including:
3. The profession as a whole should develop a more capacious conception of scholarship by rethinking the dominance of the monograph, promoting the scholarly essay, establishing multiple pathways to tenure, and using scholarly portfolios. . . .
4. Departments and institutions should recognize the legitimacy of scholarship produced in new media, whether by individuals or in collaboration, and create procedures for evaluating these forms of scholarship. . . .
15. The task force encourages further study of the unfulfilled parts of its charge with respect to multiple submissions of manuscripts and comparisons of the number of books published by university presses between 1999 and 2005.
16. The task force recommends establishing concrete measures to support university presses. . . .
19. The task force encourages discussion of the current form of the dissertation (as a monograph-in-progress) and of the current trends in the graduate curriculum.
The Creative Commons has redone its Web site using WordPress and added a new feature: CC Labs, which features development projects.
Current projects include the DHTML License Chooser, the Freedoms License Generator, and the Metadata Lab. (Consulting the Creative Commons Licenses page before using these tools will give you a preview of your license options.)
The symbols used to represent the CC licenses have changed. For example, here’s the Creative Commons Attribution-NonCommercial 2.5 License symbol.
Read more about these changes in Lawrence Lessig’s blog posting.
The South African Learning Commons has published a multimedia introduction to copyright, open content, and open source issues for kids.
It is available for Linux, Mac, and Windows computers, and it is under the Creative Commons Attribution Share-Alike South Africa license.
The STARGATE project has issued its final report. Here’s a brief summary of the project from the Executive Summary:
STARGATE (Static Repository Gateway and Toolkit) was funded by the Joint Information Systems Committee (JISC) and is intended to demonstrate the ease of use of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) Static Repository technology, and the potential benefits offered to publishers in making their metadata available in this way. This technology offers a simpler method of participating in many information discovery services than creating fully-fledged OAI-compliant repositories. It does this by allowing the infrastructure and technical support required to participate in OAI-based services to be shifted from the data provider (the journal) to a third party and allows a single third party gateway provider to provide intermediation for many data providers (journals).
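For readers unfamiliar with the protocol side of this, here is a minimal sketch of what harvesting Dublin Core records from an OAI-PMH endpoint (such as a gateway fronting a static repository) looks like. The base URL is hypothetical; the verb and metadataPrefix parameters are standard OAI-PMH and are not STARGATE-specific.

```python
# Sketch: harvest Dublin Core records from an OAI-PMH endpoint.
# The base URL below is hypothetical; "ListRecords" and "oai_dc" are
# standard OAI-PMH request parameters.
import xml.etree.ElementTree as ET
import requests

BASE_URL = "http://gateway.example.org/oai"   # hypothetical gateway endpoint
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

resp = requests.get(
    BASE_URL,
    params={"verb": "ListRecords", "metadataPrefix": "oai_dc"},
    timeout=30,
)
resp.raise_for_status()
root = ET.fromstring(resp.content)

for record in root.iter(f"{OAI}record"):
    identifier = record.findtext(f"{OAI}header/{OAI}identifier")
    title = record.findtext(f".//{DC}title")
    print(identifier, "-", title)
```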
To support its work, the project developed tools and supporting documentation, which can be found below:
Details about the Open Repositories 2007 conference sessions are now available, including keynotes, poster sessions, presentations, and user groups. For DSpace, EPrints, and Fedora techies, the user group sessions look like a must-attend, with talks by luminaries such as John Ockerbloom and MacKenzie Smith. The presentation sessions include talks by Andrew Treloar, Carl Lagoze and Herbert Van de Sompel, Leslie Johnston, and Simeon Warner, among other notables. Open Repositories 2007 will be held in San Antonio, January 23-26.
Hopefully, the conference organizers plan to make streaming audio and/or video files available after the conference; failing that, PowerPoint files, as were provided for Open Repositories 2006, would also be useful.