NISO Shared E-Resource Understanding Working Group

If you are tired of negotiating a license for every commercial information product that you purchase, there may be hope on the horizon.

The NISO Shared E-Resource Understanding Working Group (SERU), co-chaired by Karla Hahn, Association of Research Libraries, and Judy Luther, Informed Strategies, is addressing this issue.

Here is the group’s charge:

The working group is charged with developing Recommended Practices to be used to support a new mechanism for publishers to sell e-resources without licenses if they feel their perception of risk has been adequately addressed by current law and developing norms of behavior.

The document will be an expression of a set of shared understandings of publisher and library expectations regarding the sale of an electronic resource subscription. Negotiation between publisher perspectives and library perspectives will be needed to develop a useful set of practices.

The working group will build on considerable work to identify key elements of a best practices document already begun during a one-day meeting sponsored by ARL, ALPSP, SSP, and SPARC. All of the participants in that scoping meeting expressed a strong desire to continue to work on this project and form the proposed working group to develop best practices.

A recent article provides more details about SERU, as does its FAQ.

There is also a mailing list. Send a message to SERUinfo-subscribe@list.niso.org to subscribe.

Economists’ Self-Archiving Behavior

Ted C. Bergstrom and Rosemarie Lavaty have deposited an eprint in eScholarship that studies the self-archiving behavior of economists ("How Often Do Economists Self-Archive?").

They summarize their findings in the paper’s abstract:

To answer the question of the paper’s title, we looked at the tables of contents from two recent issues of 33 economics journals and attempted to find a freely available online version of each article. We found that about 90 percent of articles in the most-cited economics journals and about 50 percent of articles in less-cited journals are available. We conduct a similar exercise for political science and find that only about 30 percent of the articles are freely available. The paper reports a regression analysis of the effects of author and article characteristics on likelihood of posting and it discusses the implications of self-archiving for the pricing of subscription-based academic journals.

Their conclusion suggests that significant changes in journal pricing could result from self-archiving:

As more content becomes available in open access archives, publishers are faced with greater availability of close substitutes for their products and library demand for journals is likely to become more price-elastic. The increased price-responsiveness means that profit-maximizing prices will fall. As a result, it can be hoped that commercial publishers will no longer be able to charge subscription prices greatly in excess of average cost. Thus the benefits of self-archiving to the academic community are twofold. There is the direct effect of making a greater portion of the body of research available to scholars everywhere and the secondary effect of reducing the prices charged by publishers who exploit their monopoly power.
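
The mechanism behind this prediction is the textbook monopoly markup rule. As a quick sketch in standard notation (my illustration, not the paper’s own notation): with constant marginal cost c and price elasticity of demand ε, the profit-maximizing price p* satisfies

    \frac{p^* - c}{p^*} = \frac{1}{\varepsilon}
    \qquad\Longrightarrow\qquad
    p^* = \frac{\varepsilon}{\varepsilon - 1}\, c

As self-archived copies become closer substitutes for subscriptions, ε rises, the markup factor ε/(ε - 1) shrinks toward 1, and p* falls toward cost, which is precisely the effect the authors anticipate.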

Senate Poised to Slash NDIIPP Funding

The Disruptive Library Technology Jester and Free Range Librarian blogs have sounded a warning that $47 million of unobligated current-year funding for the National Digital Information Infrastructure and Preservation Program is in serious danger of being rescinded.

House Joint Resolution 20 has passed the House and is now being considered by the Senate.

The NDIIPP 2005 Annual Review provides a detailed look at the work of this important Library of Congress program.

See Murray’s Jester posting for the cutback details and check out his protest letter to Ohio’s Senators.

American Society for Cell Biology Issues Open Access Position Paper

The American Society for Cell Biology (ASCB) has issued an open access position paper ("ASCB Position on Public Access to Scientific Literature").

Here is an excerpt:

The ASCB believes strongly that barriers to scientific communication slow scientific progress. The more widely scientific results are disseminated, the more readily they can be understood, applied, and built upon. The sooner findings are shared, the faster they will lead to new scientific insights and breakthroughs. This conviction has motivated the ASCB to provide free access to all of the research articles in Molecular Biology of the Cell two months after publication, which it has done since 2001. . . .

Some publishers argue that providing free access to their journal’s content will catastrophically erode their revenue base. The experience of many successful research journals demonstrates otherwise; these journals make their online content freely available after a short embargo period that protects subscription revenue. For example, as noted above, the content of Molecular Biology of the Cell is free to all after only two months, yet the journal remains not only financially sound, but profitable. The data clearly show that free access and profitability are not mutually exclusive.

Our goal should be to make research articles freely available as soon as feasible so that science and the public benefit from their expanded use and application. At the same time, it is important that nonprofit societies and other publishers generate sufficient revenues to sustain the costs of reviewing and publishing articles. We believe that a six-month embargo period represents a reasonable compromise between the financial requirements of supporting a journal and the need for access to current research.

Princeton Joins Google Book Search Library Project

The Princeton University Library has announced that it has joined the Google Book Search Library Project.

From the press release:

A new partnership between the Princeton University Library and Google soon will make approximately 1 million books in Princeton’s collection available online in a searchable format.

In a move designed to open Princeton’s vast resources to a broad international audience, the library will work with Google over the next six years to digitize books that are in the public domain and no longer under copyright. . . .

"We will be working with Google in the next several months to choose the subject areas to be digitized and the timetable for the work," [Karin] Trainer said. "Library staff, faculty and students will be invited to suggest which parts of our distinctive collections should be digitized."

Princeton is the 12th institution to join the Google Books Library Project. Books available in the Google Book Search also include those from collections at Harvard, Oxford, Stanford, the University of California, the University of Michigan, the University of Texas-Austin, the University of Virginia, the University of Wisconsin-Madison, the New York Public Library, the University Complutense of Madrid and the National Library of Catalonia.

Google also announced the new partnership in its Inside Google Book Search blog.

Scholarly Electronic Publishing Weblog Update (2/5/07)

The latest update of the Scholarly Electronic Publishing Weblog (SEPW), which provides information about new scholarly literature and resources related to scholarly electronic publishing (books, journal articles, magazine articles, newsletters, technical reports, and white papers), is now available. Especially interesting are: Community Created Content: Law, Business and Policy, "A Comparison of OpenURL Link Resolvers: The Results of a University of Connecticut Libraries Environmental Scan," "Continuing Use of Print-Only Information by Researchers," "A Dublin Core Application Profile for Scholarly Works," "Mandate Momentum in 2007," and "U.S. Institutional Repositories: A Census."

For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.

Recent Object Reuse and Exchange (ORE) Documents

In a previous posting, I discussed the Open Archives Initiative’s Object Reuse and Exchange (ORE) project. ORE is worth watching closely.

Two new documents were released this January:

  • "Report of the January 2007 ORE-TC Meeting," which is: "A detailed report of the results of the meeting of OAI-ORE Technical Committee describing features and requirements of the ORE model and its context in the Web Architecture."
  • "Open Repositories 2007," which is: "A presentation describing OAI-ORE and progress based on the January 2007 ORE Technical Committee Meeting."

Petition to European Commission to Support Open Access Tops 10,000 Signatures

A petition to the European Commission asking it to support the European Union’s "Study on the Economic and Technical Evolution of the Scientific Publication Markets of Europe" has been signed by more than 10,000 people.

From the press release:

Nobel laureates Harold Varmus and Rich Roberts are among the more than ten thousand concerned researchers, senior academics, lecturers, librarians, and citizens from across Europe and around the world who are signing an internet petition calling on the European Commission to adopt policies to guarantee free public access to research results and maximise the worldwide visibility of European research.

Organisations too are lending their support, with the most senior representatives from over 500 education, research and cultural organisations in the world adding their weight to the petition, including CERN, the UK’s Medical Research Council, the Wellcome Trust, the Italian Rector’s Conference, the Royal Netherlands Academy for Arts & Sciences (KNAW) and the Swiss Academy for the Humanities and Social Sciences (SAGW), alongside the petition’s sponsors, SPARC Europe, JISC, the SURF Foundation, the German Research Foundation (DFG) and the Danish Electronic Research Library (DEFF).

From the petition:

Research funding agencies have a central role in determining researchers’ publishing practices. Following the lead of the NIH and other institutions, they should promote and support the archiving of publications in open repositories, after a (possibly domain-specific) time period to be discussed with publishers. This archiving could become a condition for funding.

The following actions could be taken at the European level: (i) Establish a European policy mandating published articles arising from EC-funded research to be available after a given time period in open access archives, and (ii) Explore with Member States and with European research and academic associations whether and how such policies and open repositories could be implemented.

More signatures are needed, especially from EU organizations and individuals.

DOE and British Library to Develop Science.world Portal

The U.S. Department of Energy (DOE) and the British Library have signed an agreement to develop a portal to international science resources called Science.world.

From the press release:

Called ‘Science.world,’ the planned resource would be available for use by scientists in all nations and by anyone interested in science. The approach will capitalise on existing technology to search vast collections of science information distributed across the globe, enabling much-needed access to smaller, less well-known sources of highly valuable science. Following the model of Science.gov, the U.S. interagency science portal that relies on content published by each participating agency, ‘Science.world’ will rely on scientific resources published by each participating nation. Other countries have been invited to participate in this international effort. . . .

Objectives of the ‘Science.world’ initiative are to:

  • Search dispersed, electronic collections in various science disciplines;
  • Provide direct, seamless and free searching of open-source collections and portals;
  • Build upon existing and already successful national models for searching;
  • Complement existing information collections and systems; and
  • Raise the visibility and usage of individual sources of quality science information.
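
A portal of this kind typically fans each query out to the participating collections and merges the results rather than building one central index. A minimal sketch of that pattern in Python (the endpoint URLs and the JSON response shape are hypothetical; Science.world’s actual interfaces have not been published):

    # Fan a query out to several national science collections concurrently
    # and merge the results. The endpoints below are placeholders, not real
    # services, and the assumed JSON response shape is invented for the sketch.
    import json
    import urllib.parse
    import urllib.request
    from concurrent.futures import ThreadPoolExecutor

    ENDPOINTS = [
        "https://science.example.gov/search",    # hypothetical U.S. source
        "https://science.example.ac.uk/search",  # hypothetical U.K. source
    ]

    def search_one(endpoint: str, query: str) -> list[dict]:
        """Query a single collection; tolerate failures so one slow or
        unavailable source does not break the whole federated search."""
        url = endpoint + "?" + urllib.parse.urlencode({"q": query})
        try:
            with urllib.request.urlopen(url, timeout=10) as response:
                return json.load(response).get("results", [])
        except (OSError, ValueError):
            return []

    def federated_search(query: str) -> list[dict]:
        with ThreadPoolExecutor(max_workers=len(ENDPOINTS)) as pool:
            result_lists = list(pool.map(lambda e: search_one(e, query),
                                         ENDPOINTS))
        return [record for results in result_lists for record in results]

    print(federated_search("photovoltaic efficiency"))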

New Yorker Google Book Search Article

The New Yorker has published an article about Google Book Search by Jeffrey Toobin in its February 5, 2007 issue ("Google’s Moon Shot: The Quest for the Universal Library").

Here’s a quote from the article:

Google asserts that its use of the copyrighted books is "transformative," that its database turns a book into essentially a new product. "A key part of the line between what’s fair use and what’s not is transformation," Drummond said. "Yes, we’re making a copy when we digitize. But surely the ability to find something because a term appears in a book is not the same thing as reading the book. That’s why Google Books is a different product from the book itself." In other words, Google says that being able to search books on its site—which it describes as the equivalent of a giant library card catalogue—is not the same as making the books themselves available. But the publishers cite another factor in fair-use analysis: the amount of the copyrighted work that is used in the creation of the new one. Google is copying entire books, which doesn’t sound "fair" to the plaintiff publishers and authors.

Draft White Paper on Acquisitions and Electronic Resource Management Systems Interoperability

The Digital Library Federation’s Electronic Resource Management Initiative Phase II Steering Committee has released a draft white paper on the interoperability of ILS acquisition modules and electronic resource management systems.

Here is the introduction:

Electronic resource management systems are becoming an important tool in many libraries. Commercial ERMS development has been driven in part by the lack of accommodation within integrated library systems for elements specific to electronic resources. Financial aspects of acquiring e-resources, in particular, necessitate recording an array of data not suited to ILS acquisitions modules. Unlike other data recorded in an ERMS such as licensing and administrative terms, a moderate percentage of acquisitions data is redundant, being populated in ILS during the acquisitions process, while also being accommodated within ERMS in accordance with the data structure detailed in Electronic Resource Management: Report of the DLF Electronic Resource Management Initiative (Digital Library Federation, 2004). ERMS implementers are eager to automate the process by which acquisitions data move from their ILS into their ERMS. This interest has grown substantially over the past few months as the prospect of connecting financial data to usage statistics has been facilitated through the Standardized Usage Statistics Harvesting Initiative (SUSHI), a NISO draft standard.

This white paper describes workflows at four libraries; reports on conversations held with product managers and other relevant staff of the leading ERMS; summarizes common themes; and suggests next steps. The paper is a draft for comment; it is hoped that those with interest in this area will provide insight to further this investigation.
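
For context, SUSHI automates the retrieval of COUNTER usage reports over a SOAP web service, which is what would let an ERMS connect financial data to usage data without manual downloads. The sketch below shows roughly what a report request looks like on the wire; the endpoint, identifiers, dates, and header values are placeholders, and the element names follow my reading of the draft schema rather than a tested client:

    # Rough sketch of a SUSHI report request: POST a SOAP envelope asking a
    # vendor's SUSHI server for a COUNTER Journal Report 1 (JR1). Endpoint,
    # IDs, dates, and the SOAPAction header are all assumptions made for
    # illustration; consult the NISO draft standard and the server's WSDL.
    import urllib.request

    ENDPOINT = "https://stats.example.com/sushi"  # hypothetical SUSHI server

    envelope = """<?xml version="1.0" encoding="UTF-8"?>
    <soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"
                   xmlns:s="http://www.niso.org/schemas/sushi">
      <soap:Body>
        <s:ReportRequest>
          <s:Requestor><s:ID>library.example.edu</s:ID></s:Requestor>
          <s:CustomerReference><s:ID>CUST-001</s:ID></s:CustomerReference>
          <s:ReportDefinition Name="JR1" Release="2">
            <s:Filters>
              <s:UsageDateRange>
                <s:Begin>2007-01-01</s:Begin>
                <s:End>2007-01-31</s:End>
              </s:UsageDateRange>
            </s:Filters>
          </s:ReportDefinition>
        </s:ReportRequest>
      </soap:Body>
    </soap:Envelope>"""

    request = urllib.request.Request(
        ENDPOINT,
        data=envelope.encode("utf-8"),
        headers={"Content-Type": "text/xml; charset=utf-8",
                 "SOAPAction": "SushiService:GetReportIn"},  # assumed value
    )
    with urllib.request.urlopen(request) as response:
        print(response.read().decode("utf-8"))  # raw usage report XML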

OAIster Hits 10,000,000 Records

Excerpt from the press release:

We live in an information-driven world—one in which access to good information defines success. OAIster’s growth to 10 million records takes us one step closer to that goal.

Developed at the University of Michigan’s Library, OAIster is a collection of digital scholarly resources. OAIster is also a service that continually gathers these digital resources to remain complete and fresh. As global digital repositories grow, so do OAIster’s holdings.

Popular search engines don’t have the holdings OAIster does. They crawl web pages and index the words on those pages. It’s an outstanding technique for fast, broad information from public websites. But scholarly information, the kind researchers use to enrich their work, is generally hidden from these search engines.

OAIster retrieves these otherwise elusive resources by tapping directly into the collections of a variety of institutions using harvesting technology based on the Open Archives Initiative (OAI) Protocol for Metadata Harvesting. These can be images, academic papers, movies and audio files, technical reports, books, as well as preprints (unpublished works that have not yet been peer reviewed). By aggregating these resources, OAIster makes it possible to search across all of them and return the results of a thorough investigation of complete, up-to-date resources. . . .

OAIster is good news for the digital archives that contribute material to open-access repositories. "[OAIster has demonstrated that]. . . OAI interoperability can scale. This is good news for the technology, since the proliferation is bound to continue and even accelerate," says Peter Suber, author of the SPARC Open Access Newsletter. As open-access repositories proliferate, they will be supported by a single, well-managed, comprehensive, and useful tool.

Scholars will find that searching in OAIster can provide better results than searching in web search engines. Roy Tennant, User Services Architect at the California Digital Library, offers an example: "In OAIster I searched ‘roma’ and ‘world war,’ then sorted by weighted relevance. The first hit nailed my topic—the persecution of the Roma in World War II. Trying ‘roma world war’ in Google fails miserably because Google apparently searches ‘Rome’ as well as ‘Roma.’ The ranking then makes anything about the Roma people drop significantly, and there is nothing in the first few screens of results that includes the word in the title, unlike the OAIster hit."

OAIster currently harvests 730 repositories from 49 countries on 6 continents. In three years, it has more than quadrupled in size and increased from 6.2 million to 10 million in the past year. OAIster is a project of the University of Michigan Digital Library Production Service.
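
For the technically curious, OAI-PMH harvesting of the kind OAIster performs is a simple HTTP-and-XML protocol: issue a ListRecords request, read the records, and follow resumption tokens until none remain. A minimal harvester sketch in Python (the repository endpoint is a placeholder; OAIster’s production harvester is, of course, far more elaborate):

    # Minimal OAI-PMH harvester: walk a repository's ListRecords responses,
    # following resumptionTokens, and print Dublin Core identifiers/titles.
    # BASE_URL is a placeholder; point it at any OAI-PMH endpoint.
    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "https://repository.example.edu/oai"  # hypothetical endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(base_url, metadata_prefix="oai_dc"):
        """Yield (identifier, title) pairs across all result pages."""
        params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
        while True:
            url = base_url + "?" + urllib.parse.urlencode(params)
            with urllib.request.urlopen(url) as response:
                root = ET.fromstring(response.read())
            for record in root.iter(OAI + "record"):
                header = record.find(OAI + "header")
                title = record.find(".//" + DC + "title")
                yield (header.findtext(OAI + "identifier"),
                       title.text if title is not None else None)
            token = root.find(".//" + OAI + "resumptionToken")
            if token is None or not (token.text or "").strip():
                break  # no more pages
            params = {"verb": "ListRecords",
                      "resumptionToken": token.text.strip()}

    for identifier, title in harvest(BASE_URL):
        print(identifier, title)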

Orphan Works Challenge Fails

The U.S. Court of Appeals for the Ninth Circuit has rejected the appeal in Kahle v. Gonzales, leaving the legal status of orphan works unchanged. The plaintiffs’ attorneys were Jennifer Stisa Granick, Lawrence Lessig, and Christopher Sprigman.

Eric Auchard’s article "U.S. Court Upholds Copyright Law on ‘Orphan Works’" gives an overview of the Ninth Circuit’s decision.

The opinion is also available. Here is an excerpt:

Plaintiffs appeal from the district court’s dismissal of their complaint. They allege that the change from an "opt-in" to an "opt-out" copyright system altered a traditional contour of copyright and therefore requires First Amendment review under Eldred v. Ashcroft, 537 U.S. 186, 221 (2003). They also allege that the current copyright term violates the Copyright Clause’s "limited Times" prescription. . . .

Arguments similar to Plaintiffs’ were presented to the Supreme Court in Eldred, which affirmed the constitutionality of the Copyright Term Extension Act against those attacks. The Supreme Court has already effectively addressed and denied Plaintiffs’ arguments. . . .

In March 2004, Plaintiffs Brewster Kahle, Internet Archive, Richard Prelinger, and Prelinger Associates, Inc. filed an amended complaint seeking declaratory judgment and injunctive relief. Brewster Kahle and Internet Archive have built an "Internet library" that offers free access to digitized audio, books, films, websites, and software. Richard Prelinger and Prelinger Associates make digital versions of "ephemeral" films available for free on the internet. Each Plaintiff provides, or intends to provide, access to works that allegedly have little or no commercial value but remain under copyright protection. The difficulty and expense of obtaining permission to place those works on the Internet is overwhelming; ownership of these "orphan" works is often difficult, and sometimes impossible, to ascertain. . . .

Plaintiffs also argue that they should be allowed to present evidence that the present copyright term violates the Copyright Clause’s "limited Times" prescription as the Framers would have understood it. That claim was not directly at issue in Eldred, though Justice Breyer discussed it extensively in his dissent. See Eldred, 537 U.S. at 243. Plaintiffs assert all existing copyrights are effectively perpetual. . . .

Both of Plaintiffs’ main claims attempt to tangentially relitigate Eldred. However, they provide no compelling reason why we should depart from a recent Supreme Court decision.

Creative Commons India to Launch on 1/26/07

Creative Commons India will launch on Friday, January 26, 2007.

From "Creative Commons Readies for India Launch":

Creative Commons-India’s project head Shishir K Jha, assistant professor at the IIT’s Shailesh J. Mehta School of Management, said the project would focus on three specific areas in India.

These are—centres of higher education like the seven IITs, regional technology institutes and management and other institutions. . . .

Creative Commons-India also plans to focus on non-profit and non-governmental organisations and corporates keen on adopting easier-to-share licences for the dissemination of their documents.

2006 PACS Review Use Statistics

The Public-Access Computer Systems Review (PACS Review) was a freely available e-journal that I founded in 1989. It allowed authors to retain their copyrights, and it had a liberal copyright policy for noncommercial use. Its last issue was published in 1998.

In 2006, there were 763,228 successful requests for PACS Review files, 2,091 average successful requests per day, 751,264 successful requests for pages, and 2,058 average successful requests for pages per day. (A request is for any type of file; a page request is for a content file, such as an HTML, PDF, or Word file.) These requests came from 41,865 distinct host computers.

The requests came from 134 Internet domains. Leaving aside requests from unresolved numerical addresses, the top 15 domains were: .com (Commercial), .net (Networks), .edu (USA Higher Education), .cz (Czech Republic), .jp (Japan), .ca (Canada), .uk (United Kingdom), .au (Australia), .de (Germany), .nl (Netherlands), .org (Non Profit Making Organizations), .in (India), .my (Malaysia), .it (Italy), and .mx (Mexico). At the bottom were domains such as .ms (Montserrat), .fm (Micronesia), .nu (Niue), .ad (Andorra), and .az (Azerbaijan).

Rounded to the nearest thousand, there had previously been 3.5 million successful requests for PACS Review files.
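
For those curious how such figures are derived: they are the kind of numbers a log analyzer computes from web server access logs. A simplified sketch that tallies requests, page requests, and distinct hosts from a Common Log Format file (a generic illustration, not the actual analysis software used for these statistics):

    # Tally total requests, "page" requests (content files such as HTML, PDF,
    # or Word), and distinct hosts from a Common Log Format access log.
    PAGE_EXTENSIONS = (".html", ".htm", ".pdf", ".doc")

    requests = pages = 0
    hosts = set()

    with open("access.log") as log:
        for line in log:
            fields = line.split()
            if len(fields) < 7:
                continue  # skip malformed lines
            host, path = fields[0], fields[6]
            requests += 1
            hosts.add(host)
            if path.lower().endswith(PAGE_EXTENSIONS):
                pages += 1

    days = 365
    print(f"{requests} requests ({requests // days}/day), "
          f"{pages} page requests ({pages // days}/day), "
          f"{len(hosts)} distinct hosts")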

This is the last time that use statistics will be reported for the PACS Review.

Fedora 2.2 Released

The Fedora Project has released version 2.2 of Fedora.

From the announcement:

This is a significant release of Fedora that includes a complete repackaging of the Fedora source and binary distribution so that Fedora can now be installed as a standalone web application (.war) in any web container. This is a first step in positioning Fedora to fit within a standard "enterprise system" environment. A new installer application makes it easy to setup and run Fedora. Fedora now uses Servlet Filters for authentication. To support digital object integrity, the Fedora repository can now be configured to calculate and store checksums for datastream content. This can be done globally, or on selected datastreams. The Fedora API also provides the ability to check content integrity based on checksums. The RDF-based Resource Index has been tuned for better performance. Also, a new high-performing triplestore, backed by Postgres, has been developed that can be plugged into the Resource Index. Fedora contains many other enhancements and bug fixes.
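
The checksum feature is worth a closer look. The idea is simple: record a digest for each datastream at ingest, then recompute and compare it later to detect silent corruption. A generic sketch of the concept in Python (this illustrates the technique, not Fedora’s actual API):

    # Datastream integrity checking in miniature: store a checksum at ingest,
    # recompute at audit time, and compare. Generic illustration only.
    import hashlib

    def checksum(data: bytes, algorithm: str = "sha1") -> str:
        """Return the hex digest of a datastream's content."""
        digest = hashlib.new(algorithm)
        digest.update(data)
        return digest.hexdigest()

    # At ingest: compute and store the digest alongside the datastream.
    stored = checksum(b"datastream content")

    # At audit: recompute from the stored bytes and compare.
    current = checksum(b"datastream content")
    print("intact" if current == stored else "integrity check FAILED")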

ScientificCommons.org: Access to Over 13 Million Digital Documents

ScientificCommons.org is an initiative of the Institute for Media and Communications Management at the University of St. Gallen. It indexes both metadata and full text from digital repositories worldwide, using OAI-PMH to identify relevant documents. The full-text documents are in PDF, PowerPoint, RTF, Microsoft Word, and PostScript formats. After being retrieved from their original repositories, the documents are cached locally at ScientificCommons.org. To date, it has indexed about 13 million documents from over 800 repositories.

Here are some additional features from the About ScientificCommons.org page:

Identification of authors across institutions and archives: ScientificCommons.org identifies authors and assigns them their scientific publications across various archives. Additionally, the social relations between the authors are extracted and displayed. . . .

Semantic combination of scientific information: ScientificCommons.org structures and combines the scientific data into knowledge areas using ontologies. Lexical and statistical methods are used to identify, extract, and analyze keywords. Based on these processes, ScientificCommons.org classifies the scientific data and uses it, for example, for navigational and weighting purposes.

Personalization services: ScientificCommons.org offers researchers the ability to stay informed about new publications via its RSS feed service, which can be customized to a particular discipline or even to a personalized list of keywords. Furthermore, ScientificCommons.org will provide an upload service: every researcher can upload publications directly to ScientificCommons.org and assign existing publications at ScientificCommons.org to their own researcher profile.
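
The "lexical and statistical methods" mentioned above for keyword extraction are commonly built on term weighting. A minimal illustration of the statistical side using TF-IDF in Python (a generic example, not ScientificCommons.org’s actual method):

    # Score terms in a document by TF-IDF: frequent in this document, rare
    # across the collection. Top-scoring terms serve as candidate keywords.
    import math

    documents = [
        "open access repositories improve access to research",
        "metadata harvesting connects digital repositories",
        "research metadata supports discovery services",
    ]
    tokenized = [doc.split() for doc in documents]

    def tf_idf(term: str, doc_tokens: list[str]) -> float:
        tf = doc_tokens.count(term) / len(doc_tokens)
        df = sum(1 for tokens in tokenized if term in tokens)
        idf = math.log(len(tokenized) / df)
        return tf * idf

    # Rank the candidate keywords for the first document.
    first = tokenized[0]
    for term in sorted(set(first), key=lambda t: -tf_idf(t, first))[:3]:
        print(term, round(tf_idf(term, first), 3))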

New UC Report: The Promise of Value-based Journal Prices and Negotiation

The University of California libraries have released The Promise of Value-based Journal Prices and Negotiation: A UC Report and View Forward.

Here is the report’s abstract:

In pursuit of their scholarly communication agenda, the University of California ten-campus libraries have posited and tested the case that a journal’s institutional price can and should be related to its value to the academic enterprise. We developed and tested a set of metrics that comprise "value-based pricing" of scholarly journals. The metrics are the measurable impact of the journal, the transparent measures of production costs, the institutionally-based contributions to the journal, such as editorial labor, and the transaction efficiencies from consortial purchases. Initial modeling and use of the approaches are promising, leading the libraries to employ and further develop the approaches and share their work to date with the larger community.

This excerpt from the press release provides further information:

The report describes a value-based approach that borrows from analysis done by Professors Ted Bergstrom (UC Santa Barbara) and R. Preston McAfee (Caltech) on journal cost-effectiveness (www.journalprices.com). The UC approach also includes suggestions for annual price increases that are tied to production costs; credits for institutionally-based contributions to the journal, such as editorial labor; and credits for business transaction efficiencies from consortial purchases.

Through the report the libraries ask how an explicit method can be established, validated, and communicated for aligning the purchase or license costs of scholarly journals with the value they contribute to the academy and the costs to create and deliver them. In addition to describing the work done to date, the report provides examples of potential cost savings and declares UC’s intention to pursue value-based prices in their negotiations with journal publishers. In addition, the report invites the academic community to work collectively to refine and improve these and other value-based approaches.
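
To make the approach concrete, here is a deliberately hypothetical sketch of how such metrics might combine into a price. The report describes the metrics but does not publish a formula, so the arithmetic below is invented for illustration only:

    # Hypothetical value-based price calculation (NOT the UC report's actual
    # model): start from transparent production costs, scale by the journal's
    # relative impact, then subtract credits for institutional contributions
    # (e.g., editorial labor) and consortial transaction efficiencies.
    def value_based_price(production_cost: float, impact_ratio: float,
                          editorial_credit: float,
                          consortial_credit: float) -> float:
        return (production_cost * impact_ratio
                - editorial_credit - consortial_credit)

    # A journal with average impact (ratio 1.0), $200,000 production cost,
    # $15,000 of campus-contributed editorial labor, and $5,000 in
    # consortial transaction savings would be priced at $180,000 total.
    print(value_based_price(200_000, 1.0, 15_000, 5_000))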

Notre Dame Institutional Digital Repository Phase I Final Report

The University of Notre Dame Libraries have issued a report about their year-long institutional repository pilot project. There is an abbreviated HTML version and a complete PDF version.

From the Executive Summary:

Here is the briefest of summaries regarding what we did, what we learned, and where we think future directions should go:

  1. What we did—In a nutshell we established relationships with a number of content groups across campus: the Kellogg Institute, the Institute for Latino Studies, Art History, Electrical Engineering, Computer Science, Life Science, the Nanovic Institute, the Kaneb Center, the School of Architecture, FTT (Film, Television, and Theater), the Gigot Center for Entrepreneurial Studies, the Institute for Scholarship in the Liberal Arts, the Graduate School, the University Intellectual Property Committee, the Provost’s Office, and General Counsel. Next, we collected content from many of these groups, "cataloged" it, and saved it into three different computer systems: DigiTool, ETD-db, and DSpace. Finally, we aggregated this content into a centralized cache to provide enhanced browsing, searching, and syndication services against the content.
  2. What we learned—We essentially learned four things: 1) metadata matters, 2) preservation now, not later, 3) the IDR requires dedicated people with specific skills, 4) copyright raises the largest number of questions regarding the fulfillment of the goals of the IDR.
  3. Where we are leaning in regard to recommendations—The recommendations take the form of a "Chinese menu" of options, and the options are grouped into "meals." We recommend the IDR continue and include: 1) continuing to do the Electronic Theses & Dissertations, 2) writing and implementing metadata and preservation policies and procedures, 3) taking the Excellent Undergraduate Research to the next level, and 4) continuing to implement DigiTool. There are quite a number of other options, but they may be deemed too expensive to implement.

digitalculturebooks

The University of Michigan Press and the Scholarly Publishing Office of the University of Michigan Library, working together as the Michigan Digital Publishing Initiative, have established digitalculturebooks, which offers free access to digital versions of its published works (print works are fee-based). The imprint focuses on "the social, cultural, and political impact of new media."

The objectives of the imprint are to:

  • develop an open and participatory publishing model that adheres to the highest scholarly standards of review and documentation;
  • study the economics of Open Access publishing;
  • collect data about how reading habits and preferences vary across communities and genres;
  • build community around our content by fostering new modes of collaboration in which the traditional relationship between reader and writer breaks down in creative and productive ways.

Library Journal Academic Newswire notes in its article about digitalculturebooks:

While press officials use the term "open access," the venture is actually more "free access" than open at this stage. Open access typically does not require permission for reuse, only a proper attribution. UM director Phil Pochoda told the LJ Academic Newswire that, while no final decision has been made, the press’s "inclination is to ask authors to request the most restrictive Creative Commons license" for their projects. That license, he noted, requires attribution and would not permit commercial use, such as using it in a subsequent for-sale product, without permission. The Digital Culture Books web site currently reads that "permission must be received for any subsequent distribution."

The imprint’s first publication is The Best of Technology Writing 2006.

(Prior postings about digital presses.)

Has Authorama.com "Set Free" 100 Public Domain Books from Google Book Search?

In a posting on Google Blogoscoped, Philipp Lenssen has announced that he has put up 100 public domain books from Google Book Search on Authorama.

Regarding his action, Lenssen says:

In other words, Google imposes restrictions on these books which the public domain does not impose*. I’m no lawyer, and maybe Google can print whatever guidelines they want onto those books. . . and being no lawyer, most people won’t know if the guidelines are a polite request, or legally enforceable terms**. But as a proof of concept—the concept of the public domain—I’ve now ‘set free’ 100 books I downloaded from Google Book Search by republishing them on my public domain books site, Authorama. I’m not doing this out of disrespect for the Google Books program (which I think is cool, and I’ll credit Google on Authorama) but out of respect for the public domain (which I think is even cooler).

Since Lenssen has retained Google’s usage guidelines in the e-books, it’s unclear how they have been "set free," in spite of the following statement on Authorama’s Books from Google Book Search page:

The following books were downloaded from Google Book Search and are made available here as public domain. You can download, republish, mix and mash these books, for private or public, commercial or non-commercial use.

Leaving aside the above statement, Lenssen’s action appears to violate the following Google usage guideline, where Google asks that users:

Make non-commercial use of the files: We designed Google Book Search for use by individuals, and we request that you use these files for personal, non-commercial purposes.

However, in the above guideline, Google uses the word "request," which suggests voluntary, rather than mandatory, compliance. Google also requests attribution and watermark retention.

Maintain attribution: The Google ‘watermark’ you see on each file is essential for informing people about this project and helping them find additional materials through Google Book Search. Please do not remove it.

Note the use of the word "please."

It’s not clear how to determine if Google’s watermark remains in the Authorama files, but, given the retention of the usage guidelines, it likely does.

So, do Google’s public domain books really need to be "set free"? In its usage guidelines, Google appears to make compliance requests, not compliance requirements. Are such requests binding or not? If so, the language could be clearer. For example, here’s a possible rewording:

Make non-commercial use of the files: Google Book Search is for individual use only, and its files can only be used for personal, non-commercial purposes. All other use is prohibited.

Will Self-Archiving Cause Libraries to Cancel Journal Subscriptions?

There has been a great deal of discussion of late about the impact of self-archiving on library journal subscriptions. Obviously, this is of great interest to journal publishers, who do not want to wake up one morning, rub the sleep from their eyes, and find out over their first cup of coffee at work that libraries have en masse canceled subscriptions because a "tipping point" has been reached. Likewise, open access advocates do not want journal publishers to panic at the prospect of cancellations and try to turn back the clock on liberal self-archiving policies. So, this is not a scenario that anyone wants, except those who would like to simply scrap the existing journal publishing system and start over with a digital tabula rasa.

So, deep breath: Is the end near?

This question hinges on another: Will libraries accept any substitute for a journal that does not provide access to the full, edited, and peer-reviewed contents of that journal?

If the answer is "yes," publishers better get out their survival kits and hunker down for the digital nuclear winter or else change business practices to embrace the new reality. Attempts to fight back by rolling back the clock may just make the situation worse: the genie is out of the bottle.

If the answer is "no," preprints pose no threat, but postprints may under some difficult to attain circumstances.

It is unlikely that a critical mass of author-created postprints (i.e., the author makes the preprint look like the published postprint) will ever emerge. Authors would have to be extremely motivated for this to occur. If you don’t believe me, take a Word file that you submitted to a publisher and make it look exactly like the published article (don’t forget the pagination, because that might be a sticking point for libraries). That leaves publisher postprints (generally PDF files).

For the worst to happen, every author of every paper published in a journal would have to self-archive the final publisher PDF file (or the publishers themselves would have to do it for the authors under mandates).

But would that be enough? Wouldn’t the permanence and stability of the digital repositories housing these postprints be of significant concern to libraries? If such repositories could not be trusted, then libraries would have to attempt to archive the postprints in question themselves; however, since postprints are not by default under copyright terms that would allow this to happen (e.g., they are not under Creative Commons Licenses), libraries may be barred from doing so. There are other issues as well: journal and issue browsing capabilities, the value-added services of indexing and abstracting services, and so on. For now, let’s wave our hands briskly and say that these are all tractable issues.

If the above problems were overcome, a significant one remains: publishers add value in many ways to scholarly articles. Would libraries let the existing system of journal publishing collapse because of self-archiving without a viable substitute for these value-added functions being in place?

There have been proposals for and experiments with overlay journals for some time, as well as other ideas for new quality control strategies, but, to date, none have caught fire. Old-fashioned peer review, copy editing and fact checking, and publisher-based journal design and production still reign, even among the vast majority of e-journals that are not published by conventional publishers. In the Internet age, nothing technological stops tens of thousands of new e-journals using open source journal management software from blooming, but they haven’t so far, have they? Rather, if you use a liberal definition of open access, there are about 2,500 OA journals—a significant achievement; however, there are questions about the longevity of such journals if they are published by small non-conventional publishers such as groups of scholars (e.g., see "Free Electronic Refereed Journals: Getting Past the Arc of Enthusiasm"). Let’s face it—producing a journal is a lot of work, even a small journal that publishes fewer than a hundred papers a year.

Bottom line: a perfect storm is not impossible, but it is unlikely.

Journal 2.0: PLoS ONE Beta Goes Live

The Public Library of Science has released a beta version of its innovative PLoS ONE journal.

Why innovative? First, it’s a multidisciplinary scientific journal, with published articles covering subjects that range from Biochemistry to Virology. Second, it’s a participative journal that allows registered users to annotate and initiate discussions about articles. Open commentary and peer review have been previously implemented in some e-journals (e.g., see JIME: An Interactive Journal for Interactive Media), but PLoS ONE is the most visible of these efforts and, given PLoS’s reputation for excellence, it lends credibility to a concept that has yet to catch fire in the journal publishing world. A nice feature is the "Most Annotated" tab on the home page, which highlights articles that have garnered reader commentary. Third, it’s an open access journal in the full sense of the term, with all articles under the least restrictive Creative Commons license, the Creative Commons Attribution License.

The beta site is a bit slow, probably due to significant interest, so expect some initial browsing delays.

Congratulations to PLoS on PLoS ONE. It’s a journal worth keeping an eye on.

INASP Journals to Be Included in CrossRef

The International Network for the Availability of Scientific Publications (INASP) has announced that its journals will be included in the CrossRef linking service.

In an INASP press release, Pippa Smart, INASP’s Head of Publishing Initiatives, said:

For journals that are largely invisible to most of the scientific community the importance of linking cannot be overstressed. We are therefore delighted to be working with CrossRef to promote discovery of journals published in the less developed countries. We believe that an integrated discovery mechanism which includes journals from all parts of the world is vital to global research—not only benefiting the editors and publishers with whom we work.

Hear Luminaries Interviewed at the 2006 Fall CNI Task Force Meeting

Matt Pasiewicz and CNI have made available digital audio interviews with a number of prominent attendees at the 2006 Fall CNI Task Force Meeting. Selected interviews are below. More are available on Pasiewicz’s blog.