Notre Dame Institutional Digital Repository Phase I Final Report

The University of Notre Dame Libraries have issued a report about their year-long institutional repository pilot project. There is an abbreviated HTML version and a complete PDF version.

From the Executive Summary:

Here is the briefest of summaries regarding what we did, what we learned, and where we think future directions should go:

  1. What we did—In a nutshell we established relationships with a number of content groups across campus: the Kellogg Institute, the Institute for Latino Studies, Art History, Electrical Engineering, Computer Science, Life Science, the Nanovic Institute, the Kaneb Center, the School of Architecture, FTT (Film, Television, and Theater), the Gigot Center for Entrepreneurial Studies, the Institute for Scholarship in the Liberal Arts, the Graduate School, the University Intellectual Property Committee, the Provost’s Office, and General Counsel. Next, we collected content from many of these groups, "cataloged" it, and saved it into three different computer systems: DigiTool, ETD-db, and DSpace. Finally, we aggregated this content into a centralized cache to provide enhanced browsing, searching, and syndication services against the content.
  2. What we learned—We essentially learned four things: 1) metadata matters, 2) preservation now, not later, 3) the IDR requires dedicated people with specific skills, 4) copyright raises the largest number of questions regarding the fulfillment of the goals of the IDR.
  3. Where we are leaning in regards to recommendations—The recommendations take the form of a "Chinese menu" of options, and the options are grouped into "meals." We recommend the IDR continue and include: 1) continuing to do the Electronic Theses & Dissertations, 2) writing and implementing metadata and preservation policies and procedures, 3) taking the Excellent Undergraduate Research to the next level, and 4) continuing to implement DigiTool. There are quite a number of other options, but they may be deemed too expensive to implement.

digitalculturebooks

The University of Michigan Press and the Scholarly Publishing Office of the University of Michigan Library, working together as the Michigan Digital Publishing Initiative, have established digitalculturebooks, which offers free access to digital versions of its published works (print works are fee-based). The imprint focuses on "the social, cultural, and political impact of new media."

The objectives of the imprint are to:

  • develop an open and participatory publishing model that adheres to the highest scholarly standards of review and documentation;
  • study the economics of Open Access publishing;
  • collect data about how reading habits and preferences vary across communities and genres;
  • build community around our content by fostering new modes of collaboration in which the traditional relationship between reader and writer breaks down in creative and productive ways.

Library Journal Academic Newswire notes in its article about digitalculturebooks:

While press officials use the term "open access," the venture is actually more "free access" than open at this stage. Open access typically does not require permission for reuse, only a proper attribution. UM director Phil Pochoda told the LJ Academic Newswire that, while no final decision has been made, the press’s "inclination is to ask authors to request the most restrictive Creative Commons license" for their projects. That license, he noted, requires attribution and would not permit commercial use, such as using it in a subsequent for-sale product, without permission. The Digital Culture Books web site currently reads that "permission must be received for any subsequent distribution."

The imprint’s first publication is The Best of Technology Writing 2006.

(Prior postings about digital presses.)

Has Authorama.com "Set Free" 100 Public Domain Books from Google Book Search?

In a posting on Google Blogoscoped, Philipp Lenssen has announced that he has put up 100 public domain books from Google Book Search on Authorama.

Regarding his action, Lenssen says:

In other words, Google imposes restrictions on these books which the public domain does not impose*. I’m no lawyer, and maybe Google can print whatever guidelines they want onto those books. . . and being no lawyer, most people won’t know if the guidelines are a polite request, or legally enforceable terms**. But as a proof of concept—the concept of the public domain—I’ve now ‘set free’ 100 books I downloaded from Google Book Search by republishing them on my public domain books site, Authorama. I’m not doing this out of disrespect for the Google Books program (which I think is cool, and I’ll credit Google on Authorama) but out of respect for the public domain (which I think is even cooler).

Since Lenssen has retained Google’s usage guidelines in the e-books, it’s unclear how they have been "set free," in spite of the following statement on Authorama’s Books from Google Book Search page:

The following books were downloaded from Google Book Search and are made available here as public domain. You can download, republish, mix and mash these books, for private or public, commercial or non-commercial use.

Leaving aside the above statement, Lenssen’s action appears to violate the following Google usage guideline, where Google asks that users:

Make non-commercial use of the files: We designed Google Book Search for use by individuals, and we request that you use these files for personal, non-commercial purposes.

However, in the above guideline, Google uses the word "request," which suggests voluntary, rather than mandatory, compliance. Google also requests attribution and watermark retention.

Maintain attribution: The Google ‘watermark’ you see on each file is essential for informing people about this project and helping them find additional materials through Google Book Search. Please do not remove it.

Note the use of the word "please."

It’s not clear how to determine if Google’s watermark remains in the Authorama files, but, given the retention of the usage guidelines, it likely does.

So, do Google’s public domain books really need to be "set free"? In its usage guidelines, Google appears to make compliance requests, not compliance requirements. Are such requests binding or not? If so, the language could be clearer. For example, here’s a possible rewording:

Make non-commercial use of the files: Google Book Search is for individual use only, and its files can only be used for personal, non-commercial purposes. All other use is prohibited.

Will Self-Archiving Cause Libraries to Cancel Journal Subscriptions?

There has been a great deal of discussion of late about the impact of self-archiving on library journal subscriptions. Obviously, this is of great interest to journal publishers who do not want to wake up one morning, rub the sleep from their eyes, and find out over their first cup of coffee at work that libraries have canceled subscriptions en masse because a "tipping point" has been reached. Likewise, open access advocates do not want journal publishers to panic at the prospect of cancellations and try to turn back the clock on liberal self-archiving policies. So, this is not a scenario that anyone wants, except those who would like to simply scrap the existing journal publishing system and start over with a digital tabula rasa.

So, deep breath: Is the end near?

This question hinges on another: Will libraries accept any substitute for a journal that does not provide access to the full, edited, and peer-reviewed contents of that journal?

If the answer is "yes," publishers better get out their survival kits and hunker down for the digital nuclear winter or else change business practices to embrace the new reality. Attempts to fight back by rolling back the clock may just make the situation worse: the genie is out of the bottle.

If the answer is "no," preprints pose no threat, but postprints may pose one under some difficult-to-attain circumstances.

It is unlikely that a critical mass of author-created postprints (i.e., the author makes the preprint look like the postprint) will ever emerge. Authors would have to be extremely motivated for this to occur. If you don’t believe me, take a Word file that you submitted to a publisher and make it look exactly like the published article (don’t forget the pagination, because that might be a sticking point for libraries). That leaves publisher postprints (generally PDF files).

For the worst to happen, every author of every paper published in a journal would have to self-archive the final publisher PDF file (or the publishers themselves would have to do it for the authors under mandates).

But would that be enough? Wouldn’t the permanence and stability of the digital repositories housing these postprints be of significant concern to libraries? If such repositories could not be trusted, then libraries would have to attempt to archive the postprints in question themselves; however, since postprints are not by default under copyright terms that would allow this to happen (e.g., they are not under Creative Commons Licenses), libraries may be barred from doing so. There are other issues as well: journal and issue browsing capabilities, the value-added services of indexing and abstracting services, and so on. For now, let’s wave our hands briskly and say that these are all tractable issues.

If the above problems were overcome, a significant one remains: publishers add value in many ways to scholarly articles. Would libraries let the existing system of journal publishing collapse because of self-archiving without a viable substitute for these value-added functions being in place?

There have been proposals for and experiments with overlay journals for some time, as well as other ideas for new quality control strategies, but, to date, none have caught fire. Old-fashioned peer review, copy editing and fact checking, and publisher-based journal design and production still reign, even among the vast majority of e-journals that are not published by conventional publishers. In the Internet age, nothing technological stops tens of thousands of new e-journals using open source journal management software from blooming, but they haven’t so far, have they? Rather, if you use a liberal definition of open access, there are about 2,500 OA journals—a significant achievement; however, there are questions about the longevity of such journals if they are published by small non-conventional publishers such as groups of scholars (e.g., see "Free Electronic Refereed Journals: Getting Past the Arc of Enthusiasm"). Let’s face it—producing a journal is a lot of work, even a small journal that publishes fewer than a hundred papers a year.

Bottom line: a perfect storm is not impossible, but it is unlikely.

Journal 2.0: PLoS ONE Beta Goes Live

The Public Library of Science has released a beta version of its innovative PLoS ONE journal.

Why innovative? First, it’s a multidisciplinary scientific journal, with published articles covering subjects that range from Biochemistry to Virology. Second, it’s a participative journal that allows registered users to annotate and initiate discussions about articles. Open commentary and peer review have been previously implemented in some e-journals (e.g., see JIME: An Interactive Journal for Interactive Media), but PLoS ONE is the most visible of these efforts and, given PLoS’s reputation for excellence, it lends credibility to a concept that has yet to catch fire in the journal publishing world. A nice feature is the “Most Annotated” tab on the home page that highlights articles that have garnered reader commentary. Third, it’s an open access journal in the full sense of the term, with all articles under the least restrictive Creative Commons license, the Creative Commons Attribution License.

The beta site is a bit slow, probably due to significant interest, so expect some initial browsing delays.

Congratulations to PLoS on PLoS ONE. It’s a journal worth keeping an eye on.

Certifying Digital Repositories: DINI Draft

The Electronic Publishing Working Group of the Deutsche Initiative für Netzwerkinformation (DINI) has released an English draft of its DINI-Certificate Document and Publication Services 2007.

It outlines criteria for repository author support; indexing; legal aspects; long-term availability; logs and statistics; policies; security, authenticity and data integrity; and service visibility. It also provides examples.

STARGATE Final Report and Tools

The STARGATE project has issued its final report. Here’s a brief summary of the project from the Executive Summary:

STARGATE (Static Repository Gateway and Toolkit) was funded by the Joint Information Systems Committee (JISC) and is intended to demonstrate the ease of use of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) Static Repository technology, and the potential benefits offered to publishers in making their metadata available in this way. This technology offers a simpler method of participating in many information discovery services than creating fully-fledged OAI-compliant repositories. It does this by allowing the infrastructure and technical support required to participate in OAI-based services to be shifted from the data provider (the journal) to a third party and allows a single third party gateway provider to provide intermediation for many data providers (journals).
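
To make the gateway model a bit more concrete, here is a minimal harvesting sketch in Python. It issues a standard OAI-PMH ListRecords request against a Static Repository Gateway and prints the Dublin Core titles it finds. The gateway and static repository URLs are invented for illustration; only the verb, parameters, and XML namespaces come from the OAI-PMH specification.

    # A minimal sketch of harvesting from an OAI-PMH Static Repository Gateway.
    # The URLs below are invented; the verb, parameters, and namespaces are
    # standard OAI-PMH.
    from urllib.parse import urlencode
    from urllib.request import urlopen
    import xml.etree.ElementTree as ET

    GATEWAY = "http://gateway.example.org/oai"            # hypothetical gateway
    STATIC_REPO = "journal.example.org/static-repo.xml"   # hypothetical static repository file

    # A gateway typically exposes each static repository at a URL derived from
    # the gateway base URL plus the repository file's location.
    endpoint = f"{GATEWAY}/{STATIC_REPO}"

    params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urlopen(f"{endpoint}?{params}") as response:
        tree = ET.parse(response)

    ns = {
        "oai": "http://www.openarchives.org/OAI/2.0/",
        "dc": "http://purl.org/dc/elements/1.1/",
    }

    # Print the identifier and title of each harvested record.
    for record in tree.iter("{http://www.openarchives.org/OAI/2.0/}record"):
        identifier = record.find(".//oai:identifier", ns)
        title = record.find(".//dc:title", ns)
        print(identifier.text if identifier is not None else "?",
              "-",
              title.text if title is not None else "(no title)")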

To support its work, the project developed tools and supporting documentation, which can be found below:

Details on Open Repositories 2007 Talks

Details about the Open Repositories 2007 conference sessions are now available, including keynotes, poster sessions, presentations, and user groups. For DSpace, EPrints, and Fedora techies, the user group sessions look like a don’t-miss, with talks by luminaries such as John Ockerbloom and MacKenzie Smith. The presentation sessions include talks by Andrew Treloar, Carl Lagoze and Herbert Van de Sompel, Leslie Johnston, and Simeon Warner, among other notables. Open Repositories 2007 will be held in San Antonio, January 23-26.

Hopefully, the conference organizers plan to make streaming audio and/or video files available after the conference, but PowerPoints, as were provided for Open Repositories 2006, would also be useful.

Under the Hood of PLoS ONE: The Open Source TOPAZ E-Publishing System

PLoS is building its innovative PLoS ONE e-journal, which will incorporate both traditional and open peer review, using the open source TOPAZ software. (For a detailed description of the PLoS ONE peer review process, check out "ONE for All: The Next Step for PLoS.")

What is TOPAZ? Its Web site doesn’t provide specifics, but "PLoS ONE—Technical Background" by Richard Cave does:

The core of TOPAZ is a digital information repository called Fedora (Flexible Extensible Digital Object Repository Architecture). Fedora is an Open Source content management application that supports the creation and management of digital objects. The digital objects contain metadata to express internal and external relationships in the repository, like articles in a journal or the text, images and video of an article. This relationship metadata can also be searched using semantic web query languages. Fedora is jointly developed by Cornell University’s computer science department and the University of Virginia Libraries.

The metastore Kowari will be used with Fedora to support Resource Description Framework (RDF, http://en.wikipedia.org/wiki/Resource_Description_Framework) metadata within the repository.

The PLoS ONE web interface will be built with AJAX. Client-side APIs will create the community features (e.g. annotations, discussion threads, ratings, etc.) for the website. As more new features are available on the TOPAZ architecture, we will launch them on PLoS ONE.

There was a TOPAZ Wiki at PLoS. It’s gone, but its pages are still cached by Google. The Wiki suggests that TOPAZ is likely to support Atom/RSS feeds, full-text search, and OAI-PMH, among other possible features.
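
The "relationship metadata" idea is easy to illustrate. The sketch below is not TOPAZ or Fedora code; it is simply an illustration, using the rdflib Python library and invented URIs, of how an article and its parts might be expressed as RDF triples and then searched with a semantic web query language (SPARQL).

    # Illustration only (not TOPAZ/Fedora code): expressing article relationships
    # as RDF triples and querying them with SPARQL. All URIs are invented.
    from rdflib import Graph, Namespace, URIRef, Literal

    EX = Namespace("http://example.org/terms/")                    # made-up vocabulary
    journal = URIRef("http://example.org/journals/plosone")
    article = URIRef("http://example.org/articles/pone.0000001")
    figure = URIRef("http://example.org/articles/pone.0000001/figure1")

    g = Graph()
    g.bind("ex", EX)

    # Internal and external relationships: an article belongs to a journal,
    # and a figure is part of the article.
    g.add((article, EX.isPartOf, journal))
    g.add((figure, EX.isPartOf, article))
    g.add((article, EX.title, Literal("A hypothetical PLoS ONE article")))

    # Search the relationship metadata with a SPARQL query.
    results = g.query("""
        PREFIX ex: <http://example.org/terms/>
        SELECT ?part
        WHERE { ?part ex:isPartOf <http://example.org/articles/pone.0000001> . }
    """)
    for row in results:
        print(row.part)   # prints the figure's URI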

For information about other open source e-journal publishing systems, see "Open Source Software for Publishing E-Journals."

Results from the DSpace Community Survey

DSpace conducted an informal survey of its open source community in October 2006. Here are some highlights:

  • The vast majority of respondents (77.6%) used or planned to use DSpace for a university IR.
  • The majority of systems were in production (53.4%); pilot testing was second (35.3%).
  • Preservation and interoperability were the highest priority system features (61.2% each), followed by search engine indexing (57.8%) and open access to refereed articles (56.9%). (Percentages are of respondents who rated these features "very important.") Only 5.2% thought that OA to refereed articles was unimportant.
  • The most common types of current IR content were refereed scholarly articles and theses/dissertations (55.2% each), followed by "other" (48.6%) and grey literature (47.4%).
  • The most popular types of content that respondents were planning to add to their IRs were datasets (53.4%), followed by audio and video (46.6% each).
  • The most frequently used type of metadata was customized Dublin Core (80.2%), followed by XML metadata (13.8%).
  • The most common update pattern was to regularly migrate to new versions; however, it took a "long time to merge in my customizations/configuration" (44.8%).
  • The most common types of modification were minor cosmetics (34.5%), new features (26.7%), and significant user interface customization (21.6%).
  • Only 30.2% were totally comfortable with editing/customizing DSpace; 56.9% were somewhat comfortable and 12.9% were not comfortable.
  • Plug-in use is light: for example, 11.2% use SRW/U, 8.6% use Manakin, and 5.2% use TAPIR (ETDs).
  • The most desired feature for the next version is a more easily customized user interface (17.5%), closely followed by improved modularity (16.7%).

For information about other recent institutional repository surveys, see "ARL Institutional Repositories SPEC Kit" and "MIRACLE Project’s Institutional Repository Survey."

QuickTime Videos and PowerPoints from the Transforming Scholarly Communication Symposium

When I was chairing the Scholarly Communications Public Relations Task Force at the UH Libraries, the task force initiated a series of projects under the name "Transforming Scholarly Communication" to increase awareness of key issues on the UH campus: a Website, a Weblog, and a symposium.

I’m pleased to announce that both the PowerPoint presentations and the QuickTime videos of the symposium speeches are now available. Thanks again to our speaker panel for participating in this event.

Ray English, Director of Libraries at Oberlin College and Chair of the SPARC Steering Committee, kicked things off with a talk on "The Crisis in Scholarly Communication" (PowerPoint, QuickTime Video, and "Sites and Cites for the Struggle: A Selective Scholarly Communication Bibliography").

Next, Corynne McSherry, Staff Attorney at the Electronic Frontier Foundation and author of Who Owns Academic Work?: Battling for Control of Intellectual Property, spoke on "Copyright in Cyberspace: Defending Fair Use" (PowerPoint and QuickTime Video).

Finally, Peter Suber, Research Professor of Philosophy at Earlham College, Senior Researcher at the Scholarly Publishing and Academic Resources Coalition (SPARC), and the Open Access Project Director at Public Knowledge, discussed "What Is Open Access?" (PowerPoint and QuickTime Video).

New OA Google Custom Search Engines

I’ve enhanced Open Access Update with four new Google Custom Search Engines:

  1. Open Access Mailing Lists (these are lists that have general discussion of OA topics)
  2. Open Access Serials
  3. Open Access Weblogs
  4. Open Access Wikis

The indexed works contain significant information about open access topics and are freely available.

See Open Access Update for details about the included works.

Open Access Bibliography Now Searchable

The Open Access Bibliography: Liberating Scholarly Literature with E-Prints and Open Access Journals is now searchable using a Google Custom Search Engine. The new search box is just before the table of contents in the bibliography’s home page. Only the bibliography sections of the document are searchable (e.g., the "Key Open Access Concepts" section is excluded).

Keep in mind that a search retrieves bibliography section files by title, with a single representative result shown from each section. To see all hits in a section, click on its cached page, which shows the retrieved search term(s) highlighted in yellow.

Rice University Press Publishes Its First Open Access Digital Document

The recently re-established Rice University Press, which was reborn as a digital press, has published its first e-report: Art History and Its Publications in the Electronic Age by Hilary Ballon (Professor and Director of Art Humanities at the Columbia University Department of Art History and Archaeology) and Mariet Westermann (Director and Professor at the Institute of Fine Arts, New York University).

The introduction notes:

Just as we were finishing our report, Rice University Press announced that it would re-launch itself as a fully electronic press with a special commitment to art history. We were delighted to find Rice willing to partner with the Council on Library and Information Resources (CLIR) to publish our report electronically, with the kinds of hyper-linking, response capability, and print-on-demand options we consider vital to the success of scholarly publication on line. At Rice University Press, Chuck Henry, Chuck Bearden, and Kathi Fletcher generously steered us through the technological and legal process. We received enthusiastic support at CLIR from Susan Perry, Michael Ann Holly, Kathlin Smith, and Ann Okerson.

Like all digital works to be published by the press, this one is under a Creative Commons Attribution 2.0 license. At this time, it does not appear that a print-on-demand version of the work is available from Rice University Press.

OAI’s Object Reuse and Exchange Initiative

The Open Archives Initiative has announced its Object Reuse and Exchange (ORE) initiative:

Object Reuse and Exchange (ORE) will develop specifications that allow distributed repositories to exchange information about their constituent digital objects. These specifications will include approaches for representing digital objects and repository services that facilitate access and ingest of these representations. The specifications will enable a new generation of cross-repository services that leverage the intrinsic value of digital objects beyond the borders of hosting repositories. . . . its real importance lies in the potential for these distributed repositories and their contained objects to act as the foundation of a new digitally-based scholarly communication framework. Such a framework would permit fluid reuse, refactoring, and aggregation of scholarly digital objects and their constituent parts—including text, images, data, and software. This framework would include new forms of citation, allow the creation of virtual collections of objects regardless of their location, and facilitate new workflows that add value to scholarly objects by distributed registration, certification, peer review, and preservation services. Although scholarly communication is the motivating application, we imagine that the specifications developed by ORE may extend to other domains.

OAI-ORE is being funded by the Andrew W. Mellon Foundation for a two-year period.

Presentations from the Augmenting Interoperability across Scholarly Repositories meeting are a good source of further information about the thinking behind the initiative, as is the "Pathways: Augmenting Interoperability across Scholarly Repositories" preprint.
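
Since the ORE specifications have yet to be written, the following is only a rough conceptual sketch of the kind of compound object the announcement describes: a scholarly work that aggregates text, data, and software held in different repositories, expressed here as a plain Python structure that two repositories could exchange. Every name and URL in it is invented; this is not the ORE specification.

    # A conceptual sketch only; this is NOT the ORE specification (which has not
    # yet been written). All identifiers below are invented.
    import json

    compound_object = {
        "aggregation": "http://repo-a.example.edu/objects/article-42",
        "aggregates": [
            {"uri": "http://repo-a.example.edu/objects/article-42/text.pdf",
             "type": "text"},
            {"uri": "http://repo-b.example.edu/datasets/survey-2006.csv",
             "type": "data"},          # held in a different repository
            {"uri": "http://repo-c.example.edu/code/analysis.tar.gz",
             "type": "software"},
        ],
        "services": {
            "peer_reviewed_by": "http://overlay-journal.example.org/reviews/42",
            "preserved_by": "http://archive.example.org/",
        },
    }

    # A cross-repository service could exchange a serialized description like
    # this and rebuild the virtual collection of parts on the other side.
    print(json.dumps(compound_object, indent=2))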

The Ohio State University Press Open Access Initiative

The Ohio State University Press is providing free access to over 30 out-of-print books that it has published as part of its open access initiative. Chapters and other book sections are provided as PDF files. The books remain under traditional copyright statements.

Examples include:

Digital University/Library Presses, Part 11: Other Digital Presses

Here are brief descriptions of eleven more digital university/library presses, bringing the total number of presses covered by this series of postings to 21.

  1. Clemson University Digital Press: "The Clemson University Digital Press was established in 2000 to exist within the college of Architecture, Arts, and Humanities at Clemson. . . . The press generally publishes two books per annum, in addition to maintaining its flagship journals, the semiannual South Carolina Review, and the annual Shakespeare journal, The Upstart Crow." (See the publication list.)
  2. EPIC: "The Electronic Publishing Initiative at Columbia (EPIC) is a groundbreaking new initiative in digital publishing at Columbia University that involves Columbia University Press, the Libraries, and Academic Information Systems. Its mission is to create new kinds of scholarly and educational publications through the use of new media technologies in an integrated research and production environment. Working with the producers of intellectual property at Columbia University and other leading academic institutions, it aims to make these digital publications self-sustaining through subscription sales to institutions and individual users."
  3. eScholarship Repository: The eScholarship Repository publishes both journals and peer-reviewed series (see the publication list). eScholarship works in partnership with the University of California Press, which has an active digital publishing program. Notable efforts include eScholarship Editions, the University of California International and Area Studies Digital Collection, and University of California Publications.
  4. Digital Library and Archives, Virginia Tech University Libraries: "The Scholarly Communications Project (SCP) expanded its resources and services and merged with Special Collections to become the university’s Digital Library and Archives in July 2000. SCP began working with members of the university community in 1989 to help them create online resources such as electronic journals, and to use library services such as electronic reserve with its centralized access to online course materials." (See the journal list.)
  5. Praxis (e)Press: "Praxis (e)Press is an open access e-book publishing house located simultaneously at Okanagan University College, Vernon, and the University of Victoria, Victoria, British Columbia, Canada." (See the book list.)
  6. Project Euclid: "Project Euclid’s mission is to advance scholarly communication in the field of theoretical and applied mathematics and statistics. Project Euclid is designed to address the unique needs of low-cost independent and society journals. Through a collaborative partnership arrangement, these publishers join forces and participate in an online presence with advanced functionality, without sacrificing their intellectual or economic independence or commitment to low subscription prices. Full-text searching, reference linking, interoperability through the Open Archives Initiative, and long-term retention of data are all important components of the project." (See the journal list.)
  7. Project MUSE: "MUSE began in 1993 as a pioneering joint project of the Johns Hopkins University Press and the Milton S. Eisenhower Library at JHU. Grants from the Mellon Foundation and the National Endowment for the Humanities allowed MUSE to go live with JHU Press journals in 1995. Journals from other publishers were first incorporated in 2000. . . . Today, MUSE is still a not-for-profit collaboration between the participating publishers and MSEL, with the goal of disseminating quality scholarship via a sustainable model that meets the needs of both libraries and publishers." (See the journal list.)
  8. Rice University Press: "Using the open-source e-publishing platform Connexions, Rice University Press is returning from a decade-long hiatus to explore models of peer-reviewed scholarship for the 21st century. The technology offers authors a way to use multimedia—audio files, live hyperlinks or moving images—to craft dynamic scholarly arguments, and to publish on-demand original works in fields of study that are increasingly constrained by print publishing."
  9. Scholarly Publishing Office, University of Michigan Library: "The office supports the traditional constructs of journal and monographic publication in an online environment, as well as publishing scholarly work expressly designed for electronic delivery. . . . It is currently developing a set of services in journal, monograph and multimedia publishing in two related, but distinct, ways. It provides cost-effective services to all members of the campus community, as resources and capacity allow. It also actively recruits scholarly journals, monographs and projects of exceptionally high quality. . . . Among these are projects that draw on the significant digital collections already available at the University Library." (See the publications and projects list.)
  10. Sydney University Press: "Sydney University Press was restarted in 2003 as a digital and print ‘on demand’ publisher. . . . SUP draws on the digital library collection of the University of Sydney Library’s Scholarly Text and Image Service (SETIS). . . . SUP provides the ability to purchase a print copy of selected texts to anyone, anywhere. SUP has partnered with the Copyright Agency Ltd (CAL) to bring out-of-print Australian novels back into circulation. . . . SUP publishes new work based on teaching and research from the University of Sydney and other Australian academic institutions." (See the book list.)
  11. The University of Texas Houston Electronic Press: "The U.T. Houston Electronic Press exists to advance knowledge in the health sciences by electronically disseminating the results of scholarly activities for the furtherance of education, research and service. It is an open access digital resource."

Prior postings on this topic:

Digital University/Library Presses, Part 10: Parallel Press

The University of Wisconsin-Madison Libraries’ Parallel Press publishes "print-on-demand books that parallel online publications, as well as chapbooks featuring the work of regional poets and UW historians." Many of the books are reprints of out-of-print works. It appears that the Parallel Press was established in 1998.

The relationship between the press and the University of Wisconsin-Madison Libraries’ digital collections is described as follows:

While managed independently of the Parallel Press, the UW-Madison Libraries’ digital collections are inexorably linked with the press’ print-on-demand publishing operations as the original source of all reprinted material. If enough interest is shown, nearly any of these online resources could be the basis of a future Parallel Press print publication.

While the chapbooks are only available in low-cost print editions, the books have a freely available digital version. Examples of books include (links are to the digital versions):

Prior postings on this topic:

MIRACLE Project’s Institutional Repository Survey

The MIRACLE (Making Institutional Repositories A Collaborative Learning Environment) project at the University of Michigan’s School of Information presented a paper at JCDL 2006 titled "Nationwide Census of Institutional Repositories: Preliminary Findings."

MIRACLE’s sample population was 2,147 library directors at four-year US colleges and universities. The paper presents preliminary findings from 273 respondents.

Respondents characterized their IR activities as: "(1) implementation of an IR (IMP), (2) planning & pilot testing an IR software package (PPT), (3) planning only (PO), or (4) no planning to date (NP)."

Of the 273 respondents, "28 (10%) have characterized their IR involvement as IMP, 42 (15%) as PPT, 65 (24%) as PO, and 138 (51%) as NP."

The top-ranked benefits of having an IR were: "capturing the intellectual capital of your institution," "better service to contributors," and "longtime preservation of your institution’s digital output." The bottom-ranked benefits were "reducing user dependence on your library’s print collection," "providing maximal access to the results of publicly funded research," and "an increase in citation counts to your institution’s intellectual output."

On the question of IR staffing, the survey found:

Generally, PPT and PO decision-makers envision the library sharing operational responsibility for an IR. Decision-makers from institutions with full-fledged operational IRs choose responses that show library staff bearing the burden of responsibility for the IR.

Of those with operational IRs who identified their IR software, the survey found that they were using: "(1) 9 for Dspace, (2) 5 for bePress, (3) 4 for ProQuest’s Digital Commons, (4) 2 for local solutions, and (5) 1 each for Ex Libris’ DigiTools and Virginia Tech’s ETD." Of those who were pilot testing software: "(1) 17 for DSpace, (2) 9 for OCLC’s ContentDM, (3) 5 for Fedora, (4) 3 each for bePress, DigiTool, ePrints, and Greenstone, (5) 2 each for Innovative Interfaces, Luna, and ETD, and (6) 1 each for Digital Commons, Encompass, a local solution, and Opus."

In terms of the number of documents in the IRs, by far the largest percentages were for fewer than 501 documents (IMP, 41%; PPT, 67%).

The preliminary results also cover other topics, such as content recruitment, investigative decision-making activities, IR costs, and IR system features.

It is interesting to see how these preliminary results compare to those of the ARL Institutional Repositories SPEC Kit. For example, when asked "What are the top three benefits you feel your IR provides?," the ARL survey respondents said:

  1. Enhance visibility and increase dissemination of institution’s scholarship: 68%
  2. Free, open, timely access to scholarship: 46%
  3. Preservation of and long-term access to institution’s scholarship: 36%
  4. Preservation and stewardship of digital content: 36%
  5. Collecting, organizing assets in a central location: 24%
  6. Educate faculty about copyright, open access, scholarly communication: 8%

Open Access Update Web Page: New Aggregate Feed

The Blogdigger feed was not updating properly, so it has been deleted.

I’ve created a MySyndicaat Feedbot feed to replace it. The aggregate feed provides recent postings for the current week from selected Weblogs and other sources (currently 14). The Open Access Update page’s feed has been switched to the MySyndicaat feed, and the number of possible postings has been increased to 50. The MySyndicaat Feedbot Web page is now available as well.

Although the MySyndicaat Feedbot is set to the shortest update cycle, keep in mind that there are bound to be some feed update delays.
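
For the curious, here is a rough sketch of what an aggregation service like this does behind the scenes. It is not how MySyndicaat is implemented; it simply uses the feedparser Python library and placeholder feed URLs to merge several source feeds, sort the entries by date, and keep the 50 most recent.

    # Illustration only: merge several feeds, newest first, capped at 50 entries.
    # The feed URLs are placeholders; this is not MySyndicaat's implementation.
    import time
    import feedparser

    SOURCE_FEEDS = [
        "http://weblog-one.example.org/feed",
        "http://weblog-two.example.org/feed",
        # ... and so on, up to the 14 sources mentioned above
    ]
    MAX_POSTINGS = 50

    entries = []
    for url in SOURCE_FEEDS:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            # feedparser exposes parsed dates as time structs when it can find them.
            published = entry.get("published_parsed") or entry.get("updated_parsed")
            if published:
                entries.append((published, feed.feed.get("title", url),
                                entry.get("title", ""), entry.get("link", "")))

    # Newest postings first, capped at the number the page displays.
    entries.sort(reverse=True)
    for published, source, title, link in entries[:MAX_POSTINGS]:
        print(time.strftime("%Y-%m-%d", published), source, "-", title, link)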