Archive for the 'Open Access' Category

2006 PACS Review Use Statistics

Posted in Announcements, E-Journals, Open Access, Scholarly Communication on January 21st, 2007 by Charles W. Bailey, Jr.

The Public-Access Computer Systems Review (PACS Review) was a freely available e-journal that I founded in 1989. It allowed authors to retain their copyrights, and it had a liberal copyright policy for noncommercial use. Its last issue was published in 1998.

In 2006, there were 763,228 successful requests for PACS Review files (an average of 2,091 per day) and 751,264 successful requests for pages (an average of 2,058 per day). (A request is for any type of file; a page request is for a content file, such as an HTML, PDF, or Word file.) These requests came from 41,865 distinct host computers.

The requests came from 134 Internet domains. Leaving aside requests from unresolved numerical addresses, the top 15 domains were: .com (Commercial), .net (Networks), .edu (USA Higher Education), .cz (Czech Republic), .jp (Japan), .ca (Canada), .uk (United Kingdom), .au (Australia), .de (Germany), .nl (Netherlands), .org (Non Profit Making Organizations), .in (India), .my (Malaysia), .it (Italy), and .mx (Mexico). At the bottom were domains such as .ms (Montserrat), .fm (Micronesia), .nu (Niue), .ad (Andorra), and .az (Azerbaijan).
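
For readers curious how such figures are computed, here is a minimal sketch in Python. It assumes a web server log in common log format; the log file name and the set of "page" extensions are illustrative assumptions, not the actual PACS Review configuration.

    import re
    from collections import Counter

    # Assumed set of "content file" (page) extensions; a real log analyzer
    # (e.g., analog) may classify pages differently.
    PAGE_EXTENSIONS = (".html", ".htm", ".pdf", ".doc")

    # Common log format: host ident user [date] "METHOD /path PROTO" status bytes
    LOG_LINE = re.compile(r'^(\S+) \S+ \S+ \[[^\]]+\] "\S+ (\S+)[^"]*" (\d{3})')

    requests = pages = 0
    hosts = set()
    domains = Counter()

    with open("access.log") as log:  # hypothetical log file name
        for line in log:
            match = LOG_LINE.match(line)
            if not match:
                continue
            host, path, status = match.groups()
            if status[0] not in "23":  # keep only successful requests
                continue
            requests += 1
            hosts.add(host)
            if path.lower().endswith(PAGE_EXTENSIONS):
                pages += 1
            # Tally top-level domains, skipping unresolved numerical addresses.
            if not host.replace(".", "").isdigit():
                domains[host.rsplit(".", 1)[-1]] += 1

    print(requests, round(requests / 365), pages, len(hosts))
    print(domains.most_common(15))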

Prior to 2006, there had been approximately 3.5 million successful requests for PACS Review files.

This is the last time that use statistics will be reported for the PACS Review.


Fedora 2.2 Released

Posted in Fedora, Institutional Repositories, Open Access, Open Source Software, Scholarly Communication on January 20th, 2007 by Charles W. Bailey, Jr.

The Fedora Project has released version 2.2 of Fedora.

From the announcement:

This is a significant release of Fedora that includes a complete repackaging of the Fedora source and binary distribution so that Fedora can now be installed as a standalone web application (.war) in any web container. This is a first step in positioning Fedora to fit within a standard "enterprise system" environment. A new installer application makes it easy to setup and run Fedora. Fedora now uses Servlet Filters for authentication. To support digital object integrity, the Fedora repository can now be configured to calculate and store checksums for datastream content. This can be done globally, or on selected datastreams. The Fedora API also provides the ability to check content integrity based on checksums. The RDF-based Resource Index has been tuned for better performance. Also, a new high-performing triplestore, backed by Postgres, has been developed that can be plugged into the Resource Index. Fedora contains many other enhancements and bug fixes.
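
Checksum-based integrity checking, as mentioned in the announcement, is straightforward to illustrate. Here is a minimal sketch of the general technique in Python; it is not Fedora's actual API, and the function names and MD5 default are my assumptions:

    import hashlib

    def datastream_checksum(path, algorithm="md5"):
        """Compute a checksum for a datastream's content file."""
        digest = hashlib.new(algorithm)
        with open(path, "rb") as stream:
            for chunk in iter(lambda: stream.read(8192), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def verify_datastream(path, stored_checksum, algorithm="md5"):
        """Recompute the checksum and compare it to the stored value."""
        return datastream_checksum(path, algorithm) == stored_checksum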


ScientificCommons.org: Access to Over 13 Million Digital Documents

Posted in E-Prints, OAI-PMH, Open Access, Scholarly Communication on January 19th, 2007 by Charles W. Bailey, Jr.

ScientificCommons.org is an initiative of the Institute for Media and Communications Management at the University of St. Gallen. It indexes both metadata and full text from digital repositories worldwide, using OAI-PMH to identify relevant documents. The full-text documents are in PDF, PowerPoint, RTF, Microsoft Word, and PostScript formats. After being retrieved from their original repositories, the documents are cached locally at ScientificCommons.org. To date, it has indexed about 13 million documents from over 800 repositories.
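
Since OAI-PMH does the heavy lifting for services like this, a minimal harvesting sketch may be helpful. It uses only the protocol's standard ListRecords verb and the oai_dc metadata format; the repository URL is a placeholder:

    import urllib.request
    import xml.etree.ElementTree as ET

    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    def harvest(base_url):
        """Yield (identifier, title) pairs for every record in a repository."""
        url = base_url + "?verb=ListRecords&metadataPrefix=oai_dc"
        while url:
            tree = ET.parse(urllib.request.urlopen(url))
            for record in tree.iter(OAI + "record"):
                identifier = record.findtext(".//" + OAI + "identifier")
                title = record.findtext(".//" + DC + "title", default="")
                yield identifier, title
            # Follow the resumption token until the repository signals the end.
            token = tree.findtext(".//" + OAI + "resumptionToken")
            url = (base_url + "?verb=ListRecords&resumptionToken=" + token
                   if token else None)

    # Example with a placeholder URL:
    # for identifier, title in harvest("http://repository.example.org/oai"):
    #     print(identifier, title)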

Here are some additional features from the About ScientificCommons.org page:

Identification of authors across institutions and archives: ScientificCommons.org identifies authors and assigns them their scientific publications across various archives. Additionally the social relations between the authors will be extracted and displayed. . . .

Semantic combination of scientific information: ScientificCommons.org structures and combines the scientific data to knowledge areas with Ontology’s. Lexical and statistical methods are used to identify, extract and analyze keywords. Based on this processes ScientificCommons.org classifies the scientific data and uses it e.g. for navigational and weighting purposes.

Personalization services: ScientificCommons.org offers the researchers the possibilities to inform themselves about new publications via our RSS Feed service. They can customize the RSS Feed to a special discipline or even to personalized list of keywords. Furthermore ScientificCommons.org will provide an upload service. Every researcher can upload his publication directly to ScientificCommons.org and assign already existing publications at ScientificCommons.org to his own researcher profile.


DLF/NSDL OAI Best Practices Wiki

Posted in Metadata, OAI-PMH, Open Access on January 17th, 2007 by Charles W. Bailey, Jr.

The Digital Library Federation and NSDL OAI and Shareable Metadata Best Practices Working Group’s OAI Best Practices Wiki has a number of resources relevant to the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) and related metadata issues.

The Tools and Strategies for Using and Enhancing/Extending the OAI Protocol section is of particular interest. It includes information about OAI-PMH data provider and service provider registries, software solutions and packages, and static repositories and gateways; metadata management and added value tools as well as OAI and character validation tools; and using SRU/W, collection description schema, and NSDL safe transforms.
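
As a small illustration of the character-validation problem these tools address, here is a minimal sketch that checks a saved OAI-PMH response for invalid UTF-8, control characters that XML 1.0 forbids, and well-formedness (the file name is a placeholder):

    import re
    import xml.etree.ElementTree as ET

    # Control characters forbidden by XML 1.0 (tab, newline, and CR are legal).
    ILLEGAL_XML_CHARS = re.compile(r"[\x00-\x08\x0B\x0C\x0E-\x1F]")

    def validate_oai_response(path):
        """Report common character and well-formedness problems in a response."""
        with open(path, encoding="utf-8", errors="replace") as f:
            data = f.read()
        problems = []
        if "\uFFFD" in data:
            problems.append("bytes that are not valid UTF-8")
        if ILLEGAL_XML_CHARS.search(data):
            problems.append("control characters illegal in XML 1.0")
        try:
            ET.fromstring(data)
        except ET.ParseError as error:
            problems.append("not well-formed: %s" % error)
        return problems or ["no problems found"]

    # print(validate_oai_response("oai_response.xml"))  # placeholder file name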


Notre Dame Institutional Digital Repository Phase I Final Report

Posted in DSpace, E-Prints, Electronic Theses and Dissertations (ETDs), Institutional Repositories, Open Access, Scholarly Communication on January 16th, 2007 by Charles W. Bailey, Jr.

The University of Notre Dame Libraries have issued a report about their year-long institutional repository pilot project. There is an abbreviated HTML version and a complete PDF version.

From the Executive Summary:

Here is the briefest of summaries regarding what we did, what we learned, and where we think future directions should go:

  1. What we did—In a nutshell we established relationships with a number of content groups across campus: the Kellogg Institute, the Institute for Latino Studies, Art History, Electrical Engineering, Computer Science, Life Science, the Nanovic Institute, the Kaneb Center, the School of Architecture, FTT (Film, Television, and Theater), the Gigot Center for Entrepreneurial Studies, the Institute for Scholarship in the Liberal Arts, the Graduate School, the University Intellectual Property Committee, the Provost’s Office, and General Counsel. Next, we collected content from many of these groups, "cataloged" it, and saved it into three different computer systems: DigiTool, ETD-db, and DSpace. Finally, we aggregated this content into a centralized cache to provide enhanced browsing, searching, and syndication services against the content.
  2. What we learned—We essentially learned four things: 1) metadata matters, 2) preservation now, not later, 3) the IDR requires dedicated people with specific skills, 4) copyright raises the largest number of questions regarding the fulfillment of the goals of the IDR.
  3. Where we are leaning in regards to recommendations—The recommendations take the form of a "Chinese menu" of options, and the options are grouped into "meals." We recommend the IDR continue and include: 1) continuing to do the Electronic Theses & Dissertations, 2) writing and implementing metadata and preservation policies and procedures, 3) taking the Excellent Undergraduate Research to the next level, and 4) continuing to implement DigiTool. There are quite a number of other options, but they may be deemed too expensive to implement.

digitalculturebooks

Posted in Digital Presses, Open Access, Publishing, Scholarly Communication on January 12th, 2007 by Charles W. Bailey, Jr.

The University of Michigan Press and the Scholarly Publishing Office of the University of Michigan Library, working together as the Michigan Digital Publishing Initiative, have established digitalculturebooks, which offers free access to digital versions of its published works (print works are fee-based). The imprint focuses on "the social, cultural, and political impact of new media."

The objectives of the imprint are to:

  • develop an open and participatory publishing model that adheres to the highest scholarly standards of review and documentation;
  • study the economics of Open Access publishing;
  • collect data about how reading habits and preferences vary across communities and genres;
  • build community around our content by fostering new modes of collaboration in which the traditional relationship between reader and writer breaks down in creative and productive ways.

Library Journal Academic Newswire notes in its article about digitalculturebooks:

While press officials use the term "open access," the venture is actually more "free access" than open at this stage. Open access typically does not require permission for reuse, only a proper attribution. UM director Phil Pochoda told the LJ Academic Newswire that, while no final decision has been made, the press’s "inclination is to ask authors to request the most restrictive Creative Commons license" for their projects. That license, he noted, requires attribution and would not permit commercial use, such as using it in a subsequent for-sale product, without permission. The Digital Culture Books web site currently reads that "permission must be received for any subsequent distribution."

The imprint’s first publication is The Best of Technology Writing 2006.

(Prior postings about digital presses.)


Has Authorama.com "Set Free" 100 Public Domain Books from Google Book Search?

Posted in Copyright, E-Books, Open Access, Publishing, Scholarly Communication on January 10th, 2007 by Charles W. Bailey, Jr.

In a posting on Google Blogoscoped, Philipp Lenssen has announced that he has put up 100 public domain books from Google Book Search on Authorama.

Regarding his action, Lenssen says:

In other words, Google imposes restrictions on these books which the public domain does not impose*. I’m no lawyer, and maybe Google can print whatever guidelines they want onto those books. . . and being no lawyer, most people won’t know if the guidelines are a polite request, or legally enforceable terms**. But as a proof of concept—the concept of the public domain—I’ve now ‘set free’ 100 books I downloaded from Google Book Search by republishing them on my public domain books site, Authorama. I’m not doing this out of disrespect for the Google Books program (which I think is cool, and I’ll credit Google on Authorama) but out of respect for the public domain (which I think is even cooler).

Since Lenssen has retained Google’s usage guidelines in the e-books, it’s unclear how they have been "set free," in spite of the following statement on Authorama’s Books from Google Book Search page:

The following books were downloaded from Google Book Search and are made available here as public domain. You can download, republish, mix and mash these books, for private or public, commercial or non-commercial use.

Leaving aside the above statement, Lenssen’s action appears to violate the following Google usage guideline, where Google asks that users:

Make non-commercial use of the files We designed Google Book Search for use by individuals, and we request that you use these files for personal, non-commercial purposes.

However, in the above guideline, Google uses the word "request," which suggests voluntary, rather than mandatory, compliance. Google also requests attribution and watermark retention.

Maintain attribution The Google ‘watermark’ you see on each file is essential for informing people about this project and helping them find additional materials through Google Book Search. Please do not remove it.

Note the use of the word "please."

It’s not clear how to determine if Google’s watermark remains in the Authorama files, but, given the retention of the usage guidelines, it likely does.

So, do Google’s public domain books really need to be "set free"? In its usage guidelines, Google appears to make compliance requests, not compliance requirements. Are such requests binding or not? If so, the language could be clearer. For example, here’s a possible rewording:

Make non-commercial use of the files Google Book Search is for individual use only, and its files can only be used for personal, non-commercial purposes. All other use is prohibited.


Is OAI-PMH Too Labor-Intensive?

Posted in Metadata, OAI-PMH, Open Access on January 9th, 2007 by Charles W. Bailey, Jr.

OAI-PMH permits metadata harvesting from disciplinary archives, institutional repositories, and other digital archives, and the harvested metadata can then be used to build specialized search services. OAI-PMH is a key technology for the open access movement, but does it require too much human intervention?

An interesting message on JISC-REPOSITORIES by Santy Chumbe, Technical Officer of the PerX project, suggests that it may. He says:

We have learned that in despite of its relative simplicity, an OAI-PMH service can be harder to implement and maintain than expected. We have spent a lot of effort harvesting, normalising and maintaining metadata obtained from OAI data providers. In particular the issue of metadata quality is an important factor here. A summary of our experiences dealing with OAI-PMH can be found at http://eprints.rclis.org/archive/00006394. . . . A final report outlining the maintenance issues involved in the project is in progress but the experience gained suggests that successful ongoing maintenance of OAI targets would require a mixture of automated and manual approaches and that the level of ongoing maintenance is high.
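
Chumbe's point about metadata quality is easy to illustrate. Even a field as simple as dc:date arrives in many shapes, and a harvester must normalize it or flag it for a human. Here is a minimal sketch; the input formats are assumptions about what real data providers emit:

    from datetime import datetime

    # Formats actually seen in the wild vary widely; these are assumptions.
    KNOWN_FORMATS = ("%Y-%m-%d", "%Y-%m-%dT%H:%M:%SZ", "%d %B %Y", "%B %Y", "%Y")

    def normalize_dc_date(value):
        """Reduce a harvested dc:date string to an ISO 8601 date if possible."""
        value = value.strip()
        for fmt in KNOWN_FORMATS:
            try:
                return datetime.strptime(value, fmt).date().isoformat()
            except ValueError:
                continue
        return None  # flag for the manual review Chumbe mentions

    # normalize_dc_date("21 December 2006") -> "2006-12-21"
    # normalize_dc_date("circa 1998")       -> None (needs a human)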


Will Self-Archiving Cause Libraries to Cancel Journal Subscriptions?

Posted in E-Journals, E-Prints, Institutional Repositories, Libraries, Open Access, Publishing, Scholarly Communication on December 21st, 2006 by Charles W. Bailey, Jr.

There has been a great deal of discussion of late about the impact of self-archiving on library journal subscriptions. Obviously, this is of great interest to journal publishers, who do not want to wake up one morning, rub the sleep from their eyes, and find out over their first cup of coffee at work that libraries have canceled subscriptions en masse because a "tipping point" has been reached. Likewise, open access advocates do not want journal publishers to panic at the prospect of cancellations and try to turn back the clock on liberal self-archiving policies. So, this is not a scenario that anyone wants, except those who would like to simply scrap the existing journal publishing system and start over with a digital tabula rasa.

So, deep breath: Is the end near?

This question hinges on another: Will libraries accept any substitute for a journal that does not provide access to the full, edited, and peer-reviewed contents of that journal?

If the answer is "yes," publishers had better get out their survival kits and hunker down for the digital nuclear winter or else change their business practices to embrace the new reality. Attempts to fight back by rolling back the clock may just make the situation worse: the genie is out of the bottle.

If the answer is "no," preprints pose no threat, but postprints may under some difficult-to-attain circumstances.

It is unlikely that a critical mass of author-created postprints (i.e., where the author makes the preprint look like the postprint) will ever emerge. Authors would have to be extremely motivated for this to occur. If you don’t believe me, take a Word file that you submitted to a publisher and make it look exactly like the published article (don’t forget the pagination, because that might be a sticking point for libraries). That leaves publisher postprints (generally PDF files).

For the worst to happen, every author of every paper published in a journal would have to self-archive the final publisher PDF file (or the publishers themselves would have to do it for the authors under mandates).

But would that be enough? Wouldn’t the permanence and stability of the digital repositories housing these postprints be of significant concern to libraries? If such repositories could not be trusted, then libraries would have to attempt to archive the postprints in question themselves; however, since postprints are not by default under copyright terms that would allow this to happen (e.g., they are not under Creative Commons Licenses), libraries may be barred from doing so. There are other issues as well: journal and issue browsing capabilities, the value-added services of indexing and abstracting services, and so on. For now, let’s wave our hands briskly and say that these are all tractable issues.

If the above problems were overcome, a significant one remains: publishers add value in many ways to scholarly articles. Would libraries let the existing system of journal publishing collapse because of self-archiving without a viable substitute for these value-added functions being in place?

There have been proposals for and experiments with overlay journals for some time, as well as other ideas for new quality control strategies, but, to date, none have caught fire. Old-fashioned peer review, copy editing and fact checking, and publisher-based journal design and production still reign, even among the vast majority of e-journals that are not published by conventional publishers. In the Internet age, nothing technological stops tens of thousands of new e-journals using open source journal management software from blooming, but they haven’t bloomed so far, have they? Rather, if you use a liberal definition of open access, there are about 2,500 OA journals—a significant achievement; however, there are questions about the longevity of such journals if they are published by small non-conventional publishers such as groups of scholars (e.g., see "Free Electronic Refereed Journals: Getting Past the Arc of Enthusiasm"). Let’s face it—producing a journal is a lot of work, even a small one that publishes fewer than a hundred papers a year.

Bottom line: a perfect storm is not impossible, but it is unlikely.


Journal 2.0: PLoS ONE Beta Goes Live

Posted in E-Journals, Open Access, Publishing, Scholarly Communication on December 21st, 2006 by Charles W. Bailey, Jr.

The Public Library of Science has released a beta version of its innovative PLoS ONE journal.

Why innovative? First, it’s a multidisciplinary scientific journal, with published articles covering subjects that range from Biochemistry to Virology. Second, it’s a participative journal that allows registered users to annotate and initiate discussions about articles. Open commentary and peer review have been implemented before in some e-journals (e.g., see JIME: An Interactive Journal for Interactive Media), but PLoS ONE is the most visible of these efforts and, given PLoS’s reputation for excellence, it lends credibility to a concept that has yet to catch fire in the journal publishing world. A nice feature is the “Most Annotated” tab on the home page, which highlights articles that have garnered reader commentary. Third, it’s an open access journal in the full sense of the term, with all articles under the least restrictive Creative Commons license, the Creative Commons Attribution License.

The beta site is a bit slow, probably due to significant interest, so expect some initial browsing delays.

Congratulations to PLoS on PLoS ONE. It’s a journal worth keeping an eye on.


Certifying Digital Repositories: DINI Draft

Posted in Disciplinary Archives, Institutional Repositories, Open Access on December 20th, 2006 by Charles W. Bailey, Jr.

The Electronic Publishing Working Group of the Deutsche Initiative für Netzwerkinformation (DINI) has released an English draft of its DINI-Certificate Document and Publication Services 2007.

It outlines criteria for repository author support; indexing; legal aspects; long-term availability; logs and statistics; policies; security, authenticity and data integrity; and service visibility. It also provides examples.


Test Driving the CrossRef Simple-Text Query Tool for Finding DOIs

Posted in Metadata, Open Access on December 20th, 2006 by Charles W. Bailey, Jr.

CrossRef has made a DOI finding tool publicly available. It’s called Simple-Text Query. You can get the details at Barbara Quint’s article "Linking Up Bibliographies: DOI Harvesting Tool Launched by CrossRef."

What caught my eye in Quint’s article was this: "Users can enter whole bibliographies with citations in almost any bibliographic format and receive back the matching Digital Object Identifiers (DOIs) for these references to insert into their final bibliographies."

Well, not exactly. I cut and pasted just the "9 Repositories, E-Prints, and OAI" section of the Scholarly Electronic Publishing Bibliography into Simple-Text Query. Result: an error message. I had exceeded the 15,360-character limit. So, suggestion one: state the limit on the Simple-Text Query page.

So then I counted out 15,360 characters of the section and pasted that. Just kidding. I pasted the first six references. Result?

Alexander, Martha Latika, and J. N. Gautam. “Institutional Repositories for Scholarly Communication: Indian Initiatives.” Serials: The Journal for the Serials Community 19, no. 3 (2006): 195-201.
No doi match found.

Allard, Suzie, Thura R. Mack, and Melanie Feltner-Reichert. “The Librarian’s Role in Institutional Repositories: A Content Analysis of the Literature.” Reference Services Review 33, no. 3 (2005): 325-336.
doi:10.1108/00907320510611357
http://dx.doi.org/10.1108/00907320510611357

Allen, James. “Interdisciplinary Differences in Attitudes towards Deposit in Institutional Repositories.” Manchester Metropolitan University, 2005.
http://eprints.rclis.org/archive/00005180/
Reference not parsed

Allinson, Julie, and Roddy MacLeod. “Building an Information Infrastructure in the UK.” Research Information (October/November 2006).
http://www.researchinformation.info/rioctnov06digital.html
Reference not parsed

Anderson, Greg, Rebecca Lasher, and Vicky Reich. “The Computer Science Technical Report (CS-TR) Project: A Pioneering Digital Library Project Viewed from a Library Perspective.” The Public-Access Computer Systems Review 7, no. 2 (1996): 6-26.
http://epress.lib.uh.edu/pr/v7/n2/ande7n2.html
Reference not parsed

Andreoni, Antonella, Maria Bruna Baldacci, Stefania Biagioni, Carlo Carlesi, Donatella Castelli, Pasquale Pagano, Carol Peters, and Serena Pisani. “The ERCIM Technical Reference Digital Library: Meeting the Requirements of a European Community within an International Federation.” D-Lib Magazine 5 (December 1999).
http://www.dlib.org/dlib/december99/peters/12peters.html
Reference not parsed

Hmmm. According to Quint’s article:

I asked Brand if CrossRef could reach open access material. She assured me it could, but it clearly did not give the free and sometimes underdefined material any preference.

It looks like the open access capabilities may need some fine-tuning. D-Lib Magazine and The Public-Access Computer Systems Review are not exactly obscure e-journals. Since my references are formatted in the Chicago style by EndNote, I don’t think that the reference format is the issue. In fact, Quint’s article says: "The Simple-Text Query can retrieve DOIs for journal articles, books, and chapters in any reference citation style, although it works best with standard styles."

Conclusion: I’ll play with it some more, but Simple-Text Query may be best suited to conventional, mainstream journal references.
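
For those who would rather experiment programmatically, CrossRef also offers a public REST API at api.crossref.org that accepts free-text bibliographic queries (it postdates this post). Here is a minimal sketch; match quality varies, just as it does with Simple-Text Query:

    import json
    import urllib.parse
    import urllib.request

    def find_doi(citation):
        """Ask the CrossRef REST API for its best DOI match for one reference."""
        url = ("https://api.crossref.org/works?rows=1&query.bibliographic="
               + urllib.parse.quote(citation))
        with urllib.request.urlopen(url) as response:
            items = json.load(response)["message"]["items"]
        return items[0]["DOI"] if items else None

    # Example, using one of the references above:
    print(find_doi(
        'Allard, Suzie, Thura R. Mack, and Melanie Feltner-Reichert. '
        '"The Librarian\'s Role in Institutional Repositories." '
        'Reference Services Review 33, no. 3 (2005): 325-336.'
    ))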


