In "Documenting the Decline of (Print) Law Reviews," Doug Lederman presents sobering subscription statistics for law reviews that show steep declines from 1979-80 to 2007-08.
In "The Once and Future E-Book: On Reading in the Digital Age," John Siracusa, Ars Technica staff member and veteran of the e-book company Peanut Press, discusses whether the e-book or the print book will win in the end.
James Grimmelmann, Associate Professor at New York Law School, has made available "How to Improve the Google Book Search Settlement" in the Berkeley Electronic Press' Selected Works.
Here's the abstract:
The proposed settlement in the Google Book Search case should be approved with strings attached. The project will be immensely good for society, and the proposed deal is a fair one for Google, for authors, and for publishers. The public interest demands, however, that the settlement be modified first. It creates two new entities—the Books Rights Registry Leviathan and the Google Book Search Behemoth—with dangerously concentrated power over the publishing industry. Left unchecked, they could trample on consumers in any number of ways. We the public have a right to demand that those entities be subject to healthy, pro-competitive oversight, and so we should.
A recent report by Simba Information, Global Professional Publishing 2008-2009, says that English-language STM (scientific, technical, and medical) publishing grew by 3.8% in 2008, a growth rate 1.5 percentage points lower than the previous year's, to a total of about $16 billion.
Read more about it at "STM Publishing Market Grew 3.8% as Recession Loomed."
JISC has released "SWORD: Cutting Through the Red Tape to Populate Learning Materials Repositories."
Here's the abstract:
This in-depth article by Sarah Currier, the Product Manager for Intrallect Ltd., introduces SWORD (Simple Web-service Offering Repository Deposit) to those interested in sharing, reuse, repurposing and management of teaching and learning materials. The article provides an overview of the tool, technical details of how SWORD works and four case study vignettes, or SWORD Stories, on work that is already under way, which illustrate how SWORD streamlines the process of depositing learning materials into repositories.
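Since SWORD is a profile of AtomPub, a deposit is essentially an HTTP POST of a packaged item to a repository collection URI. The sketch below builds (but does not send) such a request; the endpoint URL and packaging identifier are illustrative assumptions, and real deployments differ in authentication and supported packaging formats.

```python
# A minimal sketch of a SWORD deposit request, assuming a hypothetical
# repository endpoint; real deployments differ in authentication,
# packaging formats, and supported extension headers.
import urllib.request

def build_sword_deposit(collection_uri, package_bytes, packaging, mimetype):
    """Build (but do not send) an HTTP POST depositing a packaged item."""
    req = urllib.request.Request(collection_uri, data=package_bytes, method="POST")
    req.add_header("Content-Type", mimetype)
    # SWORD extends AtomPub with headers such as X-Packaging, which
    # names the packaging format of the deposited content.
    req.add_header("X-Packaging", packaging)
    return req

req = build_sword_deposit(
    "https://repository.example.org/sword/deposit/learning-materials",  # hypothetical
    b"...zip bytes...",  # a content package, e.g. a zipped learning object
    "http://www.imsglobal.org/xsd/imscp_v1p1",  # IMS Content Packaging
    "application/zip",
)
```

Because the deposit interface is just HTTP plus a few headers, this is what lets SWORD clients be embedded in authoring tools and scripts rather than requiring manual repository upload forms.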
DigitalPreservationEurope has released two briefing papers: "Data Preservation, Reuse and (Open) Access in High-Energy Physics" and "Digital Preservation for Long-Term Environmental Monitoring."
The Georgetown University Library is recruiting a Digital Services Librarian.
The Digital Services Librarian:
- participates in the planning, implementation and maintenance of the library’s core digital services including the Archival Management System, Digital Repository, Integrated Library System, and the OpenURL Resolver;
- ensures the interoperability of distributed library systems containing digital projects, specialized collections, finding aids, licensed resources and educational and instructional resources; and
- communicates information on digital library activities to library staff and the university community.
The Digital Curation Centre has released a new briefing paper on "Database Archiving."
Here's an excerpt:
Database archiving is usually seen as a subset of data archiving. In a computational context, data archiving means to store electronic documents, data sets, multimedia files, and so on, for a period of time. The primary goal is to maintain the data in case it is later requested for some particular purpose. Complying with government regulations on data preservation is, for example, a main driver behind data archiving efforts. Database archiving focuses on archiving data that are maintained under the control of a database management system and structured under a database schema, e.g., a relational database.
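The core idea, copying rows out of a live table into an archive table that preserves the schema, can be illustrated with a small self-contained SQLite session. The table and column names here are invented for the example.

```python
# Illustrative sketch of database archiving: rows outside a retention
# window are copied, schema intact, into an archive table and removed
# from the live table. Table and column names are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, placed TEXT, total REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", [
    (1, "2005-03-14", 19.99),
    (2, "2008-11-02", 42.50),
    (3, "2009-01-20", 7.25),
])

# The archive table mirrors the live schema, so the archived data stays
# structured under the original database schema and remains queryable.
conn.execute("CREATE TABLE orders_archive AS SELECT * FROM orders WHERE 0")
cutoff = "2008-01-01"
conn.execute("INSERT INTO orders_archive SELECT * FROM orders WHERE placed < ?", (cutoff,))
conn.execute("DELETE FROM orders WHERE placed < ?", (cutoff,))
conn.commit()

live = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
archived = conn.execute("SELECT COUNT(*) FROM orders_archive").fetchone()[0]
print(live, archived)  # prints: 2 1
```

Keeping the schema with the archived rows is what distinguishes database archiving from simply dumping files: the archived data can still answer the "later requested for some particular purpose" queries the briefing paper describes.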
Here's an excerpt:
Over the last couple of years there has been substantial discussion about the licensing (or not) of (open) data and what "open" should mean. In this debate there are two distinct, but related, strands:
- Some people have argued that licensing is inappropriate (or unnecessary) for data.
- Others disagree about what "open" should mean. Specifically: does openness allow for attribution and share-alike "requirements," or should "open" data mean "public domain" data?
These points are related because arguments for the inappropriateness of licensing data usually go along these lines: data equates to facts over which no monopoly IP rights can or should be granted; as such, all data is automatically in the public domain, and hence there is nothing to license (and, worse, "licensing" amounts to an attempt to "enclose" the public domain).
However, even those who think that open data can/should only be public domain data still agree that it is reasonable and/or necessary to have some set of community "rules" or "norms" governing usage of data. Therefore, the question of what requirements should be allowed for "open" data is a common one, whatever one's stance on the public domain question.
Here's an excerpt:
If I had to predict some interesting things for the future in the area of access, I'd sum it up in one word: scale. Big, massive, scale. That's what digitization brings—access to far, far more cultural heritage materials than you could ever access before. If you're a scholar of, say, 19th century British literature, how does your work change when, for the first time, you have every book from your era at your fingertips? Far more books than you could ever read in your lifetime. How does this scale change things? How might quantitative tech-based methodologies like data mining help you to better understand a giant corpus? Help you zero in on issues?
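The kind of corpus-scale, quantitative method the excerpt gestures at can be reduced to a toy sketch: even simple term counting, applied uniformly across more text than anyone could read closely, surfaces patterns in a corpus. The two-sentence "corpus" here stands in for thousands of digitized volumes.

```python
# A toy illustration of corpus-scale quantitative analysis: uniform
# term counting over a "corpus" (here shrunk to two sentences) of the
# sort a scholar could never read in full at real scale.
import re
from collections import Counter

corpus = [
    "It was the best of times, it was the worst of times.",
    "The times are changed; the scale of reading is changed with them.",
]

counts = Counter(
    word
    for text in corpus
    for word in re.findall(r"[a-z]+", text.lower())  # crude tokenization
)
print(counts.most_common(3))
```

Real data-mining methodologies (topic models, collocation analysis, named-entity extraction) are far more sophisticated, but they share this shape: a computation swept uniformly over a corpus too large for close reading.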
The Digital Library Federation has published Future Directions in Metadata Remediation for Metadata Aggregators.
Here's an excerpt:
With support from The Gladys Krieble Delmas Foundation, the Digital Library Federation embarked on a project to inventory existing tools and services for metadata mapping, remediation, and enhancement. Once identified, tools were evaluated for general applicability across digital library and other cultural heritage environments.
The results of the research show that a handful of tools are usable as-is, but many tools need more work to be generally applicable in a variety of environments and significant development would be required to create a robust and well-defined set of metadata remediation services. Key points of note:
- Relatively few tools are available that can work directly on metadata records rather than full text, and those that are available need to be customized for each aggregator.
- Workable tools are available for date normalization, and also for normalizing and matching coordinates to U.S. geographic names.
- A statistical topic model program for subject clustering has been developed.
- Both named entity and topical keyword extraction remain problematic, with a fairly high percentage of errors.
- Authority files may be used to break up pre-coordinated Library of Congress subject strings into topical, name, geographic, temporal, and genre facets to improve searching.
- Mappings between different thesauri, which should allow for better search processing in aggregations containing multiple subject vocabularies, are still under development.
- Infrastructure for work collocation, appropriate to aggregators with significant published materials, is still underdeveloped and will probably need to wait for the widespread adoption of the new standard for resource description, Resource Description and Access (RDA).
- Unambiguous identifiers for entities such as names and works would be useful when the community infrastructure is developed, but are not yet supported by most metadata formats.
- Unambiguous, machine-actionable rights statements are also an area where the community infrastructure is still under development.
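The date normalization the report calls "workable" can be sketched briefly: aggregated records arrive with heterogeneous date strings, and a remediation tool maps them to a single ISO 8601 form. The handful of patterns handled below is illustrative only; production tools cover far more variants.

```python
# A small sketch of metadata date normalization: heterogeneous date
# strings from aggregated records are mapped to ISO 8601. The patterns
# handled here are illustrative, not a real tool's coverage.
import re
from datetime import datetime

def normalize_date(raw):
    """Return an ISO 8601 (YYYY-MM-DD or YYYY) form of a date string, or None."""
    raw = raw.strip()
    for fmt in ("%Y-%m-%d", "%m/%d/%Y", "%B %d, %Y", "%d %B %Y"):
        try:
            return datetime.strptime(raw, fmt).strftime("%Y-%m-%d")
        except ValueError:
            pass
    # Fall back to a bare year for strings like "c. 1879".
    m = re.search(r"\b(\d{4})\b", raw)
    return m.group(1) if m else None

print(normalize_date("March 4, 1861"))  # 1861-03-04
print(normalize_date("c. 1879"))        # 1879
```

As the report notes, even this "workable" task needs per-aggregator customization: each feed has its own conventions (circa dates, ranges, uncertain years) that a shared format list cannot fully anticipate.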
Michael Brewer and the ALA Office for Information Technology Policy have released the Section 108 Spinner, an interactive tool that provides information about Section 108 of the U.S. Copyright Code.
The New York Public Library is recruiting an Application Developer.
Here's an excerpt from the ad:
Under the general direction of the Managing Director of the Digital Labs, develops applications and provides maintenance and support for the ongoing deployment of digital library collection user interfaces and database applications. In collaboration with other digital library technical and managerial staff, develops software tools for digital library collection development and user access. Advises and supports technical developers on best coding practices and industry standards. Builds and supports software libraries for use by all developers in the Digital Library Program. Establishes software packages for middleware that can be centrally accessed to ensure security for database access. Supervises other technical developers as needed. Performs other duties as required.
The MIT Libraries are recruiting a Web Developer (two-year term appointment with the possibility of extension).
Here's an excerpt from the ad:
The MIT Libraries are seeking an experienced web developer to join the team designing, building and supporting its production systems and services, which include the libraries' website (http://libraries.mit.edu/), a meta-search portal to licensed content, the open source software digital library and archiving system called DSpace (http://dspace.org), data visualization tools from the SIMILE (http://simile.mit.edu) project, and several other systems. The developer will be responsible for all aspects of requirements gathering, technical analysis and development, testing and documenting customer-facing applications, working alone or as a member of a team. The position, which reports to the Head of Software Development in the Libraries, requires a knowledgeable, enthusiastic, and self-motivated individual with extensive experience in user interface design on the web and thorough grounding in HCI principles and practices.
Guy Pessach has made "Reciprocal Share-Alike Exemptions in Copyright Law" available on SSRN.
Here's an excerpt from the abstract:
This article introduces a novel element to copyright law's exemptions scheme, and particularly the fair use doctrine: a reciprocal share-alike requirement. I argue that beneficiaries of a copyright exemption should comply with a complementary set of ex-post reciprocal share-alike obligations that come on top of the exemption that they benefit from. Among other aspects, reciprocal share-alike obligations may trump contractual limitations and technological protection measures that are imposed by parties who relied on a copyright exemption in the course of their own use of copyrighted materials. Thus, fair use beneficiaries should be obliged to treat alike subsequent third parties who wish to access and use copyrighted materials—now located in their new "hosting institution"—for additional legitimate uses.
For example, if Google argues that its Book Project's scanning of entire copyrighted works is fair use, a similar exemption should apply to the benefit of future third parties who wish to use, for similar socially valuable purposes and under similar limitations, digital copies of books from Google's databases and applications. Google should also be prohibited from imposing technological protection measures and contractual obligations that revoke its reciprocal share-alike obligations.