A series of postings on the JISC Libraries of the Future Weblog document two debates: Revolution or Evolution: The JISC National E-Textbook Debate and From eLib to the Library of the Future.
Here are the postings in chronological order:
The California Digital Library has added a UC Libraries Mass Digitization Projects page to its Inside CDL Web site.
The Web site includes links to Frequently Asked Questions, contracts with digitization partners, and other information.
Of special interest in the FAQ are the questions "What rights to the digitized content does UC have in the projects; will access be limited in any way?" and "How will our patrons be able to access these texts, i.e. through MELVYL, or local catalogs, or a webpage, any search engine, or….?"
Project reports from the Andrew W. Mellon Foundation's 2008 Research in Information Technology retreat are now available.
Here are selected project briefing reports:
The Institute for the Future of the Book has released version 1.0 of Sophie, an open source tool for creating and reading multimedia networked documents.
Here's an excerpt from the announcement:
Sophie is software for writing and reading rich media documents in a networked environment.
Sophie’s goal is to open up the world of multimedia authoring to a wide range of people and institutions and in so doing to redefine the notion of a book or "academic paper" to include both rich media and mechanisms for reader feedback and conversation in dynamic margins.
Read more about Sophie at "Sophie Project Gets $1 Million from MacArthur Foundation," the Sophie documentation, and the Sophie tutorials.
The National Endowment for the Humanities and the Virginia Foundation for the Humanities have published Supporting Digital Scholarly Editions: A Report on the Conference of January 14, 2008, which was written by Ithaka staff.
Here's an excerpt from the "Introduction":
On January 14, 2008, a group of editors, representatives from university presses, and other stakeholders met to discuss the future of scholarly editions and how they might best be supported in the digital age. . . . .
The objectives of the meeting were:
- To identify services and tools that are critical for supporting digital documentary editions;
- To assess the need for a service provider to facilitate the production of these editions; and
- To articulate the key uncertainties involved in creating such a service provider, so that those can be further investigated.
This report documents the workshop, with the goal of providing a reference not only for participants, but also for others in the community who are concerned with the future of scholarly editions. It is divided into three sections that follow the course of the day itself:
- Developing a vision for the next generation scholarly edition
- How do we get there? Identifying needs and gaps
- Creating a service provider for scholarly editions
The New York University Division of Libraries and the Institute for the Future of the Book will work together to develop new digital scholarly communications tools.
Here's an excerpt from the press release:
"We are constantly watching the unfolding digital landscape for new paths we might want to take," said Carol A. Mandel, dean of the NYU Libraries. "IFB is a thought leader in the future of scholarly communication. We will work together to develop new software and new options that faculty can use to pubish, review, share, and collaborate at NYU and in the larger academic community."
For the past three years, IFB has been researching, prototyping, and sketching out models for how university presses could expand their publishing programs to include digital and networked formats. IFB is best known for its series of "networked book" experiments, which modify popular blogging technologies to create social book formats for the Web. Among these are: "Without Gods" by NYU’s Mitchell Stephens, "The Googlization of Everything" by Siva Vaidhyanathan, "Gamer Theory" by McKenzie Wark (the first fully networked digital monograph), and "Expressive Processing" by Noah Wardrip-Fruin, which is currently undergoing the first blog-based peer review.
Out of these projects, IFB developed CommentPress, an extension for the WordPress blog platform that enables paragraph-level commenting in the margins of a text. IFB is also at work on a powerful open source digital authoring environment called Sophie, the first version of which has just been released.
"We are thrilled to be working with NYU," said IFB Director Bob Stein. "We now have the benefit not only of the Libraries’ first-rate technical support, but also of working with world-class faculty, many of whom are leading innovators in digital scholarly communications."
In an auspicious start to their partnership, NYU Libraries and IFB have been awarded a start-up grant from the National Endowment for the Humanities (NEH) to design a set of networking tools that will serve as the membership system for MediaCommons, an all-electronic scholarly publishing network in the digital humanities that IFB has been instrumental in developing.
Under the agreement, three of IFB’s leaders will serve as visiting scholars at NYU. They are Bob Stein; Ben Vershbow, IFB editorial director; and researcher Dan Visel. They will work with NYU librarians; with the digital library team, headed by James Bullen; and with Monica McCormick, the Libraries’ program officer for digital scholarly publishing.
Read more about it at "Major News: IFB and NYU Libraries to Collaborate."
Gail Rebuck, Chairman and Chief Executive of The Random House Group, recently delivered the Stationers' Company Annual Lecture on "New Chapter or Last Page? Publishing Books in a Digital Age." Among other topics in this interesting, wide-ranging presentation, she discussed publishers' digital copyright concerns and Google Book Search, saying:
Piracy threatens to erode the copyright protection that is the cornerstone of our creative industries and their successful exports. Vigilant policing and joined-up legislation across all countries is essential. Education is vital, too, to show that these crimes are in no sense 'victimless,' however harmless they may seem. Indifference to copyright protection and copyright worth will prove highly destructive. . . .
For texts held in the public domain, the project [Google Book Search] seems entirely laudable, even exciting, since it brings an inconceivably rich library to anyone's desktop. But Google's initial willingness to capture copyrighted works without first asking permission was, to say the least, surprising. . . .
Google’s attitude towards copyright is merely a corporate expression of the individualist, counter-cultural attitudes of many of the Internet pioneers. As Stewart Brand, author of The Whole Earth Catalog, once declared, 'information wants to be free.'
Google has released the Google Book Search Book Viewability API.
Here's an excerpt from the API home page:
The Google Book Search Book Viewability API enables developers to:
- Link to Books in Google Book Search using ISBNs, LCCNs, and OCLC numbers
- Know whether Google Book Search has a specific title and what the viewability of that title is
- Generate links to a thumbnail of the cover of a book
- Generate links to an informational page about a book
- Generate links to a preview of a book
Read more about it at "Book Info Where You Need It, When You Need It."
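The API is designed to be called from JavaScript in a web page, but any HTTP client can exercise it. Here is a minimal Python sketch of an ISBN lookup; the jscmd=viewapi endpoint, bibkeys parameter, and JSONP response shape match the API as announced, though the exact fields returned for any given book should be treated as indicative rather than guaranteed.

```python
import json
import urllib.request

def viewability(bibkey):
    """Query the Book Viewability API for one bibkey, e.g. 'ISBN:0596000278'."""
    url = ("https://books.google.com/books?jscmd=viewapi"
           f"&bibkeys={bibkey}&callback=cb")
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The service wraps the JSON in the callback: cb({...}); strip that padding.
    payload = body[body.index("(") + 1 : body.rindex(")")]
    return json.loads(payload).get(bibkey, {})

info = viewability("ISBN:0596000278")
print(info.get("preview"))        # 'full', 'partial', or 'noview'
print(info.get("thumbnail_url"))  # cover thumbnail, when available
print(info.get("info_url"))       # the book's informational page
print(info.get("preview_url"))    # entry point to the preview, if any
```

The same call works with LCCN: or OCLC: prefixes in place of ISBN:, and multiple comma-separated bibkeys can be looked up in one request.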
France's Gallica 2 digital book project will go live after the Paris Book Fair, which ends on March 19th. Initially, it will contain 62,000 digital works, mostly from the Bibliothèque Nationale de France. Publishers will have the option to charge various kinds of access fees.
Read more about it at "France Launches Google Books Rival."
TRLN (Triangle Research Libraries Network) has announced that its member libraries (Duke University, North Carolina Central University, North Carolina State University, and The University of North Carolina at Chapel Hill) have joined the Open Content Alliance.
Here's an excerpt from "TRLN Member Libraries Join Open Content Alliance":
In the first year, UNC Chapel Hill and North Carolina State University will each convert 2,700 public domain books into high-resolution, downloadable, reusable digital files that can be indexed locally and by any web search engine. UNC Chapel Hill and NCSU will start by each hosting one state-of-the-art Scribe machine provided by the Internet Archive to scan the materials at a cost of just 10 cents per page. Each university library will focus on historic collection strengths, such as plant and animal sciences, engineering and physical science at NCSU and social sciences and humanities at UNC-Chapel Hill. Duke University will also contribute select content for digitization during the first year of the collaborative project.
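To put the 10-cents-per-page figure in perspective, here is a back-of-envelope calculation; the average volume length is my assumption, since the announcement does not give one.

```python
# Rough first-year scanning cost per library. The 2,700-book count and the
# $0.10/page rate are from the announcement; the page count is assumed.
books_per_year = 2_700
avg_pages_per_book = 300   # assumed average volume length
cost_per_page = 0.10       # dollars per page, per the announcement

pages = books_per_year * avg_pages_per_book
print(f"{pages:,} pages, about ${pages * cost_per_page:,.0f} per library")
# -> 810,000 pages, about $81,000 per library
```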
The wholesale e-book market continues to expand, up about 24% in 2007 with $31.7 million in sales.
Read more about it at "U.S. 2007 Wholesale E-Book Sales: $31.7 M, or 23.6 Percent over 2006—but Should They Have Been Still Higher?"
The University of Michigan Library has digitized and made available one million books from its collection.
Here's an excerpt from "One Million Digitized Books":
One million is a big number, but this is just the beginning. Michigan is on track to digitize its entire collection of over 7.5 million bound volumes by early in the next decade. So far we have only glimpsed the kinds of new and innovative uses that can be made of large bodies of digitized books, and it is thrilling to imagine what will be possible when nearly all the holdings of a leading research library are digitized and searchable from any computer in the world.
The Columbia University Libraries have announced that they will work with Microsoft to digitize a "large number of books" that are in the public domain.
Here's an excerpt from the press release:
Columbia University and Microsoft Corp. are collaborating on an initiative to digitize a large number of books from Columbia University Libraries and make them available to Internet users. With the support of the Open Content Alliance (OCA), publicly available print materials in Columbia Libraries will be scanned, digitized, and indexed to make them readily accessible through Live Search Books. . . .
Columbia University Libraries is playing a key role in book selection and in setting quality standards for the digitized materials. Microsoft will digitize selected portions of the Libraries’ great collections of American history, literature, and humanities works, with the specific areas to be decided mutually by Microsoft and Columbia during the early phase of the project.
Microsoft will give the Library high-quality digital images of all the materials, allowing the Library to provide worldwide access through its own digital library and to share the content with non-commercial academic initiatives and non-profit organizations.
Read more about it at "Columbia University Joins Microsoft Scan Plan."
In "Reading Bad News Between the Lines of Google Book Search" (Chronicle of Higher Education subscription required), Peter Brantley, Executive Director of the Digital Library Federation, discusses his concerns about Google Book Search.
Here's an excerpt:
Q. Why are you concerned about Google Book Search?
A. The quality of the book scans is not consistently high. The algorithm Google uses to return search results is opaque. Then there's the commercial aspect. Google will attempt to find ways to make money off the service.
PublicDomainReprints.org is offering an experimental service that allows users to convert about 1.7 million digital public domain books in the Internet Archive, Google Book Search, or the Universal Digital Library into printed books using the Lulu print-on-demand service.
Source: "Converting Google Book PDFs to Actual Books."
Both the Columbia University Libraries and Bavarian State Library have joined the Google Book Search Library Project.
Here are the announcements:
The University of Michigan Library has made over 100,000 metadata records from its MBooks collection available for OAI-PMH harvesting. The records are for digitized books in the public domain.
Here's an excerpt from the announcement:
The University of Michigan Library is pleased to announce that records from our MBooks collection are available for OAI harvesting. The MBooks collection consists of materials digitized by Google in partnership with the University of Michigan.
http://quod.lib.umich.edu/cgi/o/oai/oai?verb=Identify
Only records for MBooks available in the public domain are exposed. We have split these into sets containing public domain items according to U.S. copyright law, and public domain items worldwide. There are currently over 100,000 records available for harvesting. We anticipate having 1 million records available when the entire U-M collection has been digitized by Google.
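Since OAI-PMH is a plain HTTP-plus-XML protocol, harvesting these records takes very little code. What follows is a minimal sketch, assuming only the base URL given above and the oai_dc metadata format that every OAI-PMH repository must support; the specifiers for the two public domain sets described in the announcement are not hard-coded, since they would first be discovered with a verb=ListSets request.

```python
# Minimal OAI-PMH harvester for the MBooks records described above.
import urllib.request
import xml.etree.ElementTree as ET

BASE = "http://quod.lib.umich.edu/cgi/o/oai/oai"
OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base=BASE):
    """Yield (title, identifier) pairs, following OAI-PMH resumption tokens."""
    url = f"{base}?verb=ListRecords&metadataPrefix=oai_dc"
    while url:
        with urllib.request.urlopen(url) as resp:
            root = ET.parse(resp).getroot()
        for record in root.iter(f"{OAI}record"):
            title = record.find(f".//{DC}title")
            ident = record.find(f".//{DC}identifier")
            yield (title.text if title is not None else None,
                   ident.text if ident is not None else None)
        # Large result sets are paged; an absent or empty token means done.
        token = root.find(f".//{OAI}resumptionToken")
        url = (f"{base}?verb=ListRecords&resumptionToken={token.text}"
               if token is not None and token.text else None)

# Print a ten-record sample.
for i, (title, ident) in enumerate(harvest()):
    print(title, "->", ident)
    if i >= 9:
        break
```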
Thanks to a million dollar grant from the MacArthur Foundation, version 1.0 of Sophie, software that allows non-programmers to easily create multimedia documents, will be released in February 2008. Sophie runs on Mac, Windows, and Linux operating systems. An alpha version and several demo books created with Sophie are available.
Here's an excerpt from the project's home page:
Originally conceived as a standalone multimedia authoring tool, Sophie is now integrated into the Web 2.0 network in some very powerful ways:
- Sophie documents can be uploaded to a server and then streamed over the net.
- It's possible to embed remote audio, video, and graphic text files in the pages of Sophie documents, meaning that the actual document that needs to be distributed might be only a few hundred kilobytes even if the book itself comprises hundreds of megabytes or even a few gigabytes.
- Sophie now has the ability to browse OKI (Open Knowledge Initiative) repositories from within Sophie itself and then to embed objects from those repositories.
- We now have live dynamic text fields (similar to the Institute's CommentPress experiments on the web) such that a comment written in the margin is displayed immediately in every other copy of that book—anywhere in the world.
The University of Pittsburgh University Library System and the University of Pittsburgh University Press have established the University of Pittsburgh University Press Digital Editions, which offers free access to digitized versions of print books from the press.
Here's an excerpt from the press release:
The University of Pittsburgh’s University Library System (ULS) and University Press have formed a partnership to provide digital editions of press titles as part of the library system’s D-Scribe Digital Publishing Program. Thirty-nine books from the Pitt Latin American Series published by the University of Pittsburgh Press are now available online, freely accessible to scholars and students worldwide. Ultimately, most of the Press’ titles older than 2 years will be provided through this open access platform.
For the past decade, the University Library System has been building digital collections on the Web under its D-Scribe Digital Publishing Program, making available a wide array of historical documents, images and texts which can be browsed by collection and are fully searchable. The addition of the University of Pittsburgh Press Digital Editions collection marks the newest in an expanding number of digital collaborations between the University Library System and the University Press.
The D-Scribe Digital Publishing Program includes digitized materials drawn from Pitt collections and those of other libraries and cultural institutions in the region, pre-print repositories in several disciplines, the University’s mandatory electronic theses and dissertations program, and electronic journals. During the past eight years, sixty separate collections have been digitized and made freely accessible via the World Wide Web. Many of these projects have been carried out with content partners such as Pitt faculty members, other libraries and museums in the area, professional associations, and most recently, with the University of Pittsburgh Press with several professional journals and the new University of Pittsburgh Press Digital Editions. . . .
More titles will be added to the University of Pittsburgh Press Digital Editions each month until most of the current scholarly books published by the Press are available both in print and as digital editions. The collection will eventually include titles from the Pitt Series in Russian and East European Studies, the Pitt-Konstanz Series in the Philosophy and History of Science, the Pittsburgh Series in Composition, Literacy, and Culture, the Security Continuum: Global Politics in the Modern Age, the History of the Urban Environment, back issues of Cuban Studies, and numerous other scholarly titles in history, political science, philosophy, and cultural studies.
The free JISC Academic Database Assessment Tool allows users to compare journal title lists, journal database capabilities, and e-book database capabilities for selected e-resource products and systems. For example, the user can compare the functionality of ebrary with that of NetLibrary.
Here's an excerpt from the press release:
With so many products offering a huge diversity and wealth of information, it can be difficult for librarians to know what resources they should be investing in. The Academic Database Assessment Tool provides access to detailed information and title lists for major bibliographic and full-text databases. It also delivers key service information for database and e-book content platforms, enabling librarians to quickly compare and contrast key items to assist in the purchase decision process. These include: a list of titles included in each database; available search features; linking methods (e.g., full-text linking); metadata standards; and the methods of access provided to these resources (e.g., IP access, Athens, or Shibboleth).
Prompted by strong support from university librarians in the UK, a prototype version of this tool was launched at the end of 2006. Sponsorship from IBSS, Thomson Scientific, Elsevier, and ProQuest has allowed the tool to be developed beyond its beta stage, and it remains freely available.
As the information for this tool has been provided directly by the relevant content suppliers and publishers, librarians will have the opportunity to access the latest information on the resources they already subscribe to. Librarians can also subscribe to an email alerting service that notifies them when suppliers update their listings.
Amazon has launched Kindle, its e-book reader.
Here's a selection of articles and postings:
In "On Being in Bed with Google," Paul N. Courant, University Librarian and Dean of Libraries at the University of Michigan, vigorously rebuts arguments against research libraries participating in the Google Books Library Project.
Here's an excerpt:
Since 2005, Siva Vaidhyanathan has been making and refining the argument that libraries should be digitizing their collections independently, without corporate financing or participation, and that those who don’t are failing to uphold their responsibility to the public. "Libraries should not be relinquishing their core duties to private corporations for the sake of expediency."
"Expediency" is a bit of a dirty word. Vaidhyanathan’s phrase suggests that good people don’t do things simply because they are "expedient." But I view large-scale digitization as expeditious. We have a generation of students who will not find valuable scholarly works unless they can find them electronically. At the rate that OCA is digitizing things (and I say the more the merrier and the faster the better) that generation will be dandling great-grandchildren on its knees before these great collections can be found electronically. At Michigan, the entire collection of bound print will be searchable, by anyone in the world, about when children born today start kindergarten.
In his Information Today article "Progress Report: The British Library and Microsoft Digitization Partnership," Jim Ashling provides an update on the progress that the British Library and Microsoft have made in their project to digitize about 100,000 books for access in Live Search Books.
Here's an excerpt from the article:
Unlike previous BL digitization projects where material had been selected on an item-by-item basis, the sheer size of this project made such selectivity impossible. Instead, the focus is on English-language material, collected by the BL during the 19th century. . . .
Scanning produces high-resolution images (300 dpi) that are then transferred to a suite of 12 computers for OCR (optical character recognition) conversion. The scanners, which run 24/7, are specially tuned to deal with the spelling variations and old-fashioned typefaces used in the 1800s. The process creates multiple versions including PDFs and OCR text for display in the online services, as well as an open XML file for long-term storage and potential conversion to any new formats that may become future standards. In all, the data will amount to 30 to 40 terabytes. . . .
Obviously, then, an issue exists here for a collection of 19th-century literature when some authors may have lived beyond the late 1930s [British/EU law gives authors a copyright term of life plus 70 years]. An estimated 40 percent of the titles are also orphan works. Those two issues mean that item-by-item copyright checking would be an unmanageable task. Estimates for the total time required to check on the copyright issues involved vary from a couple of decades to a couple of hundred years. The BL’s approach is to use two databases of authors to identify those who were still living in 1936 and to remove their work from the collection before scanning. That, coupled with a wide publicity to encourage any rights holders to step forward, may solve the problem.
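The 30-to-40-terabyte estimate is easy to sanity-check. Assuming an average volume of roughly 300 pages (an assumption; the article gives no per-volume figure), the numbers work out to about a megabyte per page, which is plausible for 300 dpi page images stored alongside the PDF, OCR text, and XML derivatives described above:

```python
# Sanity check on the article's 30-40 TB total for ~100,000 books.
books = 100_000            # from the article
avg_pages = 300            # assumed average volume length
total_tb = 35              # midpoint of the 30-40 TB estimate

pages = books * avg_pages
mb_per_page = total_tb * 1_000_000 / pages   # 1 TB = 1,000,000 MB (decimal)
print(f"{pages:,} pages -> about {mb_per_page:.1f} MB per page")
# -> 30,000,000 pages -> about 1.2 MB per page
```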
Boston Public Library has made public its digitization contract with the Open Content Alliance.
Some of the most interesting provisions include:

- the intent of the Internet Archive to provide perpetual free and open access to the works;
- the digitization cost arrangements (BPL pays for transport and provides bibliographic metadata; the Internet Archive pays for digitization-related costs);
- the specification of file formats (e.g., JPEG 2000, color PDF, and various XML files);
- the provision of digital copies to BPL (copies are available immediately after digitization for BPL to download via FTP or HTTP within 3 months); and
- the use of copies (any use by either party as long as provenance metadata and/or bookplate data is not removed).
The Yale University Library and Microsoft will work together to digitize 100,000 English-language out-of-copyright books, which will be made available via Microsoft’s Live Search Books.
Here’s an excerpt from the press release:
The Library and Microsoft have selected Kirtas Technologies to carry out the process based on their proven excellence and state-of-the-art equipment. The Library has successfully worked with Kirtas previously, and the company will establish a digitization center in the New Haven area. . . .
The project will maintain rigorous standards established by the Yale Library and Microsoft for the quality and usability of the digital content, and for the safe and careful handling of the physical books. Yale and Microsoft will work together to identify which of the approximately 13 million volumes held by Yale’s 22 libraries will be digitized. Books selected for digitization will remain available for use by students and researchers in their physical form. Digital copies of the books will also be preserved by the Yale Library for use in future academic initiatives and in collaborative scholarly ventures.