Lund University Journal Info Database Now Available

Lund University Libraries, creator of the Directory of Open Access Journals, has released a new database called Journal Info, which provides authors with information about 18,000 journals selected from 30 major databases. The National Library of Sweden supports Journal Info, which is available under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported license.

Here’s an excerpt from the FAQ page:

The purpose [of the service] is to provide an aid for the researcher in the selection of [a] journal for publication. The publication market has continuously grown more and more complex. It is important to weigh in facts like scope and quality, but more recently also information about reader availability and library cost. The Lund University Libraries have made an attempt to merge all these items into one tool, giving the researcher the power to make informed choices.

Journal Info records provide basic information about the journal (e.g., journal homepage), "reader accessibility" information (e.g., open access status), and quality information (e.g., where it is indexed).

DSpace How-To Guide

Tim Donohue, Scott Phillips, and Dorothea Salo have published DSpace How-To Guide: Tips and Tricks for Managing Common DSpace Chores (Now Serving DSpace 1.4.2 and Manakin 1.1).

This 55-page booklet, which is under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License, will be a welcome addition to the virtual bookshelves of institutional repository managers struggling with the mysteries of DSpace.

DRAMA Releases Fedora Front-End Beta for Authentication/Full-Text Search

DRAMA (Digital Repository Authorization Middleware Architecture) has released Fiddler, a beta version of its mura Fedora front-end that provides access control, authentication, full-text searching and a variety of other functions. DRAMA is a sub-project of RAMP (Research Activityflow and Middleware Priorities Project).

Here’s an excerpt from the news item that describes Fiddler’s features:

  • Hierarchical access control enforcement: Policies can be applied at the collection level, object level or datastream level. . . .
  • Improved access control interface: One can now view existing access control of a particular user or group for a given datastream, object or collection. . . .
  • User-centric GUI: mura only presents users with operations for which they have permissions.
  • XForms Metadata Input: We employ an XForms engine (Orbeon) for metadata input. XForms allows better user interaction and validation, and supports any XML-based metadata schema (such as MARC or MODS).
  • LDAP Filter for Fedora: The current Fedora LDAP filter (in version 2.2) does not authenticate properly, so we have developed a new LDAP filter to fix this problem.
  • Local authentication for DAR and ASM: In addition to Shibboleth authentication, the DAR and ASM can be configured to use a local authentication source (e.g., via a local LDAP).
  • Generic XACML Vocabulary: XACML policies are now expressed in a generic vocabulary rather than Fedora specific ones. . . .
  • XACML Optimization: We have optimized the evaluation engine by employing a cache with a user-configurable time-to-live. We have also greatly reduced the time for policy matching with DB XML through the use of bind parameters in our queries. [A minimal sketch of the time-to-live cache idea follows this list.]
  • Flexible mapping of Fedora actions to new Apache Axis handlers: Axis is the SOAP engine that Fedora employs to provide its web services. The new flexibility allows new handlers to be easily plugged into Fedora to support new features that follow the same Interceptor pattern as our authorization framework.
  • Version control: mura now supports version control.
  • Full-text search: We enabled full-text search by incorporating the fedoragsearch package.
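
The news item describes the time-to-live cache only in prose, so here is a minimal illustrative sketch in Java of how such a cache for XACML authorization decisions might look. The PolicyEngine interface and all other names are hypothetical stand-ins, not mura’s actual API.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal sketch of a time-to-live (TTL) cache for XACML authorization
// decisions. A fresh cached decision short-circuits policy evaluation;
// stale entries are re-evaluated. All names here are hypothetical.
public class TtlDecisionCache {
    // Stand-in for the real XACML evaluation engine.
    public interface PolicyEngine {
        boolean evaluate(String subject, String resource, String action);
    }

    // A cached decision plus the time it was stored.
    private record Entry(boolean permitted, long storedAtMillis) {}

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis; // user-configurable time-to-live

    public TtlDecisionCache(long ttlMillis) {
        this.ttlMillis = ttlMillis;
    }

    public boolean isPermitted(String subject, String resource, String action,
                               PolicyEngine engine) {
        String key = subject + "|" + resource + "|" + action;
        Entry e = cache.get(key);
        long now = System.currentTimeMillis();
        if (e != null && now - e.storedAtMillis < ttlMillis) {
            return e.permitted; // cache hit: skip policy evaluation
        }
        boolean decision = engine.evaluate(subject, resource, action);
        cache.put(key, new Entry(decision, now));
        return decision;
    }
}
```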

Web/Web 2.0 Toolkits

Here’s a list of a few information-packed directories of Web/Web 2.0 tools that developers may find useful.

Remembering Mosaic, the Web Browser That Changed Everything

If you have never had to use a standalone FTP client, a standalone Telnet client, a Gopher client, or a standalone USENET client, it might be hard to imagine what the Internet was like before Mosaic, the Web browser that put the World-Wide Web on the map and transformed the Internet (and the world). Go dig up a copy of The Internet for Everyone: A Guide for Users and Providers out of your library’s stacks, dust it off, and marvel at how far we have come since 1993. You’ll also meet Archie, Veronica, and WAIS, the Googles of their day.

Another way to travel back in time is to read PC Magazine’s 1994 review of NCSA Mosaic for Windows, and, if you really want a history lesson, download Mosaic from the National Center for Supercomputing Applications (yes, it’s still available). Also take a look at the NCSA’s About NCSA Mosaic page.

To finish off your journey to the Internet’s Paleolithic age, check out the Timeline of Web Browsers and Hobbes’ Internet Timeline v8.2.

Of course, if you do remember these seemingly ancient technologies, you can easily imagine how primitive today’s hot technologies, such as Web 2.0, will seem in 14 years, and you may wonder whether future generations will remember them clearly or as minor footnotes in technological history.

Report of the Sustainability Guidelines for Australian Repositories Project (SUGAR)

The Australian Partnership for Sustainable Repositories (APSR) has released Report of the Sustainability Guidelines for Australian Repositories Project (SUGAR).

Here’s an excerpt from the report:

The Sustainability Guidelines for Australian Repositories service (SUGAR) was intended to support people working in tertiary education institutions whose activities do not focus on digital preservation. The target community creates and digitises content for a range of purposes to support learning, teaching and research. While some have access to technical and administrative support, many others may not be aware of what they need to know. The typical SUGAR user may have little interest in discussions surrounding metadata, interoperability or digital preservation, and may simply want to know the essential steps involved in achieving the task at hand.

A key challenge for SUGAR was to provide a suitable level and amount of information to meet the immediate focus of the user and their level of expertise while introducing and encouraging consideration of issues of digital sustainability. SUGAR was also intended to stand alone as an online service unsupported by a helpdesk.

Towards an Open Source Repository and Preservation System

The UNESCO Memory of the World Programme, with the support of the Australian Partnership for Sustainable Repositories, has published Towards an Open Source Repository and Preservation System: Recommendations on the Implementation of an Open Source Digital Archival and Preservation System and on Related Software Development.

Here’s an excerpt from the Executive Summary and Recommendations:

This report defines the requirements for a digital archival and preservation system using standard hardware and describes a set of open source software which could be used to implement it. There are two aspects of this report that distinguish it from other approaches. One is the complete or holistic approach to digital preservation. The report recognises that a functioning preservation system must consider all aspects of a digital repository: Ingest, Access, Administration, Data Management, Preservation Planning and Archival Storage, including storage media and management software. Secondly, the report argues that, for simple digital objects, the solution to digital preservation is relatively well understood, and that what is needed are affordable tools, technology and training in using those systems.

An assumption of the report is that there is no ultimate, permanent storage media, nor will there be in the foreseeable future. It is instead necessary to design systems to manage the inevitable change from system to system. The aim and emphasis in digital preservation is to build sustainable systems rather than permanent carriers. . . .

The way open source communities, providers and distributors achieve their aims provides a model on how a sustainable archival system might work, be sustained, be upgraded and be developed as required. Similarly, many cultural institutions, archives and higher education institutions are participating in the open source software communities to influence the direction of the development of that software to their advantage, and ultimately to the advantage of the whole sector.

A fundamental finding of this report is that a simple, sustainable system that provides strategies to manage all the identified functions for digital preservation is necessary. It also finds that for simple discrete digital objects this is nearly possible. This report recommends that UNESCO supports the aggregation and development of an open source archival system, building on, and drawing together existing open source programs.

This report also recommends that UNESCO participates, through its various committees, in open source software development on behalf of the countries, communities, and cultural institutions who would benefit from a simple, yet sustainable, digital archival and preservation system. . . .

POD for Library Users: New York Public Library Tries Espresso Book Machine

The New York Public Library’s Science, Industry, and Business Library has installed an Espresso Book Machine for public use through August.

Here’s an excerpt from the press release:

The first Espresso Book Machine™ ("the EBM") was installed and demonstrated today at the New York Public Library’s Science, Industry, and Business Library (SIBL). The patented automatic book making machine will revolutionize publishing by printing and delivering physical books within minutes. The EBM is a product of On Demand Books, LLC ("ODB"—www.ondemandbooks.com). . .

The Espresso Book Machine will be available to the public at SIBL through August, and will operate Monday-Saturday from 1 p.m. to 5 p.m. . . .

Library users will have the opportunity to print free copies of such public domain classics as "The Adventures of Tom Sawyer" by Mark Twain, "Moby Dick" by Herman Melville, "A Christmas Carol" by Charles Dickens and "Songs of Innocence" by William Blake, as well as such appropriately themed in-copyright titles as Chris Anderson’s "The Long Tail" and Jason Epstein’s own "Book Business." The public domain titles were provided by the Open Content Alliance ("OCA"), a non-profit organization with a database of over 200,000 titles. The OCA and ODB are working closely to offer this digital content free of charge to libraries across the country. Both organizations have received partial funding from the Alfred P. Sloan Foundation. . . .

The EBM’s proprietary software transmits a digital file to the book machine, which automatically prints, binds, and trims the reader’s selection within minutes as a single, library-quality paperback book, indistinguishable from the factory-made title.

Unlike existing print-on-demand technology, EBMs are fully integrated, automatic machines that require minimal human intervention. They do not require a factory setting and are small enough to fit in a retail store or small library room. While traditional factory-based print-on-demand machines usually cost over $1,000,000 per unit, the EBM is priced to be affordable for retailers and libraries. . . .

Additional EBMs will be installed this fall at the New Orleans Public Library, the University of Alberta (Canada) campus bookstore, the Northshire Bookstore in Manchester, Vermont, and at the Open Content Alliance in San Francisco. Beta versions of the EBM are already in operation at the World Bank Infoshop in Washington, DC and the Bibliotheca Alexandrina (The Library of Alexandria, Egypt). National book retailers and hotel chains are among the companies in talks with ODB about ordering EBMs in quantity.

WIPO Broadcasting Treaty on Hold

The World Intellectual Property Organization (WIPO) has decided to indefinitely postpone a November 2007 Diplomatic Conference at which the WIPO Broadcasting Treaty could have been approved.

Here’s an excerpt from the EFF’s "Blogging WIPO: Broadcasting Treaty Deferred Indefinitely" posting:

Negotiations on the proposed WIPO Broadcasting Treaty ended on Friday with some welcome news. WIPO Member States agreed to postpone the high-level intergovernmental Diplomatic Conference at which the draft treaty could have been adopted, and have moved discussions back to regular committee meetings, down a notch from the last two "Special Session" meetings. . . .

Before a Diplomatic Conference can be convened, Member States must reach agreement on the core elements of a treaty—the objectives, specific scope and object of protection. While this week’s informal session discussions may have helped clarify Member States’ positions, it does not seem to have brought them closer. There is widespread agreement amongst many Member States, public interest NGOs, libraries and the tech industry that any treaty must focus on the issue of signal theft and not the creation of exclusive rights that will harm those communities. However, it’s equally clear from this week that broadcasters will not settle for anything other than exclusive rights.

Why is this important? Here’s an excerpt from Cory Doctorow’s Boing Boing posting on the subject ("Broadcast Treaty Wounded and Dying!"):

The broadcast treaty creates a copyright-like "broadcast right," for the entities that make works available. So while copyright goes to the people who create things, broadcast rights go to people who have no creative contribution at all. Here’s how it would work: say you recorded some TV to use in your classroom. Copyright lets you do this—copyright is limited by fair use. But the broadcast right would stop you—you’d need to navigate a different and disjointed set of exceptions to broadcast rights, or the broadcaster could sue you.

That’s just for openers. The broadcast right also covers works in the public domain that no one has a copyright in—and even Creative Commons works where the creator has already given her permission for sharing! You can’t use anything that’s broadcast unless you get permission from the caster. What’s more, they’re trying to extend this to the net, making podcasting and other communications where the hoster isn’t the copyright holder (that is, where you create the podcast but someone else hosts it) into a legal minefield.

ARL’s Library Brown-Bag Lunch Series: Issues in Scholarly Communication

The Association of Research Libraries (ARL) has released a series of discussion guides for academic librarians to use with faculty. The guides are under a Creative Commons Attribution-ShareAlike 3.0 United States license.

Here’s an excerpt from the guides’ web page:

This series of Discussion Leader’s Guides can serve as a starting point for a single discussion or for a series of conversations. Each guide offers prework and discussion questions along with resources that provide further background for the discussion leader of an hour-long session.

Using the discussion guides, library leaders can launch a program quickly without requiring special expertise on the topics. A brown-bag series could be initiated by a library director, a group of staff, or by any staff person with an interest in the scholarly communication system. The only requirements are the willingness to organize the gatherings and facilitate each meeting’s discussion.

The University of Maine and Two Public Libraries Adopt Emory’s Digitization Plan

Library Journal Academic Newswire reports that the University of Maine, the Toronto Public Library, and the Cincinnati Public Library will follow Emory University’s lead and digitize public domain works using Kirtas scanners, with print-on-demand copies made available via BookSurge. (Also see the press release: "BookSurge, an Amazon Group, and Kirtas Collaborate to Preserve and Distribute Historic Archival Books.")

Source: "University of Maine, plus Toronto and Cincinnati Public Libraries Join Emory in Scan Alternative." Library Journal Academic Newswire, 21 June 2007.

Dealing with Data: Roles, Rights, Responsibilities and Relationships

JISC has released its Dealing with Data: Roles, Rights, Responsibilities and Relationships: Consultancy Report, which was written as part of its Digital Repositories Programme’s Data Cluster Consultancy.

Here’s an excerpt from the Executive Summary:

This Report explores the roles, rights, responsibilities and relationships of institutions, data centres and other key stakeholders who work with data. It concentrates primarily on the UK scene with some reference to other relevant experience and opinion, and is framed as "a snapshot" of a relatively fast-moving field. . . .

The Report is largely based on two methodological approaches: a consultation workshop and a number of semi-structured interviews with stakeholder representatives.

It is set within the context of the burgeoning "data deluge" emanating from e-Science applications, increasing momentum behind open access policy drivers for data, and developments to define requirements for a co-ordinated e-infrastructure for the UK. The diversity and complexity of data are acknowledged, and developing typologies are referenced.

Scholarly Electronic Publishing Weblog Update (6/20/07)

The Scholarly Electronic Publishing Weblog, which was established in June 2001, is six years old.

The latest update of the Scholarly Electronic Publishing Weblog (SEPW) is now available. It provides information about new scholarly literature and resources related to scholarly electronic publishing, such as books, journal articles, magazine articles, technical reports, and white papers.

Especially interesting are: Australasian Digital Theses Program: Membership Survey 2006, "The Death of Metadata," "Do You Need a Copyright Librarian?," "The Evolution of Copyright," "Ghosts in the Machine: The Promise of Electronic Resource Management Tools," "Magnifying the ILS with Endeca," Project SPECTRa (Submission, Preservation and Exposure of Chemistry Teaching and Research Data): JISC Final Report, March 2007, "Providing Access to Electronic Journals in Academic Libraries: A General Survey," and "Scholarly Electronic Journal Publishing: A Study Comparing Commercial and Nonprofit/University Publishers."

For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.

Version 68, Scholarly Electronic Publishing Bibliography

Version 68 of the Scholarly Electronic Publishing Bibliography is now available from Digital Scholarship. This selective bibliography presents over 3,040 articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet.

The Scholarly Electronic Publishing Bibliography: 2006 Annual Edition is also available from Digital Scholarship. Annual editions of the Scholarly Electronic Publishing Bibliography are PDF files designed for printing.

The bibliography has the following sections (revised sections are in italics):

1 Economic Issues
2 Electronic Books and Texts
2.1 Case Studies and History
2.2 General Works
2.3 Library Issues
3 Electronic Serials
3.1 Case Studies and History
3.2 Critiques
3.3 Electronic Distribution of Printed Journals
3.4 General Works
3.5 Library Issues
3.6 Research
4 General Works
5 Legal Issues
5.1 Intellectual Property Rights
5.2 License Agreements
6 Library Issues
6.1 Cataloging, Identifiers, Linking, and Metadata
6.2 Digital Libraries
6.3 General Works
6.4 Information Integrity and Preservation
7 New Publishing Models
8 Publisher Issues
8.1 Digital Rights Management
9 Repositories, E-Prints, and OAI
Appendix A. Related Bibliographies
Appendix B. About the Author
Appendix C. SEPB Use Statistics

Scholarly Electronic Publishing Resources includes the following sections:

Cataloging, Identifiers, Linking, and Metadata
Digital Libraries
Electronic Books and Texts
Electronic Serials
General Electronic Publishing
Images
Legal
Preservation
Publishers
Repositories, E-Prints, and OAI
SGML and Related Standards

Council of Australian University Librarians ETD Survey Report

The Council of Australian University Librarians has released Australasian Digital Theses Program: Membership Survey 2006.

Here’s an excerpt from the "Key Findings" section:

1. The average percentage of records for digital theses added to ADT is 95% when digital submission is mandatory and 17% when it is not mandatory. . . .

2. 59% of respondents will have mandatory digital submission in place in 2007.

3. With this level of mandatory submission it is predicted that 60% of all theses produced in Australia and New Zealand in 2007 will have a digital copy recorded in ADT (a rough check of this figure follows the list). . . .

5. The overwhelming majority of respondents offer a mediated submission service, either only having a mediated service or offering both mediated and self-submission services. When mediated and self-submission are both available, the percentage self-submitted is polarised with some achieving over a 75% self-submission rate.

6. Over half the respondents have a repository already and most are using it to manage digital theses.

7. 87% will have a repository by the end of this year, and the rest are in the initial planning stage.
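
Item 3’s 60% figure can be sanity-checked with a back-of-envelope calculation (my own, not the report’s), assuming the 95% and 17% capture rates apply uniformly and that respondents produce roughly similar numbers of theses:

```latex
\[
\underbrace{0.59 \times 0.95}_{\text{mandatory}}
+ \underbrace{0.41 \times 0.17}_{\text{not mandatory}}
\approx 0.56 + 0.07 = 0.63
\]
```

That is close to the reported 60%; the report’s slightly lower figure presumably weights institutions by their actual thesis output rather than equally.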

CIC’s Digitization Contract with Google

Library Journal Academic Newswire has published a must-read article ("Questions Emerge as Terms of the CIC/Google Deal Become Public") about the Committee on Institutional Cooperation’s Google Book Search Library Project contract.

The article includes quotes from Peter Brantley, Executive Director of the Digital Library Federation, drawn from his "Monetizing Libraries" posting about the contract (another must-read piece).

Here’s an excerpt from Brantley’s posting:

In other words—pretty much, unless Google ceases business operations, or there is a legal ruling or agreement with publishers that expressly permits these institutions (excepting Michigan and Wisconsin which have contracts of precedence) to receive digitized copies of In-Copyright material, it will be held in escrow until such time as it becomes public domain.

That could be a long wait. . . .

In an article early this year in The New Yorker, "Google’s Moon Shot," Jeffrey Toobin discusses possible outcomes of the antagonism this project has generated between Google and publishers. Paramount among them, in his mind, is a settlement. . . .

A settlement between Google and publishers would create a barrier to entry in part because the current litigation would not be resolved through court decision; any new entrant would be faced with the unresolved legal issues and required to re-enter the settlement process on their own terms. That, beyond the costs of mass digitization itself, is likely to deter almost any other actor in the market.

Report on Chemistry Teaching/Research Data and Institutional Repositories

The JISC-funded SPECTRa project has released Project SPECTRa (Submission, Preservation and Exposure of Chemistry Teaching and Research Data): JISC Final Report, March 2007.

Here’s an excerpt from the Executive Summary:

Project SPECTRa’s principal aim was to facilitate the high-volume ingest and subsequent reuse of experimental data via institutional repositories, using the DSpace platform, by developing Open Source software tools which could easily be incorporated within chemists’ workflows. It focussed on three distinct areas of chemistry research—synthetic organic chemistry, crystallography and computational chemistry.

SPECTRa was funded by JISC’s Digital Repositories Programme as a joint project between the libraries and chemistry departments of the University of Cambridge and Imperial College London, in collaboration with the eBank UK project. . . .

Surveys of chemists at Imperial and Cambridge investigated their current use of computers and the Internet and identified specific data needs. The survey’s main conclusions were:

  • Much data is not stored electronically (e.g. lab books, paper copies of spectra)
  • A complex list of data file formats (particularly proprietary binary formats) being used
  • A significant ignorance of digital repositories
  • A requirement for restricted access to deposited experimental data

Distributable software tool development using Open Source code was undertaken to facilitate deposition into a repository, guided by interviews with key researchers. The project has provided tools which allow for the preservation aspects of data reuse. All legacy chemical file formats are converted to the appropriate Chemical Markup Language scheme to enable automatic data validation, metadata creation and long-term preservation needs. . . .
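
The report describes these tools only at a high level, so the following Java sketch merely illustrates the ingest step it outlines: convert a legacy chemistry file to Chemical Markup Language, validate the result, then deposit it. The FormatConverter, CmlValidator, and Repository types are hypothetical stand-ins, not SPECTRa’s actual classes.

```java
import java.io.File;

// Illustrative sketch of the SPECTRa-style ingest step described above:
// convert a legacy chemistry file to Chemical Markup Language (CML),
// validate it, and only then hand it to the repository. All types are
// hypothetical stand-ins, not the project's actual classes.
public class ChemIngest {
    public interface FormatConverter {
        // e.g., proprietary spectrometer output -> CML document
        String toCml(File legacyFile);
    }

    public interface CmlValidator {
        boolean isValid(String cmlDocument);
    }

    public interface Repository {
        void deposit(String cmlDocument, boolean embargoed);
    }

    public static void ingest(File legacyFile, FormatConverter converter,
                              CmlValidator validator, Repository repo,
                              boolean unpublished) {
        String cml = converter.toCml(legacyFile);
        if (!validator.isValid(cml)) {
            throw new IllegalArgumentException(
                "Conversion produced invalid CML: " + legacyFile.getName());
        }
        // Unpublished or commercially sensitive data is flagged so it can
        // be held in the closed-access embargo repository.
        repo.deposit(cml, unpublished);
    }
}
```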

The deposition process adopted the concept of an "embargo repository" allowing unpublished or commercially sensitive material, identified through metadata, to be retained in a closed access environment until the data owner approved its release. . . .
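
As an illustration of that embargo concept, here is another small sketch (hypothetical names again, not the project’s implementation): the item stays closed until the data owner approves its release.

```java
// Sketch of the "embargo repository" idea: a deposited item flagged as
// embargoed stays in closed access until the data owner approves its
// release. Class and method names are illustrative only.
public class EmbargoedItem {
    private final String cmlDocument;
    private final String ownerId;
    private boolean released = false;

    public EmbargoedItem(String cmlDocument, String ownerId) {
        this.cmlDocument = cmlDocument;
        this.ownerId = ownerId;
    }

    // The data owner approves release, e.g. once the work is published.
    public void approveRelease(String requesterId) {
        if (!ownerId.equals(requesterId)) {
            throw new SecurityException("Only the data owner may release");
        }
        released = true;
    }

    // Closed access until released; the owner can always read the data.
    public String read(String requesterId) {
        if (released || ownerId.equals(requesterId)) {
            return cmlDocument;
        }
        throw new SecurityException("Item is under embargo");
    }
}
```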

Among the project’s findings were the following:

  • it has integrated the need for long-term management of experimental chemistry data with the maturing technology and organisational capability of digital repositories;
  • scientific data repositories are more complex to build and maintain than are those designed primarily for text-based materials;
  • the specific needs of individual scientific disciplines are best met by discipline-specific tools, though this is a resource-intensive process;
  • institutional repository managers need to understand the working practices of researchers in order to develop repository services that meet their requirements;
  • IPR issues relating to the ownership and reuse of scientific data are complex, and would benefit from authoritative guidance based on UK and EU law.

PEDESTAL: Web 2.0 Meets Repositories at Loughborough

The JISC-funded Rights and Rewards Project at Loughborough University has made public its proof-of-concept PEDESTAL system, which applies Web 2.0 concepts to a learning repository.

Here’s an excerpt from the About: PEDESTAL page:

PEDESTAL is a demonstrator teaching and learning material repository. PEDESTAL is a service, which has been developed by the Engineering Centre for Excellence in Teaching and Learning (EngCETL), for Loughborough staff to share their teaching material and expertise with other peers. . . .

PEDESTAL is not just about sharing content: each user has their own blog, which can be used to capture the user’s thoughts and interests, and links to useful documents or web pages can be recorded, which may also be of interest to many others. PEDESTAL boasts a search mechanism. Within PEDESTAL, a search may retrieve more than just content. For example, a search using the term ‘Digital Photography’ may return the following (a sketch of such a result model follows the list):

  • Items to embed in to teaching (textual resources, diagrams, images etc)
  • Items to inform the teaching and learning process (teaching exemplars, how to guides)
  • A list of people who are interested in ‘Digital Photography’
  • A list of blog postings that have the term ‘Digital Photography’ within them
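
A search that can return teaching items, guides, people, and blog postings implies a result model broader than a plain document list. Here is a minimal sketch of such a model; the names are illustrative, not PEDESTAL’s actual code.

```java
import java.util.List;

// Illustrative sketch of a heterogeneous search-result model of the kind
// the list above implies: one query can return content items, people,
// and blog postings. Names are hypothetical, not PEDESTAL's code.
public class RepositorySearch {
    public enum ResultKind { TEACHING_ITEM, GUIDE, PERSON, BLOG_POST }

    public record Result(ResultKind kind, String title, String link) {}

    // Stand-in for whatever index backs the repository's search.
    public interface Index {
        List<Result> query(String term);
    }

    public static void printResults(Index index, String term) {
        for (Result r : index.query(term)) {
            System.out.printf("[%s] %s -> %s%n", r.kind(), r.title(), r.link());
        }
    }
}
```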

PEDESTAL is very different from a Virtual Learning Environment (Learn). The latter is structured around course modules and is a mechanism which delivers teaching material (usually specific to a course or module) to students. PEDESTAL is structured around people—i.e. the users. Each user is given a personal profile page which can be customised to show their teaching and research interests.

For further information, see the About: 10 Things About PEDESTAL page.

Rome Reborn 1.0

A cross-institutional team has built a simulation of Rome as it appeared in A.D. 320, called Rome Reborn 1.0.

Here’s an excerpt from the press release:

Rome’s Mayor Walter Veltroni will officiate at the first public viewing of "Rome Reborn 1.0," a 10-year project based at the University of Virginia and begun at the University of California, Los Angeles (UCLA) to use advanced technology to digitally rebuild ancient Rome. The event will take place at 2 p.m. in the Palazzo Senatorio on the Campidoglio. An international team of archaeologists, architects and computer specialists from Italy, the United States, Britain and Germany employed the same high-tech tools used for simulating contemporary cities, such as laser scanners and virtual reality, to build the biggest, most complete simulation of an historic city ever created. "Rome Reborn 1.0" shows almost the entire city within the 13-mile-long Aurelian Walls as it appeared in A.D. 320. At that time Rome was the multicultural capital of the western world and had reached the peak of its development with an estimated population of one million.

"Rome Reborn 1.0" is a true 3D model that runs in real time. Users can navigate through the model with complete freedom, moving up, down, left and right at will. They can enter important public buildings such as the Roman Senate House, the Colosseum, or the Temple of Venus and Rome, the ancient city’s largest place of worship.

As new discoveries are made, "Rome Reborn 1.0" can be easily updated to reflect the latest knowledge about the ancient city. In future releases, the "Rome Reborn" project will include other phases in the evolution of the city from the late Bronze Age in the 10th century B.C. to the Gothic Wars in the 6th century A.D. Video clips and still images of "Rome Reborn 1.0" can be viewed at www.romereborn.virginia.edu. . . .

The "Rome Reborn" project was begun at UCLA in 1996 by professors Favro and Frischer. They collaborated with UCLA students from classics, architecture and urban design who fashioned the digital models with continuous advice from expert archaeologists. As the project evolved, it became collaborative at an international scale. In 2004, the project moved its administrative home to the University of Virginia, while work in progress continued at UCLA. In the same year, a cooperative research agreement was signed with the Politecnico di Milano. . . .

Many individuals and institutions contributed to "Rome Reborn" including the Politecnico di Milano (http://www.polimi.it), UCLA (http://www.etc.ucla.edu/), and the University of Virginia (www.iath.virginia.edu). The advisors of the project included scholars from the Italian Ministry of Culture, the Museum of Roman Civilization (Rome), Bath University, Bryn Mawr College, the Consiglio Nazionale delle Ricerche, the German Archaeological Institute, Ohio University, UCLA, the University of Florence, the University of Lecce, the University of Rome ("La Sapienza"), the University of Virginia and the Vatican Museums.

Web 2.0 for Content for Learning and Teaching in Higher Education

JISC has released Web 2.0 for Content for Learning and Teaching in Higher Education.

Here’s an excerpt from the report’s introduction:

In the main report, we provide a discussion of Web 2.0 together with a compilation of the more commonly used systems for education. We then examine progress at four universities which have taken a strategic approach and implemented Web 2.0 services in different ways at the institutional level. This is followed by a discussion of Web 2.0 content and its creation and use, together with an identification of issues affecting content creation and use. The next section considers the ways in which Web 2.0 is being used in learning, teaching and assessment, and important issues associated with pedagogy and assessment. We then turn to institutional policy and strategy and consider ways in which Web 2.0 impacts them.

Because of the relative immaturity of the technology and experimentation with its use, it is too early to make specific recommendations in most of the areas above. Consequently we make various recommendations to the JISC as to actions to guide and help the UK HE community in its ongoing exploration, adoption and adaptation of Web 2.0 systems.

NIH Public Access Policy Mandate Needs Immediate Support

The Alliance for Taxpayer Access has issued an action alert regarding a change in the NIH Public Access Policy that would mandate deposit of articles resulting from NIH-funded research. Peter Suber has discussed this issue in relation to a call by ACRL for an NIH mandate.

Here is the alert:

The NIH Public Access Policy is currently under consideration by Congress, as part of the larger FY08 Labor/HHS, Education, and Related Agencies Appropriations Bill. The House is expected to mark up the FY08 Labor/HHS Appropriations Bill on Thursday, June 7th.

Please take action now to express your support for a shift to a mandatory policy. Fax your House Representative a letter as soon as possible.

Visit http://www.house.gov for contact information. Constituents of the House Appropriations Labor/HHS Subcommittee are especially encouraged to write. (http://appropriations.house.gov/Subcommittees/sub_lhhse.shtml)

For talking points and background on the NIH Public Access Policy and recent legislative measures, please see the ATA Web site at http://www.taxpayeraccess.org/nih.html.

NIH Policy Status

The House is expected to mark up the FY08 Labor/HHS Appropriations Bill within the week. The bill will then move to the full Appropriations committee. Please stand by for an announcement about House activities from the Alliance for Taxpayer Access in the coming days.

The Senate Appropriations Committee—Labor/HHS Subcommittee is expected to review its versions of the appropriations bills later this month.

Emory Will Use Kirtas Scanner to Digitize Rare Books

Emory University’s Woodruff Library will use a Kirtas robotic book scanner to digitize rare books and to create PDF files that will be made available on the Internet and sold as print-on-demand books on Amazon.

Here’s an excerpt from the press release:

"We believe that mass digitization and print-on-demand publishing is an important new model for digital scholarship that is going to revolutionize the management of academic materials," said Martin Halbert, director for digital programs and systems at Emory’s Woodruff Library. "Information will no longer be lost in the mists of time when books go out of print. This is a way of opening up the past to the future."

Emory’s Woodruff Library is one of the premier research libraries in the United States, with extensive holdings in the humanities, including many rare and special collections. To increase accessibility to these aging materials, and ensure their preservation, the university purchased a Kirtas robotic book scanner, which can digitize as many as 50 books per day, transforming the pages from each volume into an Adobe Portable Document Format (PDF). The PDF files will be uploaded to a Web site where scholars can access them. If a scholar wishes to order a bound, printed copy of a digitized book, they can go to Amazon.com and order the book online.

Emory will receive compensation from the sale of digitized copies, although Halbert stressed that the print-on-demand feature is not intended to generate a profit, but simply to help the library recoup some of its costs in making out-of-print materials available.