Archive for 2006

That’s All Folks (for This Year)

Posted in Announcements, General on December 22nd, 2006

This year’s been a total bummer. Hopefully 2007 will be better.

Blogging resumes the week of 1/8/07.

Will Self-Archiving Cause Libraries to Cancel Journal Subscriptions?

Posted in E-Journals, E-Prints, Institutional Repositories, Libraries, Open Access, Publishing, Scholarly Communication on December 21st, 2006

There has been a great deal of discussion of late about the impact of self-archiving on library journal subscriptions. Obviously, this is of great interest to journal publishers, who do not want to wake up one morning, rub the sleep from their eyes, and find out over their first cup of coffee at work that libraries have en masse canceled subscriptions because a "tipping point" has been reached. Likewise, open access advocates do not want journal publishers to panic at the prospect of cancellations and try to turn back the clock on liberal self-archiving policies. So, this is not a scenario that anyone wants, except those who would like to simply scrap the existing journal publishing system and start over with a digital tabula rasa.

So, deep breath: Is the end near?

This question hinges on another: Will libraries accept any substitute for a journal that does not provide access to the full, edited, and peer-reviewed contents of that journal?

If the answer is "yes," publishers had better get out their survival kits and hunker down for the digital nuclear winter or else change business practices to embrace the new reality. Attempts to fight back by rolling back the clock may just make the situation worse: the genie is out of the bottle.

If the answer is "no," preprints pose no threat, but postprints may under some difficult-to-attain circumstances.

It is unlikely that a critical mass of author-created postprints (i.e., the author makes the preprint look like the postprint) will ever emerge. Authors would have to be extremely motivated for this to occur. If you don’t believe me, take a Word file that you submitted to a publisher and make it look exactly like the published article (don’t forget the pagination, because that might be a sticking point for libraries). That leaves publisher postprints (generally PDF files).

For the worst to happen, every author of every paper published in a journal would have to self-archive the final publisher PDF file (or the publishers themselves would have to do it for the authors under mandates).

But would that be enough? Wouldn’t the permanence and stability of the digital repositories housing these postprints be of significant concern to libraries? If such repositories could not be trusted, then libraries would have to attempt to archive the postprints in question themselves; however, since postprints are not by default under copyright terms that would allow this to happen (e.g., they are not under Creative Commons Licenses), libraries may be barred from doing so. There are other issues as well: journal and issue browsing capabilities, the value-added services of indexing and abstracting services, and so on. For now, let’s wave our hands briskly and say that these are all tractable issues.

If the above problems were overcome, a significant one remains: publishers add value in many ways to scholarly articles. Would libraries let the existing system of journal publishing collapse because of self-archiving without a viable substitute for these value-added functions being in place?

There have been proposals for and experiments with overlay journals for some time, as well as other ideas for new quality control strategies, but, to date, none have caught fire. Old-fashioned peer review, copy editing and fact checking, and publisher-based journal design and production still reign, even among the vast majority of e-journals that are not published by conventional publishers. In the Internet age, nothing technological stops tens of thousands of new e-journals using open source journal management software from blooming, but they haven’t so far, have they? Rather, if you use a liberal definition of open access, there are about 2,500 OA journals—a significant achievement; however, there are questions about the longevity of such journals if they are published by small non-conventional publishers such as groups of scholars (e.g., see "Free Electronic Refereed Journals: Getting Past the Arc of Enthusiasm"). Let’s face it—producing a journal is a lot of work, even a small journal that publishes fewer than a hundred papers a year.

Bottom line: a perfect storm is not impossible, but it is unlikely.

Journal 2.0: PLoS ONE Beta Goes Live

Posted in E-Journals, Open Access, Publishing, Scholarly Communication on December 21st, 2006

The Public Library of Science has released a beta version of its innovative PLoS ONE journal.

Why innovative? First, it’s a multidisciplinary scientific journal, with published articles covering subjects that range from Biochemistry to Virology. Second, it’s a participative journal that allows registered users to annotate and initiate discussions about articles. Open commentary and peer review have been previously implemented in some e-journals (e.g., see JIME: An Interactive Journal for Interactive Media), but PLoS ONE is the most visible of these efforts and, given PLoS’s reputation for excellence, it lends credibility to a concept that has yet to catch fire in the journal publishing world. A nice feature is the “Most Annotated” tab on the home page, which highlights articles that have garnered reader commentary. Third, it’s an open access journal in the full sense of the term, with all articles under the least restrictive Creative Commons license, the Creative Commons Attribution License.

The beta site is a bit slow, probably due to significant interest, so expect some initial browsing delays.

Congratulations to PLoS on PLoS ONE. It’s a journal worth keeping an eye on.

Certifying Digital Repositories: DINI Draft

Posted in Disciplinary Archives, Institutional Repositories, Open Access on December 20th, 2006

The Electronic Publishing Working Group of the Deutsche Initiative für Netzwerkinformation (DINI) has released an English draft of its DINI-Certificate Document and Publication Services 2007.

It outlines criteria for repository author support; indexing; legal aspects; long-term availability; logs and statistics; policies; security, authenticity and data integrity; and service visibility. It also provides examples.

Test Driving the CrossRef Simple-Text Query Tool for Finding DOIs

Posted in Metadata, Open Access on December 20th, 2006

CrossRef has made a DOI finding tool publicly available. It’s called Simple-Text Query. You can get the details at Barbara Quint’s article "Linking Up Bibliographies: DOI Harvesting Tool Launched by CrossRef."

What caught my eye in Quint’s article was this: "Users can enter whole bibliographies with citations in almost any bibliographic format and receive back the matching Digital Object Identifiers (DOIs) for these references to insert into their final bibliographies."

Well, not exactly. I cut and pasted just the "9 Repositories, E-Prints, and OAI" section of the Scholarly Electronic Publishing Bibliography into Simple-Text Query. Result: an error message. I had exceeded the 15,360-character limit. So, suggestion one: state the limit on the Simple-Text Query page.
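If you do want to push a long bibliography through the tool, you can split it into batches that fit under the limit. Here is a minimal sketch; the 15,360-character limit is as reported above, but the splitting strategy (assuming references are separated by blank lines) and the helper name are my own illustration, not part of CrossRef's tool:

```python
# Sketch: batch a long bibliography for CrossRef's Simple-Text Query,
# which rejects submissions over 15,360 characters.
CHAR_LIMIT = 15360

def batch_references(text, limit=CHAR_LIMIT):
    """Group blank-line-separated references into batches under `limit` chars."""
    refs = [r.strip() for r in text.split("\n\n") if r.strip()]
    batches, current = [], ""
    for ref in refs:
        candidate = (current + "\n\n" + ref) if current else ref
        if len(candidate) > limit and current:
            batches.append(current)  # flush the full batch
            current = ref
        else:
            # Note: a single reference longer than `limit` still becomes
            # its own (oversized) batch and would need manual trimming.
            current = candidate
    if current:
        batches.append(current)
    return batches
```

Each batch can then be pasted into the query form separately, preserving the original reference order.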

So then I counted out 15,360 characters of the section and pasted that. Just kidding. I pasted the first six references. Result?

Alexander, Martha Latika, and J. N. Gautam. “Institutional Repositories for Scholarly Communication: Indian Initiatives.” Serials: The Journal for the Serials Community 19, no. 3 (2006): 195-201.
No doi match found.

Allard, Suzie, Thura R. Mack, and Melanie Feltner-Reichert. “The Librarian’s Role in Institutional Repositories: A Content Analysis of the Literature.” Reference Services Review 33, no. 3 (2005): 325-336.

Allen, James. “Interdisciplinary Differences in Attitudes towards Deposit in Institutional Repositories.” Manchester Metropolitan University, 2005.
Reference not parsed

Allinson, Julie, and Roddy MacLeod. “Building an Information Infrastructure in the UK.” Research Information (October/November 2006).
Reference not parsed

Anderson, Greg, Rebecca Lasher, and Vicky Reich. “The Computer Science Technical Report (CS-TR) Project: A Pioneering Digital Library Project Viewed from a Library Perspective.” The Public-Access Computer Systems Review 7, no. 2 (1996): 6-26.
Reference not parsed

Andreoni, Antonella, Maria Bruna Baldacci, Stefania Biagioni, Carlo Carlesi, Donatella Castelli, Pasquale Pagano, Carol Peters, and Serena Pisani. “The ERCIM Technical Reference Digital Library: Meeting the Requirements of a European Community within an International Federation.” D-Lib Magazine 5 (December 1999).
Reference not parsed

Hmmm. According to Quint’s article:

I asked Brand if CrossRef could reach open access material. She assured me it could, but it clearly did not give the free and sometimes underdefined material any preference.

Looks like the open access capabilities may need some fine tuning. D-Lib Magazine and The Public-Access Computer Systems Review are not exactly obscure e-journals. Since my references are formatted in the Chicago style by EndNote, I don’t think that the reference format is the issue. In fact, Quint’s article says: "The Simple-Text Query can retrieve DOIs for journal articles, books, and chapters in any reference citation style, although it works best with standard styles."

Conclusion: I’ll play with it some more, but Simple-Text Query may be best suited for conventional, mainstream journal references.

JSTOR to Offer Purchase of Articles by Individuals

Posted in E-Journals, General, Publishing on December 19th, 2006

Through a new purchase service, individuals will be able to buy JSTOR articles for modest fees (currently $5.00 to $14.00) from publishers that participate in the program. JSTOR describes the service as follows:

An extension of JSTOR’s efforts to better serve scholars is a new article purchase service. This service is an opt-in program for JSTOR participating publishers and will enable them to sell single articles for immediate download. Researchers following direct links to articles will be presented with an option to purchase an article from the publisher if the publisher chooses to participate in this program and if the specific content requested is available through the program. The purchase option will only be presented if the user does not have access to the article. Prior to completing an article purchase users are prompted to first check the availability of the article through a local library or an institutional affiliation with JSTOR.

INASP Journals to Be Included in CrossRef

Posted in E-Journals, General, Scholarly Communication on December 19th, 2006

The International Network for the Availability of Scientific Publications (INASP) has announced that its journals will be included in the CrossRef linking service.

In an INASP press release, Pippa Smart, INASP’s Head of Publishing Initiatives, said:

For journals that are largely invisible to most of the scientific community the importance of linking cannot be overstressed. We are therefore delighted to be working with CrossRef to promote discovery of journals published in the less developed countries. We believe that an integrated discovery mechanism which includes journals from all parts of the world is vital to global research—not only benefiting the editors and publishers with whom we work.

New Mailing List About the Audit and Certification of Digital Repositories

Posted in Disciplinary Archives, Institutional Repositories on December 19th, 2006

The MOIMS-Repository Audit and Certification BOF mailing list (Moims-rac) has been established to foster the development of an ISO standard for the audit and certification of digital information repositories.

You Better Be Good, You Better Not Copy

Posted in Copyright on December 19th, 2006

The Wall Street Journal reports that Attributor Corp "has begun testing a system to scan the billions of pages on the Web for clients’ audio, video, images and text—potentially making it easier for owners to request that Web sites take content down or provide payment for its use."

The company will use specialized digital fingerprinting technology in its copy detection service, which will become available in the first quarter of 2007. By the end of December, it will have about 10 billion Web pages in its detection index.

An existing competing service, Copyscape, offers both free and paid copy detection.

Source: Delaney, Kevin J. "Copyright Tool Will Scan Web For Violations." The Wall Street Journal, 18 December 2006, B1.

Hear Luminaries Interviewed at the 2006 Fall CNI Task Force Meeting

Posted in Scholarly Communication on December 19th, 2006

Matt Pasiewicz and CNI have made available digital audio interviews with a number of prominent attendees at the 2006 Fall CNI Task Force Meeting. Selected interviews are below. More are available on Pasiewicz’s blog.

Version 66, Scholarly Electronic Publishing Bibliography

Posted in Bibliographies, Digital Scholarship Publications, Scholarly Communication on December 18th, 2006

Version 66 of the Scholarly Electronic Publishing Bibliography is now available. This selective bibliography presents over 2,830 articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet.

The SEPB URL has changed:


There is a mirror site at:

The Scholarly Electronic Publishing Weblog URL has also changed:


There is a mirror site at:

The SEPW RSS feed is unaffected.

Changes in This Version

The bibliography has the following sections (revised sections are marked with an asterisk):

Table of Contents

1 Economic Issues*
2 Electronic Books and Texts
2.1 Case Studies and History*
2.2 General Works*
2.3 Library Issues*
3 Electronic Serials
3.1 Case Studies and History*
3.2 Critiques
3.3 Electronic Distribution of Printed Journals*
3.4 General Works*
3.5 Library Issues*
3.6 Research*
4 General Works*
5 Legal Issues
5.1 Intellectual Property Rights*
5.2 License Agreements*
6 Library Issues
6.1 Cataloging, Identifiers, Linking, and Metadata*
6.2 Digital Libraries*
6.3 General Works*
6.4 Information Integrity and Preservation*
7 New Publishing Models*
8 Publisher Issues*
8.1 Digital Rights Management*
9 Repositories, E-Prints, and OAI*
Appendix A. Related Bibliographies
Appendix B. About the Author*
Appendix C. SEPB Use Statistics

Scholarly Electronic Publishing Resources includes the following sections:

Cataloging, Identifiers, Linking, and Metadata
Digital Libraries*
Electronic Books and Texts*
Electronic Serials
General Electronic Publishing
Repositories, E-Prints, and OAI*
SGML and Related Standards

Further Information about SEPB

The HTML version of SEPB is designed for interactive use. Each major section is a separate file. There are links to sources that are freely available on the Internet. It can be searched using a Google Search Engine. Whether the search results are current depends on Google’s indexing frequency.

In addition to the bibliography, the HTML document includes:

(1) Scholarly Electronic Publishing Weblog (biweekly list of new resources; also available by e-mail—see second URL—and RSS Feed—see third URL)

(2) Scholarly Electronic Publishing Resources (directory of over 270 related Web sites)

(3) Archive (prior versions of the bibliography)

The 2005 annual PDF file is designed for printing. The printed bibliography is over 210 pages long. The PDF file is over 560 KB.

Related Article

An article about the bibliography has been published in The Journal of Electronic Publishing:

Scholarly Electronic Publishing Weblog Update (12/18/06)

Posted in Announcements, General on December 18th, 2006

The latest update of the Scholarly Electronic Publishing Weblog (SEPW) is now available. It provides information about new scholarly literature and resources related to scholarly electronic publishing, such as books, journal articles, magazine articles, newsletters, technical reports, and white papers. Especially interesting are: The Complete Copyright Liability Handbook for Librarians and Educators, "Copyright Concerns in Online Education: What Students Need to Know," Digital Archiving: From Fragmentation to Collaboration, "Fixing Fair Use," "Mass Digitization of Books," MLA Task Force on Evaluating Scholarship for Tenure and Promotion, "Open Access: Why Should We Have It?," "Predictions for 2007," "Readers’ Attitudes to Self-Archiving in the UK," "The Rejection of D-Space: Selecting Theses Database Software at the University of Calgary Archives," "Taming the Digital Beast," and Understanding Knowledge as a Commons: From Theory to Practice.

The SEPW URL has changed. Use:


There is a mirror site at:

The RSS feed is unaffected.

For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.

Lessig’s Code: Version 2.0 Is Published

Posted in Copyright, Creative Commons/Open Licenses on December 11th, 2006

Lawrence Lessig’s Code: Version 2.0 is out. This update of the now classic Code and Other Laws of Cyberspace was written using a Wiki, with Lessig editing and refining that digital text.

The resulting book is under a Creative Commons Attribution-ShareAlike 2.5 License.

It can be freely downloaded in PDF form. Later, the final version of the book will be available on a second Wiki.

Collex: Remixable Metadata for Humanists to Create Collections and Exhibits

Posted in Digital Humanities, Metadata, Scholarly Communication on December 11th, 2006

What is Collex? The project’s About page describes it in part as follows:

Collex is a set of tools designed to aid students and scholars working in networked archives and federated repositories of humanities materials: a sophisticated COLLections and EXhibits mechanism for the semantic web.

Collex allows users to collect, annotate, and tag online objects and to repurpose them in illustrated, interlinked essays or exhibits. It functions within any modern web browser without recourse to plugins or downloads and is fully networked as a server-side application. By saving information about user activity (the construction of annotated collections and exhibits) as ‘remixable’ metadata, the Collex system writes current practice into the scholarly record and permits knowledge discovery based not only on the characteristics or ‘facets’ of digital objects, but also on the contexts in which they are placed by a community of scholars.

A detailed description of the project is available in "COLLEX: Semantic Collections & Exhibits for the Remixable Web."

You can see Collex in action at the NINES (a Networked Interface for Nineteenth-Century Electronic Scholarship) project, which also uses IVANHOE ("a shared, online playspace for readers interested in exploring how acts of interpretation get made and reflecting on what those acts mean or might mean") and Juxta ("a cross-platform tool for collating and analyzing any kind or number of textual objects").

The NINES About page identifies key objectives of the project as follows:

  • It will create a robust framework to support the authority of digital scholarship and its relevance in tenure and other scholarly assessment procedures.
  • It will help to establish a real, practical publishing alternative to the paper-based academic publishing system, which is in an accelerating state of crisis.
  • It will address in a coordinated and practical way the question of how to sustain scholarly and educational projects that have been built in digital forms.
  • It will establish a base for promoting new modes of criticism and scholarship promised by digital tools.

People Metadata

Posted in Metadata on December 9th, 2006

A message by Liddy Nevile on DC-General has spawned an interesting thread about the need to have a metadata scheme that describes people. Other participants note related efforts, such as BIO, the FOAF Vocabulary Specification, GEDCOM, the North Carolina Encoded Archival Context (EAC) Project, and the XHTML Friends Network.

MLA Task Force on Evaluating Scholarship for Tenure and Promotion Report

Posted in Scholarly Communication on December 8th, 2006

The MLA Task Force on Evaluating Scholarship for Tenure and Promotion has issued an important report. (The MLA is the Modern Language Association of America.)

Here’s some background on the report from its Executive Summary:

In 2004 the Executive Council of the Modern Language Association of America created a task force to examine current standards and emerging trends in publication requirements for tenure and promotion in English and foreign language departments in the United States. The council’s action came in response to widespread anxiety in the profession about ever-rising demands for research productivity and shrinking humanities lists by academic publishers, worries that forms of scholarship other than single-authored books were not being properly recognized, and fears that a generation of junior scholars would have a significantly reduced chance of being tenured. The task force was charged with investigating the factual basis behind such concerns and making recommendations to address the changing environment in which scholarship is being evaluated in tenure and promotion decisions.

The task force made 20 key recommendations, including:

3. The profession as a whole should develop a more capacious conception of scholarship by rethinking the dominance of the monograph, promoting the scholarly essay, establishing multiple pathways to tenure, and using scholarly portfolios. . . .

4. Departments and institutions should recognize the legitimacy of scholarship produced in new media, whether by individuals or in collaboration, and create procedures for evaluating these forms of scholarship. . . .

15. The task force encourages further study of the unfulfilled parts of its charge with respect to multiple submissions of manuscripts and comparisons of the number of books published by university presses between 1999 and 2005.

16. The task force recommends establishing concrete measures to support university presses. . . .

19. The task force encourages discussion of the current form of the dissertation (as a monograph-in-progress) and of the current trends in the graduate curriculum.

Creative Commons Web Site Makeover and CC Labs

Posted in Copyright, Creative Commons/Open Licenses on December 7th, 2006

The Creative Commons has redone its Web site using WordPress and added CC Labs, a new section that showcases development projects.

Current projects include the DHTML License Chooser, the Freedoms License Generator, and the Metadata Lab. (Consulting the Creative Commons Licenses page before using these tools will give you a preview of your license options.)

The symbols used to represent the CC licenses have changed. For example, here’s the Creative Commons Attribution-NonCommercial 2.5 License symbol.

Creative Commons License

Read more about these changes in Lawrence Lessig’s blog posting.

Learning Commons Publishes "Copyright, Copyleft and Everything in Between"

Posted in Copyright, Creative Commons/Open Licenses, Open Access, Open Source Software on December 7th, 2006

The South African Learning Commons has published a multimedia introduction to copyright, open content, and open source issues for kids.

It is available for Linux, Mac, and Windows computers, and it is under the Creative Commons Attribution Share-Alike South Africa license.

STARGATE Final Report and Tools

Posted in E-Journals, OAI-PMH, Open Access, Publishing, Scholarly Communication on December 7th, 2006

The STARGATE project has issued its final report. Here’s a brief summary of the project from the Executive Summary:

STARGATE (Static Repository Gateway and Toolkit) was funded by the Joint Information Systems Committee (JISC) and is intended to demonstrate the ease of use of the Open Archives Initiative Protocol for Metadata Harvesting (OAI-PMH) Static Repository technology, and the potential benefits offered to publishers in making their metadata available in this way. This technology offers a simpler method of participating in many information discovery services than creating fully-fledged OAI-compliant repositories. It does this by allowing the infrastructure and technical support required to participate in OAI-based services to be shifted from the data provider (the journal) to a third party and allows a single third party gateway provider to provide intermediation for many data providers (journals).
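From a harvester's point of view, a Static Repository gateway answers ordinary OAI-PMH requests (Identify, ListRecords, and so on) on behalf of the journal. A minimal sketch of constructing such a harvest request follows; the verbs and parameters come from the OAI-PMH v2.0 specification, but the gateway base URL is hypothetical:

```python
from urllib.parse import urlencode

def oai_request(base_url, verb, **params):
    """Build an OAI-PMH GET request URL.

    The verbs and parameters (e.g., metadataPrefix) are defined by the
    OAI-PMH v2.0 specification; the base URL used below is hypothetical.
    """
    return base_url + "?" + urlencode({"verb": verb, **params})

# Harvest Dublin Core records from a (hypothetical) static repository gateway:
url = oai_request("http://gateway.example.org/oai/journal-x",
                  "ListRecords", metadataPrefix="oai_dc")
print(url)
# → http://gateway.example.org/oai/journal-x?verb=ListRecords&metadataPrefix=oai_dc
```

The point of the Static Repository design is that the journal itself only maintains a single XML file; the gateway turns requests like this into responses on its behalf.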

To support its work, the project developed tools and supporting documentation, which can be found below:

Details on Open Repositories 2007 Talks

Posted in Institutional Repositories, Open Access, Scholarly Communication on December 7th, 2006

Details about the Open Repositories 2007 conference sessions are now available, including keynotes, poster sessions, presentations, and user groups. For DSpace, EPrints, and Fedora techies, the user group sessions look like a don’t-miss, with talks by luminaries such as John Ockerbloom and MacKenzie Smith. The presentation sessions include talks by Andrew Treloar, Carl Lagoze and Herbert Van de Sompel, Leslie Johnston, and Simeon Warner, among other notables. Open Repositories 2007 will be held in San Antonio, January 23-26.

Hopefully, the conference organizers plan to make streaming audio and/or video files available post-conference; even just the PowerPoint files, as were provided for Open Repositories 2006, would be useful.

DOIs for Books Gain Ground

Posted in E-Books, Metadata on November 27th, 2006

According to CrossRef, the official DOI registration agency, over a half-million DOIs have been assigned to books or book chapters, and twenty of its members are using DOIs in this fashion.

What’s a DOI? Here’s a short description from CrossRef:

The DOI, or digital object identifier, serves as a persistent, actionable identifier for intellectual property online. DOIs can be assigned at any level of granularity, and therefore provide publishers with an extensible platform for a variety of applications. And DOI links don’t break. Even if a publisher needs to migrate publications from one system to another, or if the content moves from one publisher to another, the DOI never changes.

While the use of DOIs for book chapters is especially interesting, DOIs can be utilized for even smaller book sections, as this example of an entry for Ian Fleming in the Oxford Dictionary of National Biography illustrates. (Notice the DOI, "Ian Lancaster Fleming (1908–1964): doi:10.1093/ref:odnb/33168," at the bottom of the entry.)
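Because every DOI resolves through a central proxy server, turning a bare DOI like the one above into a stable link is mechanical. A minimal sketch, assuming the dx.doi.org proxy (the proxy is real; the helper function is my own illustration):

```python
from urllib.parse import quote

def doi_link(doi):
    """Turn a bare DOI into a resolvable URL via the DOI proxy server."""
    # Keep '/' and ':' unescaped; both appear legally in DOI suffixes
    # (e.g., "10.1093/ref:odnb/33168").
    return "http://dx.doi.org/" + quote(doi, safe="/:")

print(doi_link("10.1093/ref:odnb/33168"))
# → http://dx.doi.org/10.1093/ref:odnb/33168
```

This is why "DOI links don’t break": the citing document only ever stores the proxy URL, and the publisher updates the DOI's target behind the scenes when content moves.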

International Journal of Digital Curation Launched

Posted in Digital Curation & Digital Preservation, E-Journals on November 25th, 2006

The Digital Curation Centre has launched the International Journal of Digital Curation, which will be published twice a year in digital form (articles are PDF files). It is edited by Richard Waller, who also edits Ariadne. It is published by UKOLN at the University of Bath, using Open Journal Systems.

The journal is freely available. Although individual articles in the first volume do not have copyright statements, the Submissions page on the journal Web site has the following copyright statement:

Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the University of Bath. By virtue of their appearance in this open access journal, articles are free to use, with proper attribution, in educational and other non-commercial settings.

The first issue includes "Digital Curation, Copyright, and Academic Research"; "Digital Curation for Science, Digital Libraries, and Individuals"; "Scientific Publication Packages—A Selective Approach to the Communication and Archival of Scientific Output"; and other articles.

Digital Preservation via Emulation at Koninklijke Bibliotheek

Posted in Digital Curation & Digital Preservation, Emerging Technologies on November 21st, 2006

In a two-year (2005-2007) joint project with Nationaal Archief of the Netherlands, Koninklijke Bibliotheek is developing an emulation system that will allow digital objects in outmoded formats to be utilized in their original form. Regarding the emulation approach, the Koninklijke Bibliotheek says:

Emulation is difficult, the main reason why it is not applied on a large scale. Developing an emulator is complex and time-consuming, especially because the emulated environment must appear authentic and must function accurately as well. When future users are interested in the contents of a file, migration remains the better option. When it is the authentic look and feel and functionality of a file they are after, emulation is worth the effort. This can be the case for PDF documents or websites. For multimedia applications, emulation is in fact the only suitable permanent access strategy.

J. R. van der Hoeven and H. van Wijngaarden’s paper "Modular Emulation as a Long-Term Preservation Strategy for Digital Objects" provides an overview of the emulation approach.

In a related development, a message to padiforum-l on 11/17/06 by Remco Verdegem of the Nationaal Archief of the Netherlands reported on a recent Emulation Expert Meeting, which issued a statement noting the following advantages of emulation for digital preservation purposes:

  • It preserves and permits access to each digital artifact in its original form and format; it may be the only viable approach to preserving digital artifacts that have significant executable and/or interactive behavior.
  • It can preserve digital artifacts of any form or format by saving the original software environments that were used to render those artifacts. A single emulator can preserve artifacts in a vast range of arbitrary formats without the need to understand those formats, and it can preserve huge corpuses without ever requiring conversion or any other processing of individual artifacts.
  • It enables the future generation of surrogate versions of digital artifacts directly from their original forms, thereby avoiding the cumulative corruption that would result from generating each such future surrogate from the previous one.
  • If all emulators are written to run on a stable, thoroughly-specified "emulation virtual machine" (EVM) platform and that virtual machine can be implemented on any future computer, then all emulators can be run indefinitely.

Scholarly Electronic Publishing Weblog (11/20/06)

Posted in Announcements on November 20th, 2006

The latest update of the Scholarly Electronic Publishing Weblog (SEPW) is now available. It provides information about new scholarly literature and resources related to scholarly electronic publishing, such as books, journal articles, magazine articles, newsletters, technical reports, and white papers. Especially interesting are: "Author Addenda: An Examination of Five Alternatives"; "Building Preservation Environments with Data Grid Technology"; "Improving Access to Research Results: Six Points"; "Improving Access to Research Results: What’s in It for the Institution? Can We Make the Case?"; "Is There a Viable Business Model for Commercial Open Access Publishing?"; "Library Access to Scholarship"; "The Open Access Movement in China"; and "Standards-Based Interfaces for Harvesting and Obtaining Assets from Digital Repositories."

For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.

New Digital Image Documentation from TASI

Posted in Digital Media, Metadata on November 16th, 2006

The Technical Advisory Service for Images (TASI) has issued new documentation dealing with digital image issues:

TASI has also created new guides to assist users in identifying appropriate materials:



Digital Scholarship

Copyright © 2005-2020 by Charles W. Bailey, Jr.

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International license.