The Long Run

Enthusiasm about new technologies is essential to innovation. There needs to be some fire in the belly of change agents or nothing ever changes. Moreover, the new is always more interesting than the old, which creaks with familiarity. Consequently, when an exciting new idea seizes the imagination of innovators and, later, early adopters (using Rogers’ diffusion of innovations jargon), it is only to be expected that the initial rush of enthusiasm can sometimes dim the cold eye of critical analysis.

Let’s pick on Library 2.0 to illustrate the point, and, in particular, on librarian-contributed content rather than user-contributed content. It’s an idea that I find quite appealing, but let’s set that aside for the moment.

Overcoming the technical challenges involved, academic library X sets up on-demand blogs and wikis for staff as both outreach and internal communication tools. There is an initial frenzy of activity, and a number of blogs and wikis are established. Subject specialists start blogging. Perhaps the pace is reasonable for most to begin with, although some fall by the wayside quickly, but over time, with a few exceptions, the postings become more erratic and the time between postings increases. It is unclear whether target faculty read the blogs in any great numbers. Internal blogs follow a similar pattern. Some wikis, both internal and external, are quickly populated, but then become frozen by inactivity; others remain blank; others flourish because they serve a vital need.

Is this a story of success, failure, or the grey zone in between?

The point is this. Successful publishing in new media such as blogs and wikis requires that these tools serve a real purpose and that their contributors make a consistent, steady, and never-ending effort. It also requires that the intended audience understand and regularly use the tools and that, until these new communication channels are well-established, the library vigorously promote them because there is a real danger that, if you build it, they will not come.

Some staff will blog their hearts out regardless of external reinforcement, but many will need to have their work acknowledged in some meaningful way, such as at evaluation, promotion, and tenure decision points. Easily understandable feedback about tool use, such as good blog-specific or wiki-specific log analysis, is also important: it gives writers the sense that they are being read and helps them tailor their message to their audience.
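As a rough illustration of that kind of feedback, here is a minimal sketch in Python (the log file path and the blog URL pattern are hypothetical, and a standard combined-format web server access log is assumed) that counts requests per blog posting so authors can see what is actually being read:

    # Rough sketch: count requests per blog posting in a web server access log.
    # The log path and the /blogs/ URL pattern are hypothetical assumptions.
    import re
    from collections import Counter

    LOG_FILE = "access.log"                                # hypothetical log location
    POST_PATTERN = re.compile(r'"GET (/blogs/\S+) HTTP')   # hypothetical URL scheme

    counts = Counter()
    with open(LOG_FILE) as log:
        for line in log:
            match = POST_PATTERN.search(line)
            if match:
                counts[match.group(1)] += 1

    for path, hits in counts.most_common(20):
        print(f"{hits:6d}  {path}")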

On the user side, it does little good to say "Here’s my RSS feed" to a faculty member who doesn’t know what RSS is and couldn’t care less. Of course, some will be hip to RSS, but they may not be the majority. If the library wants RSS feeds to become part of a faculty member’s daily workflow, it is going to have to give that faculty member a good reason to adopt them, such as significant, identified RSS feed content in the faculty member’s field. Then, it is going to have to help the faculty member with the RSS transition by pointing out good RSS readers, providing tactful instruction, and offering ongoing assistance.
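To make the RSS part of this concrete, here is a minimal sketch of what a feed reader does behind the scenes (Python with the third-party feedparser library is assumed, and the feed URL is a made-up example, not a real library feed):

    # Minimal sketch of fetching and reading an RSS feed.
    # Assumes the third-party feedparser library; the URL is hypothetical.
    import feedparser

    FEED_URL = "http://library.example.edu/newtitles/rss.xml"   # hypothetical feed

    feed = feedparser.parse(FEED_URL)
    print(feed.feed.get("title", "(untitled feed)"))
    for entry in feed.entries[:10]:
        # Each entry carries a title, a link, and usually a publication date.
        print("-", entry.get("title", "(no title)"), "->", entry.get("link", ""))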

In spite of the feel-good glow of early success, it may be prudent not to declare victory too soon after making the leap into a major new technology. It’s a real accomplishment, but dealing with technical puzzles is often not the hardest part. The world of computers and code is a relatively ordered and predictable one; the world of humans is far more complex and unpredictable.

The real test of a new technology is in the long run: Is the innovation needed, viable, and sustainable? Major new technologies often require significant ongoing organizational commitments and a willingness to measure success and failure with objectivity and to take corrective action as required. For participative technologies such as Library 2.0 and institutional repositories, it requires motivating users as well as staff to make behavioral changes that persist long after the excitement of the new wears off.

Managing Digitization Activities, SPEC Kit 294

The Association of Research Libraries has published Managing Digitization Activities, SPEC Kit 294. The table of contents and executive summary are freely available.

Here are some highlights from the announcement:

This survey was distributed to the 123 ARL member libraries in February 2006. Sixty-eight libraries (55%) responded to the survey, of which all but two (97%) reported having engaged in digitization activities. Only one respondent reported having begun digitization activities prior to 1992; five other pioneers followed in 1992. From 1994 through 1998 there was a steady increase in the number of libraries beginning digital initiatives; 30 joined the pioneers at the rate of three to six a year. There was a spike of activity at the turn of the millennium that reached a high in 2000, when nine libraries began digital projects. Subsequently, new start-ups have slowed, with only an additional one to five libraries beginning digitization activities each year.

The primary factor that influenced the start up of digitization activities was the availability of grant funding (39 responses or 59%). Other factors that influenced the commencement of these activities were the addition of new staff with related skills (50%), staff receiving training (44%), the decision to use digitization as a preservation option (42%), and the availability of gift monies (29%). . . . .

Only four libraries reported that their digitization activities are solely ongoing functions; the great majority (60 or 91%) reported that their digitization efforts are a combination of ongoing library functions and discrete, finite projects.

Notre Dame Institutional Digital Repository Phase I Final Report

The University of Notre Dame Libraries have issued a report about their year-long institutional repository pilot project. There is an abbreviated HTML version and a complete PDF version.

From the Executive Summary:

Here is the briefest of summaries regarding what we did, what we learned, and where we think future directions should go:

  1. What we did—In a nutshell we established relationships with a number of content groups across campus: the Kellogg Institute, the Institute for Latino Studies, Art History, Electrical Engineering, Computer Science, Life Science, the Nanovic Institute, the Kaneb Center, the School of Architecture, FTT (Film, Television, and Theater), the Gigot Center for Entrepreneurial Studies, the Institute for Scholarship in the Liberal Arts, the Graduate School, the University Intellectual Property Committee, the Provost’s Office, and General Counsel. Next, we collected content from many of these groups, "cataloged" it, and saved it into three different computer systems: DigiTool, ETD-db, and DSpace. Finally, we aggregated this content into a centralized cache to provide enhanced browsing, searching, and syndication services against the content.
  2. What we learned—We essentially learned four things: 1) metadata matters, 2) preservation now, not later, 3) the IDR requires dedicated people with specific skills, 4) copyright raises the largest number of questions regarding the fulfillment of the goals of the IDR.
  3. Where we are leaning in regards to recommendations—The recommendations take the form of a "Chinese menu" of options, and the options are grouped into "meals." We recommend the IDR continue and include: 1) continuing to do the Electronic Theses & Dissertations, 2) writing and implementing metadata and preservation policies and procedures, 3) taking the Excellent Undergraduate Research to the next level, and 4) continuing to implement DigiTool. There are quite a number of other options, but they may be deemed too expensive to implement.
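The aggregation described in item 1 is the kind of task typically handled with OAI-PMH harvesting, which DSpace and comparable repository systems expose. The following is a minimal sketch of that approach, not Notre Dame’s actual code; the repository base URL is hypothetical, and only the standard ListRecords verb with Dublin Core metadata is assumed:

    # Minimal OAI-PMH harvesting sketch (illustrative only, not the IDR's code).
    # The base URL is hypothetical; ListRecords and oai_dc are standard OAI-PMH.
    from urllib.request import urlopen
    from urllib.parse import urlencode
    import xml.etree.ElementTree as ET

    BASE_URL = "http://repository.example.edu/oai/request"   # hypothetical endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    params = urlencode({"verb": "ListRecords", "metadataPrefix": "oai_dc"})
    with urlopen(f"{BASE_URL}?{params}") as response:
        tree = ET.parse(response)

    for record in tree.iter(f"{OAI}record"):
        identifier = record.find(f"{OAI}header/{OAI}identifier")
        title = record.find(f".//{DC}title")
        if identifier is not None and title is not None:
            print(identifier.text, "-", title.text)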

Blackwell Synergy Based on Literatum Goes Live

Blackwell Publishing has released a new version of Blackwell Synergy, which utilizes Atypon’s Literatum software.

From the press release:

Blackwell Synergy enables its users to search 1 million articles from over 850 leading scholarly journals across the sciences, social sciences, humanities and medicine. The redesign provides easier navigation, faster loading times and improved access to tools for researchers, as well as meeting the latest accessibility standards (ADA section 508 and W3C’s WAI-AA).

Recently, the University of Chicago Press picked Atypon as a technology partner to provide an e-publishing platform for its online journals.

OCLC Openly Informatics Link Evaluator for Firefox

OCLC Openly Informatics has announced a free link checking plug-in for Firefox called Link Evaluator.

Here is a brief description from the Link Evaluator page:

Link Evaluator is a Firefox extension designed to help users evaluate the availability of online resources linked to from a given Web page. When started, it automatically follows all links on the current page, and assesses the responses of each URL (link). . . .

After each link is checked, it is highlighted with a color based on the relative success of the result: green for fully successful, shades of yellow for partly successful, and red for unsuccessful.

It requires Mozilla Firefox version 1.5 (or later).
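For readers curious about what such a tool does under the hood, here is a rough Python sketch of the same idea, not OCLC’s code: fetch a page, try each absolute link, and report the HTTP status of each response (the page URL is a hypothetical example):

    # Rough sketch of the Link Evaluator idea (not OCLC's code): fetch a page,
    # follow each absolute link, and report the HTTP status of the response.
    # The page URL is a hypothetical example.
    from html.parser import HTMLParser
    from urllib.request import urlopen
    from urllib.error import HTTPError, URLError

    PAGE_URL = "http://www.example.edu/eresources.html"   # hypothetical page

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []
        def handle_starttag(self, tag, attrs):
            if tag == "a":
                for name, value in attrs:
                    if name == "href" and value and value.startswith("http"):
                        self.links.append(value)

    collector = LinkCollector()
    with urlopen(PAGE_URL) as page:
        collector.feed(page.read().decode("utf-8", errors="replace"))

    for link in collector.links:
        try:
            with urlopen(link, timeout=10) as response:
                status = response.status            # 200 = fully successful
        except HTTPError as err:
            status = err.code                       # e.g., 404 = unsuccessful
        except URLError:
            status = "no response"
        print(status, link)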

digitalculturebooks

The University of Michigan Press and the Scholarly Publishing Office of the University of Michigan Library, working together as the Michigan Digital Publishing Initiative, have established digitalculturebooks, which offers free access to digital versions of its published works (print works are fee-based). The imprint focuses on "the social, cultural, and political impact of new media."

The objectives of the imprint are to:

  • develop an open and participatory publishing model that adheres to the highest scholarly standards of review and documentation;
  • study the economics of Open Access publishing;
  • collect data about how reading habits and preferences vary across communities and genres;
  • build community around our content by fostering new modes of collaboration in which the traditional relationship between reader and writer breaks down in creative and productive ways.

Library Journal Academic Newswire notes in its article about digitalculturebooks:

While press officials use the term "open access," the venture is actually more "free access" than open at this stage. Open access typically does not require permission for reuse, only a proper attribution. UM director Phil Pochoda told the LJ Academic Newswire that, while no final decision has been made, the press’s "inclination is to ask authors to request the most restrictive Creative Commons license" for their projects. That license, he noted, requires attribution and would not permit commercial use, such as using it in a subsequent for-sale product, without permission. The Digital Culture Books web site currently reads that "permission must be received for any subsequent distribution."

The imprint’s first publication is The Best of Technology Writing 2006.

(Prior postings about digital presses.)

Has Authorama.com "Set Free" 100 Public Domain Books from Google Book Search?

In a posting on Google Blogoscoped, Philipp Lenssen has announced that he has put up 100 public domain books from Google Book Search on Authorama.

Regarding his action, Lenssen says:

In other words, Google imposes restrictions on these books which the public domain does not impose*. I’m no lawyer, and maybe Google can print whatever guidelines they want onto those books. . . and being no lawyer, most people won’t know if the guidelines are a polite request, or legally enforceable terms**. But as a proof of concept—the concept of the public domain—I’ve now ‘set free’ 100 books I downloaded from Google Book Search by republishing them on my public domain books site, Authorama. I’m not doing this out of disrespect for the Google Books program (which I think is cool, and I’ll credit Google on Authorama) but out of respect for the public domain (which I think is even cooler).

Since Lenssen has retained Google’s usage guidelines in the e-books, it’s unclear how they have been "set free," in spite of the following statement on Authorama’s Books from Google Book Search page:

The following books were downloaded from Google Book Search and are made available here as public domain. You can download, republish, mix and mash these books, for private or public, commercial or non-commercial use.

Leaving aside the above statement, Lenssen’s action appears to violate the following Google usage guideline, where Google asks that users:

Make non-commercial use of the files We designed Google Book Search for use by individuals, and we request that you use these files for personal, non-commercial purposes.

However, in the above guideline, Google uses the word "request," which suggests voluntary, rather than mandatory, compliance. Google also requests attribution and watermark retention.

Maintain attribution The Google ‘watermark’ you see on each file is essential for informing people about this project and helping them find additional materials through Google Book Search. Please do not remove it.

Note the use of the word "please."

It’s not clear how to determine if Google’s watermark remains in the Authorama files, but, given the retention of the usage guidelines, it likely does.

So, do Google’s public domain books really need to be "set free"? In its usage guidelines, Google appears to make compliance requests, not compliance requirements. Are such requests binding or not? If so, the language could be clearer. For example, here’s a possible rewording:

Make non-commercial use of the files Google Book Search is for individual use only, and its files can only be used for personal, non-commercial purposes. All other use is prohibited.

Landmark Digital Humanities Book Is Now Freely Available

A Companion to Digital Humanities is now freely available in digital form.

This important 2004 book was edited by Susan Schreibman, Ray Siemens, and John Unsworth. It includes chapters by such notable experts as Howard Besser, Greg Crane, Susan Hockey, Willard McCarty, Allen H. Renear, Abby Smith, C. M. Sperberg-McQueen, John Unsworth, and Perry Willett (to name just a few).

Scholarly Electronic Publishing Weblog Update (1/8/07)

The latest update of the Scholarly Electronic Publishing Weblog (SEPW) is now available, which provides information about new scholarly literature and resources related to scholarly electronic publishing, such as books, journal articles, magazine articles, newsletters, technical reports, and white papers. Especially interesting are: "Eliminating E-Reserves: One Library’s Experience," "Jean-Noël Jeanneney’s Critique of Google: Private Sector Book Digitization and Digital Library Policy," "Open Access in 2006," Our Cultural Commonwealth: The Final Report of the American Council of Learned Societies Commission on Cyberinfrastructure for the Humanities & Social Sciences, "The Research University and Scholarly Publishing: The View from a Provost’s Office," "Self-Archiving and the Copyright Transfer Agreements of ISI-ranked Library and Information Science Journals," "Using the Audit Checklist for the Certification of a Trusted Digital Repository as a Framework for Evaluating Repository Software Applications," and "Why Digital Asset Management? A Case Study."

For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.

Bad Juju: Zombies and Botnets

You may not know it, but your home computer could be under a serious attack from botnets populated by zombie computers, and that spells trouble for your personal data.

According to a New York Times article ("Attack of the Zombie Computers Is Growing Threat"), ShadowServer is "now tracking more than 400,000 infected machines and about 1,450 separate I.R.C. control systems, which are called Command & Control servers." Moreover, it states that:

Computer security experts warn that botnet programs are evolving faster than security firms can respond and have now come to represent a fundamental threat to the viability of the commercial Internet. The problem is being compounded, they say, because many Internet service providers are either ignoring or minimizing the problem.

The New York Times piece offers some general advice about how to protect your computer. I’ll give you some quick specifics for PCs, using free programs.

First, let’s see how exposed your computer is to the Net. Go to Shields Up!, click on "Proceed" at the bottom of the page, click on "File Sharing," then click on "All Service Ports." If your computer doesn’t pass these tests, you’ll want to take remedial action.
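If you would rather poke at this from the command line, here is a quick Python sketch (illustrative only; it tests which ports are accepting connections on the machine itself, which is not the same as Shields Up!’s outside-in scan, but it will reveal listening services):

    # Quick local port check (sketch only). This tests from the inside, which is
    # not the same as an outside-in scan like Shields Up!, but it shows which
    # ports are accepting connections on this machine.
    import socket

    PORTS_TO_CHECK = [135, 139, 445, 194] + list(range(6660, 6670))  # file sharing + some IRC ports

    for port in PORTS_TO_CHECK:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(0.5)
            result = sock.connect_ex(("127.0.0.1", port))   # 0 means the port accepted the connection
            print(f"port {port:5d}: {'OPEN' if result == 0 else 'closed'}")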

Second, if you don’t have a software firewall, download and install the free version of Zone Alarm. Under "Firewall," set "Internet Zone Security" to "High."

Third, if you don’t have antivirus software, download and install AVG Anti-Virus Free Edition. Scan for viruses.

Fourth, if you don’t have antispyware software, download and install Ad-Aware SE Personal. Scan for spyware. Update and run it periodically.

Fifth (for DSL/cable users), if you really want to be safe and you don’t have a hardware firewall, buy one and disable the IRC ports: 194 and 6660-7000.

Wasn’t that fun? Now, run Shields Up! again. Hopefully, all is well. If not, tweak.

Keep in mind that free program versions lack features of the paid ones. Paid security suites, in turn, often offer uneven protection across functions: while a single program may cover everything, you may be better off mixing and matching single-function programs that are very highly rated by PC Magazine, PC World, and similar publications. Bear in mind that programs from different vendors can interfere with each other, so some experimentation may be needed to find the right mix.

Source: Markoff, John. "Attack of the Zombie Computers Is Growing Threat." The New York Times, 7 January 2007, 1, 16.

Will Self-Archiving Cause Libraries to Cancel Journal Subscriptions?

There has been a great deal of discussion of late about the impact of self-archiving on library journal subscriptions. Obviously, this is of great interest to journal publishers, who do not want to wake up one morning, rub the sleep from their eyes, and find out over their first cup of coffee at work that libraries have canceled subscriptions en masse because a "tipping point" has been reached. Likewise, open access advocates do not want journal publishers to panic at the prospect of cancellations and try to turn back the clock on liberal self-archiving policies. So, this is not a scenario that anyone wants, except those who would like to simply scrap the existing journal publishing system and start over with a digital tabula rasa.

So, deep breath: Is the end near?

This question hinges on another: Will libraries accept any substitute for a journal that does not provide access to the full, edited, and peer-reviewed contents of that journal?

If the answer is "yes," publishers better get out their survival kits and hunker down for the digital nuclear winter or else change business practices to embrace the new reality. Attempts to fight back by rolling back the clock may just make the situation worse: the genie is out of the bottle.

If the answer is "no," preprints pose no threat, but postprints may under some difficult to attain circumstances.

It is unlikely that a critical mass of author-created postprints (i.e., where the author makes the preprint look like the postprint) will ever emerge. Authors would have to be extremely motivated for this to occur. If you don’t believe me, take a Word file that you submitted to a publisher and make it look exactly like the published article (don’t forget the pagination, because that might be a sticking point for libraries). That leaves publisher postprints (generally PDF files).

For the worst to happen, every author of every paper published in a journal would have to self-archive the final publisher PDF file (or the publishers themselves would have to do it for the authors under mandates).

But would that be enough? Wouldn’t the permanence and stability of the digital repositories housing these postprints be of significant concern to libraries? If such repositories could not be trusted, then libraries would have to attempt to archive the postprints in question themselves; however, since postprints are not by default under copyright terms that would allow this to happen (e.g., they are not under Creative Commons Licenses), libraries may be barred from doing so. There are other issues as well: journal and issue browsing capabilities, the value-added services of indexing and abstracting services, and so on. For now, let’s wave our hands briskly and say that these are all tractable issues.

If the above problems were overcome, a significant one remains: publishers add value in many ways to scholarly articles. Would libraries let the existing system of journal publishing collapse because of self-archiving without a viable substitute for these value-added functions being in place?

There have been proposals for and experiments with overlay journals for some time, as well as other ideas for new quality control strategies, but, to date, none have caught fire. Old-fashioned peer review, copy editing and fact checking, and publisher-based journal design and production still reign, even among the vast majority of e-journals that are not published by conventional publishers. In the Internet age, nothing technological stops tens of thousands of new e-journals using open source journal management software from blooming, but they haven’t so far, have they? Rather, if you use a liberal definition of open access, there are about 2,500 OA journals—a significant achievement; however, there are questions about the longevity of such journals if they are published by small non-conventional publishers such as groups of scholars (e.g., see "Free Electronic Refereed Journals: Getting Past the Arc of Enthusiasm"). Let’s face it—producing a journal is a lot of work, even a small journal that publishes fewer than a hundred papers a year.

Bottom line: a perfect storm is not impossible, but it is unlikely.

Journal 2.0: PLoS ONE Beta Goes Live

The Public Library of Science has released a beta version of its innovative PLoS ONE journal.

Why innovative? First, it’s a multidisciplinary scientific journal, with published articles covering subjects that range from Biochemistry to Virology. Second, it’s a participative journal that allows registered users to annotate and initiate discussions about articles. Open commentary and peer review have been implemented before in some e-journals (e.g., see JIME: An Interactive Journal for Interactive Media), but PLoS ONE is the most visible of these efforts and, given PLoS’s reputation for excellence, it lends credibility to a concept that has yet to catch fire in the journal publishing world. A nice feature is the “Most Annotated” tab on the home page, which highlights articles that have garnered reader commentary. Third, it’s an open access journal in the full sense of the term, with all articles under the least restrictive Creative Commons license, the Creative Commons Attribution License.

The beta site is a bit slow, probably due to significant interest, so expect some initial browsing delays.

Congratulations to PLoS on PLoS ONE. It’s a journal worth keeping an eye on.

Certifying Digital Repositories: DINI Draft

The Electronic Publishing Working Group of the Deutsche Initiative für Netzwerkinformation (DINI) has released an English draft of its DINI-Certificate Document and Publication Services 2007.

It outlines criteria for repository author support; indexing; legal aspects; long-term availability; logs and statistics; policies; security, authenticity and data integrity; and service visibility. It also provides examples.

JSTOR to Offer Purchase of Articles by Individuals

Through a new purchase service, individuals will be able to buy JSTOR articles for modest fees (currently $5.00 to $14.00) from publishers that participate in the service. JSTOR describes the service as follows:

An extension of JSTOR’s efforts to better serve scholars is a new article purchase service. This service is an opt-in program for JSTOR participating publishers and will enable them to sell single articles for immediate download. Researchers following direct links to articles will be presented with an option to purchase an article from the publisher if the publisher chooses to participate in this program and if the specific content requested is available through the program. The purchase option will only be presented if the user does not have access to the article. Prior to completing an article purchase users are prompted to first check the availability of the article through a local library or an institutional affiliation with JSTOR.

INASP Journals to Be Included in CrossRef

International Network for the Availability of Scientific Publications (INASP) has announced that its journals will be included in the CrossRef linking service.

In an INASP press release, Pippa Smart, INASP’s Head of Publishing Initiatives, said:

For journals that are largely invisible to most of the scientific community the importance of linking cannot be overstressed. We are therefore delighted to be working with CrossRef to promote discovery of journals published in the less developed countries. We believe that an integrated discovery mechanism which includes journals from all parts of the world is vital to global research—not only benefiting the editors and publishers with whom we work.
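CrossRef linking rests on each article having a DOI that resolves to the publisher’s current location for it. As a small illustration of that mechanism (a sketch only; the DOI below is a placeholder, not a real INASP-registered identifier), here is how a DOI resolves through the public doi.org resolver:

    # Sketch of DOI resolution through the public doi.org resolver.
    # The DOI below is a placeholder, not a real INASP-registered identifier.
    from urllib.error import HTTPError
    from urllib.request import urlopen

    doi = "10.1234/example.5678"   # placeholder; substitute a real DOI
    try:
        with urlopen(f"https://doi.org/{doi}") as response:
            # The resolver redirects to whatever URL the publisher has
            # registered with CrossRef; response.url is the final location.
            print("resolves to:", response.url)
    except HTTPError as err:
        print(f"{doi} did not resolve (HTTP {err.code})")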

You Better Be Good, You Better Not Copy

The Wall Street Journal reports that Attributor Corp "has begun testing a system to scan the billions of pages on the Web for clients’ audio, video, images and text—potentially making it easier for owners to request that Web sites take content down or provide payment for its use."

The company will use specialized digital fingerprinting technology in its copy detection service, which will become available in the first quarter of 2007. By the end of December, it will have about 10 billion Web pages in its detection index.
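Attributor has not said how its fingerprinting works, but the general technique is well known: break a document into overlapping word "shingles," hash each one, and compare the hash sets of two documents. Here is a toy sketch of that idea, purely illustrative and not Attributor’s algorithm:

    # Toy sketch of shingle-based text fingerprinting; purely illustrative,
    # not Attributor's actual algorithm.
    import hashlib

    def fingerprint(text, shingle_size=5):
        """Return the set of hashes of overlapping word shingles."""
        words = text.lower().split()
        shingles = (" ".join(words[i:i + shingle_size])
                    for i in range(max(len(words) - shingle_size + 1, 1)))
        return {hashlib.md5(s.encode("utf-8")).hexdigest() for s in shingles}

    def overlap(original, candidate):
        """Fraction of the original's fingerprints found in the candidate."""
        a, b = fingerprint(original), fingerprint(candidate)
        return len(a & b) / len(a) if a else 0.0

    original = "the quick brown fox jumps over the lazy dog near the quiet river bank"
    copied = "the quick brown fox jumps over the lazy dog and then rests"
    print(f"shared fingerprints: {overlap(original, copied):.0%}")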

An existing competing service, Copyscape, offers both free and paid copy detection.

Source: Delaney, Kevin J. "Copyright Tool Will Scan Web For Violations." The Wall Street Journal, 18 December 2006, B1.

Hear Luminaries Interviewed at the 2006 Fall CNI Task Force Meeting

Matt Pasiewicz and CNI have made available digital audio interviews with a number of prominent attendees at the 2006 Fall CNI Task Force Meeting. Selected interviews are below. More are available on Pasiewicz’s blog.

Version 66, Scholarly Electronic Publishing Bibliography

Version 66 of the Scholarly Electronic Publishing Bibliography is now available. This selective bibliography presents over 2,830 articles, books, and other printed and electronic sources that are useful in understanding scholarly electronic publishing efforts on the Internet.

The SEPB URL has changed:

http://sepb.digital-scholarship.org/

or http://www.digital-scholarship.org/sepb/sepb.html

There is a mirror site at:

http://www.digital-scholarship.com/sepb/sepb.html

The Scholarly Electronic Publishing Weblog URL has also changed:

http://sepw.digital-scholarship.org/

or http://www.digital-scholarship.org/sepb/sepw/sepw.htm

There is a mirror site at:

http://www.digital-scholarship.com/sepb/sepw/sepw.htm

The SEPW RSS feed is unaffected.

Changes in This Version

The bibliography has the following sections (revised sections are marked with an asterisk):

Table of Contents

1 Economic Issues*
2 Electronic Books and Texts
2.1 Case Studies and History*
2.2 General Works*
2.3 Library Issues*
3 Electronic Serials
3.1 Case Studies and History*
3.2 Critiques
3.3 Electronic Distribution of Printed Journals*
3.4 General Works*
3.5 Library Issues*
3.6 Research*
4 General Works*
5 Legal Issues
5.1 Intellectual Property Rights*
5.2 License Agreements*
6 Library Issues
6.1 Cataloging, Identifiers, Linking, and Metadata*
6.2 Digital Libraries*
6.3 General Works*
6.4 Information Integrity and Preservation*
7 New Publishing Models*
8 Publisher Issues*
8.1 Digital Rights Management*
9 Repositories, E-Prints, and OAI*
Appendix A. Related Bibliographies
Appendix B. About the Author*
Appendix C. SEPB Use Statistics

Scholarly Electronic Publishing Resources includes the following sections:

Cataloging, Identifiers, Linking, and Metadata
Digital Libraries*
Electronic Books and Texts*
Electronic Serials
General Electronic Publishing
Images
Legal
Preservation
Publishers
Repositories, E-Prints, and OAI*
SGML and Related Standards

Further Information about SEPB

The HTML version of SEPB is designed for interactive use. Each major section is a separate file. There are links to sources that are freely available on the Internet. It can be searched using a Google Search Engine. Whether the search results are current depends on Google’s indexing frequency.

In addition to the bibliography, the HTML document includes:

(1) Scholarly Electronic Publishing Weblog (biweekly list of new resources; also available by e-mail—see second URL—and RSS Feed—see third URL)

http://sepw.digital-scholarship.org/
http://www.feedburner.com/fb/a/emailverifySubmit?feedId=51756
http://feeds.feedburner.com/ScholarlyElectronicPublishingWeblogrss

(2) Scholarly Electronic Publishing Resources (directory of over 270 related Web sites)

http://sepr.digital-scholarship.org/

(3) Archive (prior versions of the bibliography)

http://www.digital-scholarship.org/sepb/archive/sepa.htm

The 2005 annual PDF file is designed for printing. The printed bibliography is over 210 pages long. The PDF file is over 560 KB.

http://www.digital-scholarship.org/sepb/archive/60/sepb.pdf

Related Article

An article about the bibliography has been published in The Journal of Electronic Publishing:

http://www.press.umich.edu/jep/07-02/bailey.html

Scholarly Electronic Publishing Weblog Update (12/18/06)

The latest update of the Scholarly Electronic Publishing Weblog (SEPW) is now available, which provides information about new scholarly literature and resources related to scholarly electronic publishing, such as books, journal articles, magazine articles, newsletters, technical reports, and white papers. Especially interesting are: The Complete Copyright Liability Handbook for Librarians and Educators, "Copyright Concerns in Online Education: What Students Need to Know," Digital Archiving: From Fragmentation to Collaboration, "Fixing Fair Use," "Mass Digitization of Books," MLA Task Force on Evaluating Scholarship for Tenure and Promotion, "Open Access: Why Should We Have It?," "Predictions for 2007," "Readers’ Attitudes to Self-Archiving in the UK," "The Rejection of D-Space: Selecting Theses Database Software at the University of Calgary Archives," "Taming the Digital Beast," and Understanding Knowledge as a Commons: From Theory to Practice.

The SEPW URL has changed. Use:

http://sepw.digital-scholarship.org/

or http://www.digital-scholarship.org/sepb/sepw/sepw.htm

There is a mirror site at:

http://www.digital-scholarship.com/sepb/sepw/sepw.htm

The RSS feed is unaffected.

For weekly updates about news articles, Weblog postings, and other resources related to digital culture (e.g., copyright, digital privacy, digital rights management, and Net neutrality), digital libraries, and scholarly electronic publishing, see the latest DigitalKoans Flashback posting.