New Pew Report: Future of the Internet III

The Pew Internet & American Life Project has released Future of the Internet III.

Here’s an excerpt from the announcement:

Here are the key findings on the survey of experts by the Pew Internet & American Life Project that asked respondents to assess predictions about technology and its roles in the year 2020:

  • The mobile device will be the primary connection tool to the internet for most people in the world in 2020.
  • The transparency of people and organizations will increase, but that will not necessarily yield more personal integrity, social tolerance, or forgiveness.
  • Voice recognition and touch user-interfaces with the internet will be more prevalent and accepted by 2020.
  • Those working to enforce intellectual property law and copyright protection will remain in a continuing arms race, with the crackers who will find ways to copy and share content without payment.
  • The divisions between personal time and work time and between physical and virtual reality will be further erased for everyone who is connected, and the results will be mixed in their impact on basic social relations.
  • Next-generation engineering of the network to improve the current internet architecture is more likely than an effort to rebuild the architecture from scratch.

What's a Fast Wide Area Network Data Transfer? Now, It's 114 Gigabits per Second

At SuperComputing 2008, an international team headed by California Institute of Technology researchers demonstrated multi-country wide area network data transfers that peaked at 114 gigabits per second and sustained 110 gigabits per second.

Read more about it at "High Energy Physics Team Sets New Data-Transfer World Records."

New Book from EDUCAUSE: The Tower and the Cloud

EDUCAUSE has published a new book, The Tower and the Cloud, which is freely available in digital form (a print version is also available).

The book is a wide-ranging overview of major information technology trends and their impacts on higher education, with essays written by prominent authors such as Clifford A. Lynch ("A Matter of Mission: Information Technology and the Future of Higher Education"), Paul N. Courant ("Scholarship: The Wave of the Future in the Digital Age"), and John Unsworth ("University 2.0").

ARL SPEC Kit: Social Software in Libraries

The Association of Research Libraries has published Social Software in Libraries, SPEC Kit 304. The table of contents and executive summary are freely available.

Here's an excerpt from the press release:

This survey was distributed to the 123 ARL member libraries in February 2008. Sixty-four libraries completed the survey by the March 14 deadline for a response rate of 52%. All but three of the responding libraries report that their library staff uses social software (95%) and one of those three plans to begin using social software in the future.

Survey results indicate that the most broadly adopted social software—chat or instant messaging—was also the earliest implemented social software. While one respondent was using instant messaging for reference and another was using chat for internal communication as early as 1998, the earliest use of this type of social software dates back to 1993.

While chat and instant messaging have been in use for several years, use of other types of social software in libraries is very recent. Beyond isolated cases, a steadily increasing number of ARL member libraries began implementing social software in 2005, with the largest rate of adoption being in 2007.

E Ink to Hit the Newsstand: Esquire Will Use It for Magazine Cover

The October issue of Esquire will have an E Ink cover powered by a small battery.

Here's an excerpt from the press release:

Esquire, one of America’s iconic magazines, is turning 75 this year. As part of the celebration of this milestone, the October issue will be the first magazine ever to embed a revolutionary digital technology—electronic paper—into a mass-produced print product.

In partnership with the all-new Ford Flex Crossover and in collaboration with E Ink Corporation, the world's leading supplier of electronic paper display (EPD) technologies, Esquire’s groundbreaking cover will make a profound statement about how the print medium can expand its capabilities while continuing to exploit its own unique strengths. Ford will prominently feature its highly-anticipated Ford Flex on the inside cover, utilizing the same E Ink Vizplex™ flexible display technology, in a double-page advertisement.

"This cover is both a breakthrough for magazines and an expression of the theme of our anniversary issue," said David Granger, editor-in-chief, Esquire. "We’ve spent 16 months making this happen as one of the ways we’re demonstrating that the 21st century begins this fall. The entire issue is devoted to exploring the ideas, people and issues that will be the foundation of the 21st century. . . ."

Esquire will distribute 100,000 issues with the special cover on newsstands. They will be available at Borders, Barnes & Noble and select newsstand vendors.

Read more about it at "News Flash From the Cover of Esquire: Paper Magazines Can Be High Tech, Too."

Verizon Wants to Improve Peer-to-Peer File Sharing Performance with P4P

As other ISPs try to reduce and shape P2P traffic, Verizon has taken a different tack: investigating how to improve throughput with the new Proactive network Provider Participation for P2P (P4P) protocol. In tests with the file-sharing company Pando, use of P4P boosted performance by 200 to 600 percent.

Read more about it at "Goodbye, P2P! P4P is Coming," "Verizon Embraces P4P, a More Efficient Peer-to-Peer Tech," and "With Eyes Open, Verizon Peers into the Future."

RAD Lab: Cloud Computing Made Easy

The RAD Lab (Reliable Adaptive Distributed Systems Laboratory) is working to "enable one person to invent and run the next revolutionary IT service, operationally expressing a new business idea as a multi-million-user service over the course of a long weekend."

Read more about it at "RAD Lab Technical Vision" and "Trying to Figure Out How to Put a Google in Every Data Center."

There's a 20% Chance That You Are a Digital Simulation Living in a Virtual World

Nick Bostrom, Director of the Future of Humanity Institute at Oxford, says in a New York Times article today:

“My gut feeling, and it’s nothing more than that,” he says, “is that there’s a 20 percent chance we’re living in a computer simulation.”

Bostrom thinks so because, barring a future prohibition on creating simulated worlds or a lack of interest in doing so, our posthuman descendants are almost certain to create simulations of the past. The more simulations that are run, the more likely it is that you are in one.
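To make the counting argument concrete, here is a toy calculation (my own illustration with made-up numbers; Bostrom's actual argument is more careful about observer counts):

```python
# Toy version of the counting argument: if there is one unsimulated history
# and n equally populated simulated histories, and you have no way to tell
# which you are in, the chance that you are simulated is n / (n + 1).
for n in (0, 1, 4, 1000):
    print(f"{n:5d} simulations -> P(simulated) = {n / (n + 1):.3f}")
```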

By some estimates, there will be enough available computing power to create a simulated world by 2050.

However, there could be a recursive problem:

It’s also possible that there would be logistical problems in creating layer upon layer of simulations. There might not be enough computing power to continue the simulation if billions of inhabitants of a virtual world started creating their own virtual worlds with billions of inhabitants apiece.

I wouldn't count on it though.

Source: Tierney, John. "Our Lives, Controlled From Some Guy's Couch." The New York Times, 14 August 2007, D1, D4.

Second Life Impacts Real Life and Vice Versa

What happens in Second Life is increasingly influencing real life and vice versa. Here are some recent highlights:

Turning the Pages on an E-Book—Realistic Electronic Books

In this June 26th Google Tech Talk video titled Turning the Pages on an E-Book—Realistic Electronic Books, Veronica Liesaputra, PhD candidate at the University of Waikato, discusses her research on realistic e-books.

Here’s an excerpt from the presentation’s abstract:

In this talk, I will describe and demo a lightweight realistic book implementation that allows a document to be automatically presented with quick and easy-to-use animated page turning, while still providing readers with many advantages of electronic documents, such as hyperlinks and multimedia. I will also review computer graphics models for page-turning, from complex physical models based on the finite element method through 3D geometric models to simple "flatland" models involving reflection and rotation—which is what the demo uses.
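To give a sense of how little geometry the "flatland" model needs, here is a rough sketch (my own illustration, not code from the talk or from Liesaputra's implementation): the part of the page beyond a moving crease line is simply drawn as its mirror image across that line. The crease path and tilt used here are arbitrary choices for the demo.

```python
# Minimal 2D "flatland" page-turn sketch: corner points beyond the crease are
# shown as their reflection across the crease line.  A real renderer would
# also clip the page polygon at the crease; this only transforms the corners.
from math import cos, sin, pi

Point = tuple[float, float]

def reflect(p: Point, crease_pt: Point, angle: float) -> Point:
    """Reflect point p across the line through crease_pt at the given angle."""
    dx, dy = cos(angle), sin(angle)                   # unit direction of the crease
    vx, vy = p[0] - crease_pt[0], p[1] - crease_pt[1]
    dot = vx * dx + vy * dy                           # component along the crease
    px, py = dot * dx, dot * dy                       # parallel part (kept)
    qx, qy = vx - px, vy - py                         # perpendicular part (flipped)
    return (crease_pt[0] + px - qx, crease_pt[1] + py - qy)

def folded_corners(w: float, h: float, t: float) -> list[Point]:
    """Corners of a w-by-h page with the region beyond the crease folded over;
    t runs from 0.0 (flat) to 1.0 (fully turned).  The spine is at x = 0."""
    crease_pt = (w * (1.0 - t), 0.0)                  # crease sweeps toward the spine
    angle = pi / 2 - t * pi / 6                       # and tilts a little as it moves
    dx, dy = cos(angle), sin(angle)
    out = []
    for x, y in [(0.0, 0.0), (w, 0.0), (w, h), (0.0, h)]:
        side = dx * (y - crease_pt[1]) - dy * (x - crease_pt[0])
        # side < 0 means the point lies on the outer side of the crease
        out.append(reflect((x, y), crease_pt, angle) if side < 0 else (x, y))
    return out

if __name__ == "__main__":
    for t in (0.0, 0.25, 0.5):
        print(t, [(round(x, 1), round(y, 1)) for x, y in folded_corners(6.0, 9.0, t)])
```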

Introducing the Networked Print Book

if:book reports that Manolis Kelaidis made a big splash at the O’Reilly Tools of Change for Publishing conference with his networked paper book.

Here’s an excerpt from the posting:

Manolis Kelaidis, a designer at the Royal College of Art in London, has found a way to make printed pages digitally interactive. His "blueBook" prototype is a paper book with circuits embedded in each page and with text printed with conductive ink. When you touch a "linked" word on the page, your finger completes a circuit, sending a signal to a processor in the back cover, which communicates by Bluetooth with a nearby computer, bringing up information on the screen.

Here’s an excerpt from a jusTaText posting about the demo:

Yes, he had a printed and bound book which communicated with his laptop. He simply touched the page, and the laptop reacted. It brought up pictures of the Mona Lisa. It translated Chinese. It played a piece of music. Kelaidis suggested that a library of such books might cross-refer, i.e. touching a section in one book might change the colors of the spines of related books on your shelves. Imagine.

POD for Library Users: New York Public Library Tries Espresso Book Machine

The New York Public Library’s Science, Industry, and Business Library has installed an Espresso Book Machine for public use through August.

Here’s an excerpt from the press release:

The first Espresso Book Machine™ ("the EBM") was installed and demonstrated today at the New York Public Library’s Science, Industry, and Business Library (SIBL). The patented automatic book making machine will revolutionize publishing by printing and delivering physical books within minutes. The EBM is a product of On Demand Books, LLC ("ODB"—www.ondemandbooks.com). . .

The Espresso Book Machine will be available to the public at SIBL through August, and will operate Monday-Saturday from 1 p.m. to 5 p.m. . . .

Library users will have the opportunity to print free copies of such public domain classics as "The Adventures of Tom Sawyer" by Mark Twain, "Moby Dick" by Herman Melville, "A Christmas Carol" by Charles Dickens and "Songs of Innocence" by William Blake, as well as appropriately themed in-copyright titles as Chris Anderson’s "The Long Tail" and Jason Epstein’s own "Book Business." The public domain titles were provided by the Open Content Alliance ("OCA"), a non-profit organization with a database of over 200,000 titles. The OCA and ODB are working closely to offer this digital content free of charge to libraries across the country. Both organizations have received partial funding from the Alfred P. Sloan Foundation. . . .

The EBM’s proprietary software transmits a digital file to the book machine, which automatically prints, binds, and trims the reader’s selection within minutes as a single, library-quality, paperback book, indistinguishable from the factory-made title.

Unlike existing print on demand technology, EBM’s are fully integrated, automatic machines that require minimal human intervention. They do not require a factory setting and are small enough to fit in a retail store or small library room. While traditional factory based print on demand machines usually cost over $1,000,000 per unit, the EBM is priced to be affordable for retailers and libraries. . . .

Additional EBM’s will be installed this fall at the New Orleans Public Library, the University of Alberta (Canada) campus bookstore, the Northshire Bookstore in Manchester, Vermont, and at the Open Content Alliance in San Francisco. Beta versions of the EBM are already in operation at the World Bank Infoshop in Washington, DC and the Bibliotheca Alexandrina (The Library of Alexandria, Egypt). National book retailers and hotel chains are among the companies in talks with ODB about ordering EBM’s in quantity.

Rome Reborn 1.0

A cross-institutional team has built Rome Reborn 1.0, a simulation of Rome as it was in A.D. 320.

Here’s an excerpt from the press release:

Rome’s Mayor Walter Veltroni will officiate at the first public viewing of "Rome Reborn 1.0," a 10-year project based at the University of Virginia and begun at the University of California, Los Angeles (UCLA) to use advanced technology to digitally rebuild ancient Rome. The event will take place at 2 p.m. in the Palazzo Senatorio on the Campidoglio. An international team of archaeologists, architects and computer specialists from Italy, the United States, Britain and Germany employed the same high-tech tools used for simulating contemporary cities such as laser scanners and virtual reality to build the biggest, most complete simulation of an historic city ever created. "Rome Reborn 1.0" shows almost the entire city within the 13-mile-long Aurelian Walls as it appeared in A.D. 320. At that time Rome was the multicultural capital of the western world and had reached the peak of its development with an estimated population of one million.

"Rome Reborn 1.0" is a true 3D model that runs in real time. Users can navigate through the model with complete freedom, moving up, down, left and right at will. They can enter important public buildings such as the Roman Senate House, the Colosseum, or the Temple of Venus and Rome, the ancient city’s largest place of worship.

As new discoveries are made, "Rome Reborn 1.0" can be easily updated to reflect the latest knowledge about the ancient city. In future releases, the "Rome Reborn" project will include other phases in the evolution of the city from the late Bronze Age in the 10th century B.C. to the Gothic Wars in the 6th century A.D. Video clips and still images of "Rome Reborn 1.0" can be viewed at www.romereborn.virginia.edu. . . .

The "Rome Reborn" project was begun at UCLA in 1996 by professors Favro and Frischer. They collaborated with UCLA students from classics, architecture and urban design who fashioned the digital models with continuous advice from expert archaeologists. As the project evolved, it became collaborative at an international scale. In 2004, the project moved its administrative home to the University of Virginia, while work in progress continued at UCLA. In the same year, a cooperative research agreement was signed with the Politecnico di Milano. . . .

Many individuals and institutions contributed to "Rome Reborn" including the Politecnico di Milano (http://www.polimi.it), UCLA (http://www.etc.ucla.edu/), and the University of Virginia (www.iath.virginia.edu). The advisors of the project included scholars from the Italian Ministry of Culture, the Museum of Roman Civilization (Rome), Bath University, Bryn Mawr College, the Consiglio Nazionale delle Ricerche, the German Archaeological Institute, Ohio University, UCLA, the University of Florence, the University of Lecce, the University of Rome ("La Sapienza"), the University of Virginia and the Vatican Museums.

The IBM Gameframe

If you thought the era of big iron was dead, think again.

According to the New York Times, IBM is rolling out a "gameframe" that is "capable of permitting hundreds of thousands of computer users to interact in a three-dimensional simulated on-screen world described as a ‘metaverse.’"

Meanwhile, Sun is rolling out a video server that is "potentially powerful enough to transmit different standard video streams simultaneously to everyone watching TV in a city the size of New York."

Source: Markoff, John. "Sun and I.B.M. to Offer New Class of High-End Servers." The New York Times, 26 April 2007, C10.

Scholarly Journal Podcasts

In a recent SSP-L message, Mark Johnson, Journal Manager of HighWire Press, identified three journals that offer podcasts or digital audio files:

Here are a few others:

LITA Next Generation Catalog Interest Group

LITA has formed the Next Generation Catalog Interest Group. There is also an associated mailing list.

Here is an excerpt from the LITA-L announcement:

NGCIG gives LITA a discussion space devoted to developments in the library catalog, its nature and scope, and its interfaces. It provides a forum for presentations and sharing of innovation in catalogs and discussion of future directions. Collaborations with other LITA interest groups, such as in the areas of emerging technologies and open source software, will provide opportunities for programming.

The Long Run

Enthusiasm about new technologies is essential to innovation. There needs to be some fire in the belly of change agents or nothing ever changes. Moreover, the new is always more interesting than the old, which creaks with familiarity. Consequently, when an exciting new idea seizes the imagination of innovators and, later, early adopters (using Rogers’ diffusion of innovations jargon), it is only to be expected that the initial rush of enthusiasm can sometimes dim the cold eye of critical analysis.

Let’s pick on Library 2.0 to illustrate the point, and, in particular, librarian-contributed content instead of user-contributed content. It’s an idea that I find quite appealing, but let’s set that aside for the moment.

Overcoming the technical challenges involved, academic library X sets up on-demand blogs and wikis for staff as both outreach and internal communication tools. There is an initial frenzy of activity, and a number of blogs and wikis are established. Subject specialists start blogging. Perhaps the pace is reasonable for most to begin with, although some fall by the wayside quickly, but over time, with a few exceptions, the postings become more erratic and the time between postings increases. It is unclear whether target faculty read the blogs in any great numbers. Internal blogs follow a similar pattern. Some wikis, both internal and external, are quickly populated, but then become frozen by inactivity; others remain blank; others flourish because they serve a vital need.

Is this a story of success, failure, or the grey zone in between?

The point is this. Successful publishing in new media such as blogs and wikis requires that these tools serve a real purpose and that their contributors make a consistent, steady, and never-ending effort. It also requires that the intended audience understand and regularly use the tools and that, until these new communication channels are well-established, the library vigorously promote them because there is a real danger that, if you build it, they will not come.

Some staff will blog their hearts out regardless of external reinforcement, but many will need to have their work acknowledged in some meaningful way, such as at evaluation, promotion, and tenure decision points. Easily understandable feedback about tool use, such as good blog-specific or wiki-specific log analysis, is important as well to give writers the sense that they are being read and to help them tailor their message to their audience.

On the user side, it does little good to say "Here’s my RSS feed" to a faculty member who doesn’t know what RSS is and couldn’t care less. Of course, some will be hip to RSS, but that may not be the majority. If the library wants RSS feeds to become part of a faculty member’s daily workflow, it is going to have to give that faculty member a good reason for it to be so, such as significant, identified RSS feed content in the faculty member’s field. Then, it is going to have to help the faculty member with the RSS transition by pointing out good RSS readers, providing tactful instruction, and offering ongoing assistance.

In spite of the feel-good glow of early success, it may be prudent not to declare victory too soon after making the leap into a major new technology. It’s a real accomplishment, but dealing with technical puzzles is often not the hardest part. The world of computers and code is a relatively ordered and predictable one; the world of humans is far more complex and unpredictable.

The real test of a new technology is in the long run: Is the innovation needed, viable, and sustainable? Major new technologies often require significant ongoing organizational commitments and a willingness to measure success and failure with objectivity and to take corrective action as required. For participative technologies such as Library 2.0 and institutional repositories, it requires motivating users as well as staff to make behavioral changes that persist long after the excitement of the new wears off.

Digital Preservation via Emulation at Koninklijke Bibliotheek

In a two-year (2005-2007) joint project with Nationaal Archief of the Netherlands, Koninklijke Bibliotheek is developing an emulation system that will allow digital objects in outmoded formats to be utilized in their original form. Regarding the emulation approach, the Koninklijke Bibliotheek says:

Emulation is difficult, which is the main reason why it is not applied on a large scale. Developing an emulator is complex and time-consuming, especially because the emulated environment must appear authentic and must function accurately as well. When future users are interested in the contents of a file, migration remains the better option. When it is the authentic look and feel and functionality of a file they are after, emulation is worth the effort. This can be the case for PDF documents or websites. For multimedia applications, emulation is in fact the only suitable permanent access strategy.

J. R. van der Hoeven and H. van Wijngaarden’s paper "Modular Emulation as a Long-Term Preservation Strategy for Digital Objects" provides an overview of the emulation approach.

In a related development, a message to padiforum-l on 11/17/06 by Remco Verdegem of the Nationaal Archief of the Netherlands reported on a recent Emulation Expert Meeting, which issued a statement noting the following advantages of emulation for digital preservation purposes:

  • It preserves and permits access to each digital artifact in its original form and format; it may be the only viable approach to preserving digital artifacts that have significant executable and/or interactive behavior.
  • It can preserve digital artifacts of any form or format by saving the original software environments that were used to render those artifacts. A single emulator can preserve artifacts in a vast range of arbitrary formats without the need to understand those formats, and it can preserve huge corpuses without ever requiring conversion or any other processing of individual artifacts.
  • It enables the future generation of surrogate versions of digital artifacts directly from their original forms, thereby avoiding the cumulative corruption that would result from generating each such future surrogate from the previous one.
  • If all emulators are written to run on a stable, thoroughly-specified "emulation virtual machine" (EVM) platform and that virtual machine can be implemented on any future computer, then all emulators can be run indefinitely.
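The last point describes a layered architecture: emulators target a small, stable virtual-machine specification, and only that layer is ever re-implemented on new hardware. Here is a rough sketch of that layering (purely illustrative; the meeting statement does not define an actual EVM interface, and the class and method names below are hypothetical):

```python
# Illustrative sketch of the EVM layering; the interface and names are invented.
from abc import ABC, abstractmethod

class EmulationVirtualMachine(ABC):
    """The stable, thoroughly specified layer that is re-implemented on each
    future host platform."""
    @abstractmethod
    def read_artifact(self, path: str) -> bytes: ...
    @abstractmethod
    def display_text(self, text: str) -> None: ...

class TodayHostEVM(EmulationVirtualMachine):
    """One host implementation; a future host would supply another, and no
    emulator code would need to change."""
    def read_artifact(self, path: str) -> bytes:
        with open(path, "rb") as f:
            return f.read()
    def display_text(self, text: str) -> None:
        print(text)

class LegacyFormatEmulator:
    """A toy 'emulator' written only against EVM primitives.  A real emulator
    would recreate the original software environment; this one just reports
    what it read, to keep the sketch self-contained."""
    def __init__(self, evm: EmulationVirtualMachine) -> None:
        self.evm = evm
    def render(self, artifact_path: str) -> None:
        raw = self.evm.read_artifact(artifact_path)
        self.evm.display_text(f"{artifact_path}: {len(raw)} bytes rendered via EVM")

if __name__ == "__main__":
    with open("report_1987.dat", "wb") as f:      # stand-in for a preserved artifact
        f.write(b"\x1a\x1fWordStar-era bytes")
    LegacyFormatEmulator(TodayHostEVM()).render("report_1987.dat")
```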

OAI’s Object Reuse and Exchange Initiative

The Open Archives Initiative has announced its Object Reuse and Exchange (ORE) initiative:

Object Reuse and Exchange (ORE) will develop specifications that allow distributed repositories to exchange information about their constituent digital objects. These specifications will include approaches for representing digital objects and repository services that facilitate access and ingest of these representations. The specifications will enable a new generation of cross-repository services that leverage the intrinsic value of digital objects beyond the borders of hosting repositories. . . . its real importance lies in the potential for these distributed repositories and their contained objects to act as the foundation of a new digitally-based scholarly communication framework. Such a framework would permit fluid reuse, refactoring, and aggregation of scholarly digital objects and their constituent parts—including text, images, data, and software. This framework would include new forms of citation, allow the creation of virtual collections of objects regardless of their location, and facilitate new workflows that add value to scholarly objects by distributed registration, certification, peer review, and preservation services. Although scholarly communication is the motivating application, we imagine that the specifications developed by ORE may extend to other domains.

OAI-ORE is being funded by the Andrew W. Mellon Foundation for a two-year period.

Presentations from the Augmenting Interoperability across Scholarly Repositories meeting are a good source of further information about the thinking behind the initiative, as is the "Pathways: Augmenting Interoperability across Scholarly Repositories" preprint.

Forget RL, Try an Avatar Instead

Real life (RL) is so 20th century. Virtual worlds are where it’s at. At least, that’s what readers of BusinessWeek‘s recent "My Virtual Life" article by Robert D. Hof may quickly come to believe.

You may think that virtual worlds are just kids’ stuff. Tell that to Anshe Chung, who has made over $250,000 buying and renting virtual real estate in Linden Lab’s Second Life. Or Chris Mead, whose Second Life couples avatars earn him a cool $90,000 per year. Or the roughly 170,000 Second Life users who spent about $5 million in real dollars on virtual stuff in January 2006.

How about this? For all virtual worlds, IGE Ltd. estimates that users spent over $1 billion in real dollars on virtual stuff last year.

While most users may be buying virtual clothes, land, and entertainment and other services, conventional companies are exploring how to use virtual worlds for training, meetings, and other purposes, plus trying to snag regular users’ interest with offerings such as Wells Fargo’s Stagecoach Island.

For the library slant on Second Life, try the Second Life Library 2.0 blog and don’t miss the Alliance Second Life Library 2.0 introduction on 5/31/06 from 2:00 PM-3:30 PM. And don’t forget to browse the Second Life Library 2.0 image pool at Flickr.

Oh, brave new world that has such avatars in it!

Source: Hof, Robert D. "My Virtual Life." BusinessWeek, 1 May 2006, 72-82.

Microsoft’s Windows Live Academic Search

Microsoft will be releasing Windows Live Academic Search shortly (I was recently told Wednesday; the blog buzz is saying tomorrow).

As is typical with such software projects, the team is doing some last minute tweaking before release. So, I won’t try to describe the system in any detail at this point, except to say that it integrates access to published articles with e-prints and other open access materials, it provides a reference export capability, there’s a cool optional two-pane view (short bibliographic information on the left; full bibliographic information and abstract on the right), and it supports search "macros" (user-written search programs).

What I will say is this: Microsoft made a real effort to get significant, honest input from the librarian and publisher communities during the development process. I know, because, now that the nondisclosure agreement has been lifted, I can say that I was one of the librarians who provided such input on an unpaid basis. I was very impressed by how carefully the development team listened to what we had to say, how sharp and energetic they were, how they really got the Web 2.0 concept, and how deeply committed they were to creating the best product possible. Having read Microserfs, I had a very different mental picture of Microsoft than the reality I encountered.

Needless to say, there were lively exchanges of views between librarians and publishers when open access issues were touched upon. My impression is that the team listened to both sides and tried to find the happy middle ground.

When it’s released, Windows Live Academic Search won’t be the perfect answer to your open access search engine dreams (what system is?), and Microsoft knows that there are missing pieces. But I think it will give Google Scholar a run for its money. I, for one, heartily welcome it, and I think it’s a good base to build upon, especially if Microsoft continues to solicit and seriously consider candid feedback from the library and publisher communities (and it appears that it will).

Customers Welcome RFID-Enabled Cards. . . with Hammers and Microwave Ovens

The Wall Street Journal reports that customers lack enthusiasm for RFID credit cards due to privacy and fraud concerns. In fact, they are devising novel ways to disable RFID chips, including using hammers and microwave ovens to smash or fry them. FoeBud, a German digital rights group, sells a variety of devices to detect or disable the chips. Sensing a hot market, some companies have joined the bandwagon with new products (e.g., RFIDwasher) that do the job more safely than a microwave oven, which can be a fire hazard when used for RFID frying. Those who don’t want to tamper with their RFID cards can buy shielded wallets and passport cases from DIFRWEAR that block signals when closed.

As libraries begin to embrace RFID technology, these concerns from the credit card sector may be worth watching, and they may give librarians pause.

Source: Warren, Susan. "Why Some People Put These Credit Cards in the Microwave." The Wall Street Journal, 10 April 2006, A1, A16.

Bar Code 2.0

Now you can store a 20-second video, viewable on a cell phone, in a colored bar code the size of a postage stamp. Or, if the cell phone is connected to the Internet, the bar code can launch a URL. The user snaps a picture of the bar code to use it. Content Idea of Asia invented this new bar code technology, which can store 600 KB, and plans to offer it later this year.

Source: Hall, Kenji. "The Bar Code Learns Some Snazzy New Tricks." BusinessWeek, 3 April 2006, 113.

Gary Flake’s "Internet Singularity"

Dr. Gary William Flake, Microsoft technical fellow, gave a compelling and lively presentation at SearchChamps V4 entitled "How I Learned to Stop Worrying and Love the Imminent Internet Singularity."

Flake’s "Internet Singularity," is "the idea that a deeper and tighter coupling between the online and offline worlds will accelerate science, business, society, and self-actualization."

His PowerPoint presentation is text heavy enough that you should be able to follow his argument fairly well. (Ironically, he had apparently received some friendly criticism from colleagues about the very wordiness of the PowerPoint that allows it to stand alone.)

I’m not going to try to recap his presentation here. Rather, I urge you to read it, and I’ll discuss a missing factor from his model that may, to some extent, act as a brake on the type of synergistic technical progress that he envisions.

That factor is the equally accelerating growth of what Lawrence Lessig calls the "permission culture," which is "a culture in which creators get to create only with the permission of the powerful, or of creators from the past."

Lessig discusses this topic with exceptional clarity in his book Free Culture: How Big Media Uses Technology and the Law to Lock Down Culture and Control Creativity (HTML, PDF, or printed book; Lessig’s book is under an Attribution-NonCommercial 1.0 License).

Lessig is a Stanford law professor, but Free Culture is not a dry legal treatise about copyright law. Rather, it is a carefully argued, highly readable, and impassioned plea that society needs to reexamine the radical shift that has occurred in legal thinking about the mission and nature of copyright since the late 19th century, especially since there are other societal factors that heighten the effect of this shift.

Lessig describes the current copyright situation as follows:

For the first time in our tradition, the ordinary ways in which individuals create and share culture fall within the reach of the regulation of the law, which has expanded to draw within its control a vast amount of culture and creativity that it never reached before. The technology that preserved the balance of our history—between uses of our culture that were free and uses of our culture that were only upon permission—has been undone. The consequence is that we are less and less a free culture, more and more a permission culture.

How did we get here? Lessig traces the following major changes:

In 1790, the law looked like this:

                PUBLISH     TRANSFORM
Commercial      ©           Free
Noncommercial   Free        Free

The act of publishing a map, chart, and book was regulated by copyright law. Nothing else was. Transformations were free. And as copyright attached only with registration, and only those who intended to benefit commercially would register, copying through publishing of noncommercial work was also free.

By the end of the nineteenth century, the law had changed to this:

                PUBLISH     TRANSFORM
Commercial      ©           ©
Noncommercial   Free        Free

Derivative works were now regulated by copyright law—if published, which again, given the economics of publishing at the time, means if offered commercially. But noncommercial publishing and transformation were still essentially free.

In 1909 the law changed to regulate copies, not publishing, and after this change, the scope of the law was tied to technology. As the technology of copying became more prevalent, the reach of the law expanded. Thus by 1975, as photocopying machines became more common, we could say the law began to look like this:

                PUBLISH     TRANSFORM
Commercial      ©           ©
Noncommercial   ©/Free      Free

The law was interpreted to reach noncommercial copying through, say, copy machines, but still much of copying outside of the commercial market remained free. But the consequence of the emergence of digital technologies, especially in the context of a digital network, means that the law now looks like this:

                PUBLISH     TRANSFORM
Commercial      ©           ©
Noncommercial   ©           ©

Lessig points out one of the ironies of copyright law’s development during the last few decades: the entertainment industries that have been the driving force behind moving the law from the permissive to the permission side of the spectrum benefited from looser regulation in their infancies:

If "piracy" means using value from someone else’s creative property without permission from that creator—as it is increasingly described today—then every industry affected by copyright today is the product and beneficiary of a certain kind of piracy. Film, records, radio, cable TV. . . . The list is long and could well be expanded. Every generation welcomes the pirates from the last. Every generation—until now.

Returning to Flake’s model, what will the effect of a permission culture be on innovation? Lessig says:

This wildly punitive system of regulation will systematically stifle creativity and innovation. It will protect some industries and some creators, but it will harm industry and creativity generally. Free market and free culture depend upon vibrant competition. Yet the effect of the law today is to stifle just this kind of competition. The effect is to produce an overregulated culture, just as the effect of too much control in the market is to produce an overregulated market.

New knowledge typically builds on old knowledge, new content on old content. "Democratization of content" works if the content is completely new, if it builds on content that is in the public domain or under a Creative Commons (or similar) license, or if fair use can be invoked without it being stopped by DRM or lawsuits. If not, copyright permissions granted or withheld may determine whether a digital "Rip, Mix, Burn" (or, as some say, "Rip, Mix, Learn") meme lives or dies and whether the full transformational potential of digital media is realized.

If you are concerned about the growing restrictions that copyright law imposes on society, I highly recommend that you read Free Culture.

Library 2.0

Walt Crawford has published a mega-issue of Cites & Insights: Crawford at Large on Library 2.0 that presents short essays on the topic by a large number of authors, plus his own view. At Walt’s request, I dashed off the following:

Blogs, tagging, Wikis, oh my! Whether "Library 2.0" truly transforms libraries’ Web presence or not, one thing is certain: the participative aspect of 2.0 represents a fundamental, significant change. Why? Because we will ask patrons to become content creators, not just content consumers. And they will be interacting with each other, not just with the library. This will require what some have called "radical trust," meaning who knows what they will do or say, but the rich rewards of collective effort outweigh the risks. Or so the theory goes. Recent Wikipedia troubles suggest that all is not peaches and cream in Web 2.0 land. But, no one can deny (ok, some can) that participative systems can have enormous utility far beyond what one would have thought. Bugaboos, such as intellectual property violations, libel, and fiction presented as fact, of course, remain, leading to liability and veracity concerns that result in nagging musings over control issues. And it all is mixed in a tasty stew of enormous promise and some potential danger. This is a trend worth keeping a close eye on.
