This paper provides a literature review of academic library collection activities related to the provision of streaming video content in support of higher education curricula. It reviews the budgetary, collection management, licensing, technology, and acquisition processes, and the associated challenges these pose for academic libraries in offering this much-needed content to their patron base. The authors present a recent history of developing academic video collections, the transition to and increased demand for the streaming video format, and the evaluations of funding models and vendor supply models required to make the availability of streaming video content sustainable in the academic environment.
Late Friday, some of the world’s largest record labels, including Sony and Universal Music Group, filed a lawsuit against the Internet Archive and others for the Great 78 Project, a community effort for the preservation, research and discovery of 78 rpm records that are 70 to 120 years old. . . .
Of note, the Great 78 Project has been in operation since 2006 to bring free public access to a largely forgotten but culturally important medium. Through the efforts of dedicated librarians, archivists and sound engineers, we have preserved hundreds of thousands of recordings that are stored on shellac resin, an obsolete and brittle medium. The resulting preserved recordings retain the scratch and pop sounds that are present in the analog artifacts; noise that modern remastering techniques remove.
These preservation recordings are used in teaching and research, including by university professors like Jason Luther of Rowan University, whose students use the Great 78 collection as the basis for researching and writing podcasts for use in class assignments . . . While this mode of access is important, usage is tiny—on average, each recording in the collection is only accessed by one researcher per month.
Images have been historical records since the advent of photography. High-resolution photography laid the groundwork for the digitization process known today and has continued to bolster the cultural heritage sector. An overview of images in the context of library and information science (LIS) is a story of how libraries have adopted aspects of the commercial image production environment, expensive digitization equipment, and considerable information technology infrastructure to provide image resources to their users. This entry [of the Encyclopedia of Libraries, Librarianship, and Information Science] discusses images in the LIS field and considers the concepts, tools, and best practices that surround the prevalence of images as primary sources.
Schol-AR transforms standard scientific PDF articles into fully digital entities, enabling the inclusion of interactive digital media and scientific data directly into manuscripts. Schol-AR is designed specifically to provide full digital integration in a manner that benefits the publishers, authors, and readers of the research community. An introductory video can be seen at https://www.Schol-AR.io/demo/
Currently, there is a divide between A.I. image generators and A.I. text generators, like OpenAI’s ChatGPT. . . . Meta’s tool breaks down that divide with a model that allows for the input and generation of text and images, and allows for the creation of captions (or image-to-text generation) and images with "super-resolution."
Deliverable 13.2 aims to build on our understanding of what it means to support FAIR in the sharing of image data derived from GLAM collections. This report looks at previous efforts by the sector toward FAIR alignment and presents five recommendations designed to be implemented and tested at the DRI that are also broadly applicable to the work of the GLAMs. The recommendations are ultimately a roadmap for the Digital Repository of Ireland (DRI) to follow in improving repository services, as well as a call for continued dialogue around "what is FAIR?" within the cultural heritage research data landscape.
Artificial intelligence (AI) can support metadata creation for images by generating descriptions, titles, and keywords for digital collections in libraries. Many AI options are available, ranging from cloud-based corporate software solutions, including Microsoft Azure Custom Vision and Google Cloud Vision, to open-source, locally hosted software packages. This case study examines the feasibility of deploying the open-source, locally hosted AI software Sheeko and the accuracy of the descriptions generated for images using two of its pre-trained models. The study aims to ascertain whether Sheeko’s AI would be a viable solution for producing metadata in the form of descriptions or titles for digital collections in Libraries and Cultural Resources at the University of Calgary.
We present Imagen Video, a text-conditional video generation system based on a cascade of video diffusion models. Given a text prompt, Imagen Video generates high definition videos using a base video generation model and a sequence of interleaved spatial and temporal video super-resolution models. . . . We find Imagen Video not only capable of generating videos of high fidelity, but also having a high degree of controllability and world knowledge, including the ability to generate diverse videos and text animations in various artistic styles and with 3D object understanding.
Michael Casey has published "Quality Control for Media Digitization Projects" in the Journal of the International Association of Sound and Audiovisual Archives.
Here's an excerpt:
This article defines types of quality control and explores risk management strategies that are broadly applicable to any organization engaged in media digitization for long-term preservation. It uses the quality control system for audio and video digitization that was developed by Indiana University’s Media Digitization and Preservation Initiative to provide examples and illustrations of these ideas.
Julia Kim, Rebecca Fraimow and Erica Titkemeyer have published "Never Best Practices: Born-Digital Audiovisual Preservation" in Code4Lib Journal.
Here's an excerpt:
The sheer conditionality of [born-digital audiovisual file preservation] recommendations leaves practitioners mired in a sea of questions as they struggle to set realistic, adhered-to policies for their institutions. Should files be accepted as-is, or transcoded to an open and standardized format? When is transcoding to a preservation file specification worth the extra storage space and staff time? If transcoding, what are the ideal target specifications? When developing policies and workflows for batch transcoding a variety of different formats, each with different technical specifications, how do you make sure that preservation files maintain all the perceptible, let alone "significant," characteristics of the original files?
This paper presents case studies from three institutions—a university special collections library, a federal government department, and a public broadcasting station—demonstrating how the factors listed above might lead to 'tiered' processing and decision-making around 'good enough' practices for the preservation of born-digital a/v files.