This article is a case study describing the implementation of Islandora 2 to create a public online portal for the discovery, access, and use of archives and special collections materials at the University of Nevada, Las Vegas. The authors explain how the goal of providing users with a unified point of access across diverse data (including finding aids, digital objects, and agents) led to the selection of Islandora 2, and they discuss the benefits and challenges of using this open-source software. They describe the various steps of implementation, including custom development, migration from CONTENTdm, integration with ArchivesSpace, and developing new skills and workflows to use Islandora most effectively. As hindsight always provides additional perspective, the case study also offers reflection on lessons learned since the launch, insights on open-source repository sustainability, and priorities for future development.
Bringing together contributions from practitioners and academics to offer a range of international case studies, this book offers practical solutions for archivists in terms of governance, technologies and processes. It highlights and analyses the cornerstones of the Nordic model of archiving: reliance on standards; powerful regulatory instruments — especially in public sector archiving, including legislation; and collaboration between archivists and government agencies, and among different tiers of central and local government.
One of four open access chapters: "The Nordic Model of Digital Archiving."
Antique, old, and rare books and documents are fragile and vulnerable to many hazards, and preserving them over extended periods is a real challenge. Since ancient times, people have recorded their knowledge in writing, and later generations have collected and stored these records as antique materials. They can be seen in museums, libraries, archives, private households, and other places all over the world. Keeping these antique, old, and rare books and documents in good condition is a challenge for librarians, conservators, preservation administrators, and others responsible for their care. This paper discusses the digital preservation of such a collection held by the Directorate of Historical and Antiquarian Studies (DHAS), Guwahati, Assam, India. DHAS is a wing of the Government of Assam, mandated mainly to collect, preserve, and research historical and antiquarian resources. Its collection is one of the oldest in the region and has served as a study and research centre in Assam since 1928. A special drive was undertaken for the digital preservation of an identified part of the collection, with grant support from the National Archives of India. The paper covers the entire project, from formulating the project proposal to structuring the digital collection, and sequentially discusses the steps in digitizing 241 old and rare books from the main DHAS collection.
A key responsibility for many library publishers is to collaborate with authors to determine the best mechanisms for sharing and publishing research. Librarians are often asked to assist with a wide range of research outputs and publication types, including eBooks, digital humanities (DH) projects, scholarly journals, archival and thematic collections, and community projects. These projects can exist on a variety of platforms, both for-profit and academy-owned. Additionally, over the past decade, more and more academy-owned platforms have been created to support library publishing programs. Library publishers who wish to emphasize open access and open-source publishing can feel overwhelmed by the proliferation of available academy-owned or -affiliated publishing platforms. For many of these platforms, documentation exists but can be difficult to locate and interpret. While experienced users can usually find and evaluate the available resources for a particular platform, this kind of documentation is often less useful to authors and librarians who are just starting a new publishing project and want to determine if a given platform will work for them. Because of the challenges involved in identifying and evaluating the various platforms, we created this comparative crosswalk to help library publishers (and potentially authors) determine which platforms are right for their services and authors’ needs.
When Omeka S appeared as a beta release in 2016, it offered the opportunity for researchers or larger organizations to publish multiple Omeka sites from the same installation. Multisite functionality was and continues to be a major advance for what had become the premier platform for scholarly digital exhibits produced by libraries, museums, researchers, and students. However, while geared to larger institutional contexts, Omeka S poses some user experience challenges on the back end for larger organizations with numerous users creating different sites. These challenges include a "cluttered" effect for many users seeing resources they do not need to access and data integrity challenges due to the possibility of users editing resources that other users need in their current state. The University of Illinois Library, drawing on two local use cases as well as two additional external use cases, developed the Teams module to address these challenges. This article describes the needs leading to the decision to create the module, the project requirement gathering process, and the implementation and ongoing development of Teams. The module and findings are likely to be of interest to other institutions adopting Omeka S but also, more generally, to libraries seeking to contribute successfully to larger open-source initiatives.
Background/Aims: Deidentified individual participant data (IPD) sharing has been implemented in the International Committee of Medical Journal Editors journals since 2017. However, some published clinical trials did not follow the newly implemented policy. This study examines the number of clinical trials that endorsed the IPD sharing policy among top ophthalmology journals.
Method: All original articles published in 2021 in the 10 highest-ranking ophthalmology journals according to the 2020 journal impact factor were included. Clinical trials were identified using the WHO definition of clinical trials. Each article was then thoroughly searched for an IPD sharing statement, either in the manuscript or in the clinical trial registry. We collected the number of published clinical trials that implemented the IPD sharing policy as our primary outcome.
Results: A total of 1852 published articles in the top 10 ophthalmology journals were identified, of which 9.45% were clinical trials. Of these clinical trials, 44% had clinical trial registrations and 49.14% declared IPD sharing statements. Only 42 (48.83%) clinical trials were willing to share IPD, and 5 (10.21%) of these shared IPD via an online repository platform. In terms of sharing period, 37 clinical trials were willing to share immediately after publication, and only 2 specified an end date for the sharing period.
Conclusion: This report shows that fewer than half of the clinical trials in top ophthalmology journals endorsed the IPD sharing policy or registered their trials, even though the policy has been in place for several years. Future updates are necessary as the policy evolves.
This dissertation introduces three primary contributions through publicly deployed systems and datasets. First, we demonstrate how the construction of large-scale cultural heritage datasets using machine learning can answer interdisciplinary questions in library & information science and the humanities (Chapter 2). Second, based on the feedback of users of these cultural heritage datasets, we introduce open faceted search, an extension of faceted search that leverages human-AI interaction affordances to empower users to define their own facets in an open domain fashion (Chapter 3). Third, encountering similar challenges with the deluge of scientific papers, we explore the question of how to improve recommender systems through human-AI interaction and tackle the broad challenge of advice taking for opaque machine learners (Chapter 4).
Many governments have chosen to store their records in the cloud rather than invest in the increased digital infrastructure now required to manage them. . . . Yet, archivists and archival perspectives have not been much involved in public discussion of this change. . . . The shape of the emerging infrastructure underpinning the management of digital communication may well be the most significant lasting feature of the digital environment for societies and their archives. This article discusses why that development requires archival voices in the public square to address it.
Deliverable 13.2 aims to build on our understanding of what it means to support FAIR in the sharing of image data derived from GLAM collections. This report looks at previous efforts by the sector towards FAIR alignment and presents 5 recommendations designed to be implemented and tested at the DRI that are also broadly applicable to the work of the GLAMs. The recommendations are ultimately a roadmap for the Digital Repository of Ireland (DRI) to follow in improving repository services, as well as a call for continued dialogue around "what is FAIR?" within the cultural heritage research data landscape.
Artificial intelligence (AI) can support metadata creation for images by generating descriptions, titles, and keywords for digital collections in libraries. Many AI options are available, ranging from cloud-based corporate software solutions, including Microsoft Azure Custom Vision and Google Cloud Vision, to open-source locally hosted software packages. This case study examines the feasibility of deploying the open-source, locally hosted AI software, Sheeko, and the accuracy of the descriptions generated for images using two of the pre-trained models. The study aims to ascertain if Sheeko’s AI would be a viable solution for producing metadata in the form of descriptions, or titles for digital collections in Libraries and Cultural Resources at the University of Calgary.
The Kentucky Digital Newspaper Program (KDNP) was born out of the University of Kentucky Libraries’ (UKL) work in the National Digital Newspaper Program (NDNP) that began in 2005. In early 2021, a team of specialists at UKL from library systems, digital archives, and metadata management was formed to explore a new approach to searching this content by leveraging the power of the library services platform (Alma) and discovery system (Primo VE) licensed from Ex Libris. The result was the creation of a dedicated Primo VE search interface that would include KDNP content as well as all Kentucky newspapers held on microfilm in the UKL system. This article will describe the journey from the question of whether we could harness the power of Alma and Primo VE to display KDNP content, to the methodology used in creating a new dedicated search interface that can be replicated to create custom search interfaces of your own.
"Anyone can download, reuse, and remix these images at any time — for free under the Creative Commons Zero (CC0) license," write My Modern Met’s Jessica Stewart and Madeleine Muzdakis. "A dive into the 3D records shows everything from CAD models of the Apollo 11 command module to Horatio Greenough’s 1840 sculpture of George Washington."
The University of Oregon and Oregon State University are proud to announce the launch of Oregon Digital, a cultural heritage repository that brings together more than 500,000 digitized works from both universities, including unique digitized and born-digital collections. This collaborative effort includes historic and modern photographs, manuscripts, publications, and more.
Archival description is often misunderstood by librarians, administrators, and technologists in ways that have seriously hindered the development of access and discovery systems. It is not widely understood that there is currently no off-the-shelf system that provides discovery and access to digital materials using archival methods. This article is an overview of the core differences between archival and bibliographic description, and discusses how to design access systems for born-digital and digitized materials using the affordances of archival metadata. It offers a custom indexer as a working example that adds the full text of digital content to an Arclight instance and argues that the extensibility of archival description makes it a perfect match for automated description. Finally, it argues that building archives-first discovery systems allows us to use our descriptive labor more thoughtfully, better enable digitization on demand, and overall make a larger volume of cultural heritage materials available online.
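The custom indexer described above can be pictured as a step that merges extracted full text into the Solr document for an archival component before it is posted to the Arclight index. The sketch below is a minimal, hypothetical illustration of that idea; the field names (`title_ssm`, `parent_ssim`, `full_text_tesim`) are illustrative Solr-style dynamic fields, not Arclight's canonical schema, and the identifiers are invented.

```python
# Hypothetical sketch of an "archives-first" indexing step: the archival
# description (title, position in the finding-aid hierarchy) is the
# primary record, and extracted full text is layered on top of it.
# Field names here are assumptions, not Arclight's actual schema.

def build_solr_doc(component_id, title, parent_ids, full_text=None):
    """Build a Solr document dict for one archival component.

    parent_ids preserves the finding-aid hierarchy so a full-text hit
    can still be presented in its archival context.
    """
    doc = {
        "id": component_id,
        "title_ssm": [title],
        "parent_ssim": list(parent_ids),
    }
    if full_text:
        # Index extracted/OCR text alongside the description so keyword
        # search reaches inside digitized and born-digital objects.
        doc["full_text_tesim"] = [full_text]
    return doc


# Example: a component with digitized correspondence attached.
doc = build_solr_doc(
    "aspace_ref123",
    "Correspondence, 1921-1925",
    ["collection-1", "series-2"],
    full_text="Dear Senator, regarding the appropriation...",
)
```

Because the description is extensible, the same document can later absorb automated description (entity tags, summaries) as additional fields without restructuring the index.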
In late 2021, the Library of Congress adopted several exemptions to the Digital Millennium Copyright Act (DMCA) provision prohibiting circumvention of technological measures that control access to copyrighted works. In other words, they created a set of exceptions to the general legal rule against cracking digital locks on things like DVDs, software, and video games. The exemptions are set out in regulations published by the Copyright Office. They went into effect on October 28, 2021 and last until October 28, 2024. This guide is intended to help preservationists determine whether their activities are protected by the new exemptions. It includes important updates to the first edition to reflect changes in the rule to allow offsite access to non-game software, along with a few other technical changes.
Access to data is seen as a key priority today. Yet, the vast majority of digital cultural data preserved in archives is inaccessible due to privacy, copyright or technical issues. Emails and other born-digital collections are often uncatalogued, unfindable and unusable. In the case of documents that originated in paper format before being digitised, copyright can be a major obstacle to access. To solve the problem of access to digital archives, cross-disciplinary collaborations are absolutely essential. The big challenges of our time—from global warming to social inequalities—cannot be solved within a single discipline. The same applies to the challenge of "dark" archives closed to users. We cannot expect archivists or digital humanists to find a magical solution that will instantly make digital records more accessible. Instead, we need to set up collaborations across disciplines that seldom talk to each other. Based on 21 interviews with 26 archivists, librarians and other professionals in cultural institutions, we identify key obstacles to making digitised and born-digital collections more accessible to users. We outline current levels of access to a wide range of collections in various cultural organisations, including no access at all and limited access (for example, when users are required to travel on-site to consult documents). We suggest possible solutions to the problems of access—including the ethical use of Artificial Intelligence to unlock “dark” archives inaccessible to users. Finally, we propose the creation of a global user community who would participate in decisions on access to digital collections.
Policy makers produce digital records on a daily basis. A selection of records is then preserved in archival repositories. However, getting access to these archival materials is extremely complicated for many reasons—including data protection, sensitivity, national security, and copyright. Artificial Intelligence (AI) can be applied to archives to make them more accessible, but it is still at an experimental stage. While skills gaps contribute to keeping archives ‘dark’, it is also essential to examine issues of mistrust and miscommunication. This article argues that although civil servants, archivists, and academics have similar professional principles articulated through professional codes of ethics, these are not often communicated to each other. This lack of communication leads to feelings of mistrust between stakeholders. Mistrust of technology also contributes to the barriers to effective implementation of AI tools. Therefore, we propose that surfacing the shared professional ethics between stakeholders can contribute to deeper collaborations between humans. In turn, these collaborations can lead to the building of trust in AI systems and tools. The research is informed by semi-structured interviews with thirty government professionals, archivists, historians, digital humanists, and computer scientists. Previous research has largely focused on preservation of digital records, rather than access to these records, and on archivists rather than records creators such as government professionals. This article is the first to examine the application of AI to digital archives as an issue that requires trust and collaboration across the entire archival circle (from record creators to archivists, and from archivists to users).
Four main themes were identified: fitting AI into day-to-day practice; the responsible use of (AI) technology; managing expectations (about AI adoption); and bias associated with the use of AI. The analysis suggests that AI adoption, combined with hindsight about digitisation as a disruptive technology, might provide archival practitioners with a framework for redefining, advocating for, and outlining digital archival expertise.