The issue that we identified as the biggest gap today is the perceived need for a secure digital identity for legitimate scholars, to help editors triage submissions into more and less trusted categories. We see opportunities for researcher identifiers to serve as the hub for much richer information about digital identity, in part by allowing publishers and other parties to submit markers of identity into identifier records. For example, publishers that have processed APC transactions using credit cards hold substantial signals of verified identity, as do universities that have securely linked an email address.
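As a purely illustrative sketch of what such submitted markers of identity might look like, consider the following Python fragment; the field names, marker types, and identifier are hypothetical and do not reflect any registry's actual schema.

```python
from datetime import datetime, timezone

# Purely illustrative: "markers of identity" that a publisher or a university
# might assert against a researcher-identifier record. Field names and the
# identifier below are hypothetical, not any registry's actual schema.
markers = [
    {
        "researcher_id": "https://orcid.org/0000-0000-0000-0000",  # placeholder
        "asserted_by": "Example Publisher, Inc.",
        "marker_type": "verified-payment-card",  # e.g., a processed APC
        "asserted_at": datetime.now(timezone.utc).isoformat(),
    },
    {
        "researcher_id": "https://orcid.org/0000-0000-0000-0000",
        "asserted_by": "Example University",
        "marker_type": "verified-institutional-email",
        "asserted_at": datetime.now(timezone.utc).isoformat(),
    },
]

# A naive triage heuristic: independent assertions from distinct parties
# move a submission into a more trusted queue.
distinct_asserters = {m["asserted_by"] for m in markers}
tier = "more-trusted" if len(distinct_asserters) >= 2 else "less-trusted"
print(tier)
```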
The boundaries of the scholarly record represent another aspect of research integrity that requires new forms of infrastructure. Of course, the record has never had absolute boundaries. But in a subscription landscape, libraries played an important role in establishing the metes and bounds of the scholarly record (and what would be preserved over time) through their selection decisions. In a gold or diamond open access environment, libraries may have a reduced role, and so other forms of boundary-setting may be required. Journal rankings may increasingly serve to set the boundaries of the scholarly record, although whether that is the right form of shared infrastructure, or whether it has the right governance and business model to allow it to serve this role without fear or favor, is not yet settled.
This report outlines IOI’s initial attempt at a framework for understanding open infrastructure for research and scholarship. For this report, we examined a body of literature spanning anthropology, scholarly communications, international development studies, science and technology studies, and infrastructure studies.
A data commons is a cloud-based data platform with a governance structure that allows a community to manage, analyze and share its data. Data commons provide a research community with the ability to manage and analyze large datasets using the elastic scalability provided by cloud computing and to share data securely and compliantly, and, in this way, accelerate the pace of research. Over the past decade, a number of data commons have been developed and we discuss some of the lessons learned from this effort.
Persistent identifiers are applied to an ever-increasing variety of research objects, including software, samples, models, people, instruments, grants, and projects, and there is a growing need to apply identifiers at a finer and finer granularity. Unfortunately, the systems developed over two decades ago to manage identifiers and the metadata describing the identified objects no longer scale. Communities working with physical samples have grappled for many years with the three challenges of increasing volume, variety, and variability of identified objects. To address these challenges, the IGSN 2040 project explored how metadata and catalogues for physical samples could be shared at the scale of billions of samples across an ever-growing variety of users and disciplines. In this paper, we focus on how to scale identifiers and their describing metadata to billions of objects, and on who the actors involved with this system are. Our analysis of these requirements resulted in the definition of a minimum viable product and the design of an architecture that not only addresses the challenges of increasing volume and variety but, more importantly, is easy to implement because it reuses commonly used Web components. Our solution is based on a Web architectural model that utilises Schema.org, JSON-LD, and sitemaps. Applying these widely used Web architectural patterns allows us not only to handle increasing variety but also to achieve better compliance with the FAIR Guiding Principles.
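To illustrate the kind of architecture described above, here is a minimal sketch of a sample description expressed as Schema.org-flavoured JSON-LD, written as a Python dictionary. The identifier value, URLs, and property choices are illustrative assumptions, not the IGSN 2040 metadata profile itself.

```python
import json

# A minimal JSON-LD description of a physical sample using Schema.org terms.
# The IGSN value, URLs, and property choices are illustrative assumptions,
# not the actual IGSN 2040 metadata profile.
sample_record = {
    "@context": "https://schema.org",
    "@type": "Thing",  # communities may choose a more specific type
    "@id": "https://example.org/samples/XXXXXXX",
    "identifier": {
        "@type": "PropertyValue",
        "propertyID": "IGSN",
        "value": "XXXXXXX",
    },
    "name": "Basalt core sample (hypothetical)",
    "description": "Example record for illustration only.",
}

print(json.dumps(sample_record, indent=2))

# Aggregators discover such records through ordinary sitemaps, e.g.:
#   <url><loc>https://example.org/samples/XXXXXXX</loc></url>
```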
We will discuss seven major open data platforms: (1) CKAN, (2) DKAN, (3) Socrata, (4) OpenDataSoft, (5) GitHub, (6) Google datasets, and (7) Kaggle. We will evaluate the technological components, techniques, features, methods, and visualizations offered by each tool. We also consider why these platforms are important to users such as providers, curators, and end-users, and what key options each platform offers for publishing open data.
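As a concrete example of how such platforms expose their catalogues programmatically, here is a minimal sketch that queries a CKAN portal's Action API for matching datasets. The portal URL and search term are placeholders; any CKAN instance exposes the same route.

```python
import requests

# Query a CKAN portal's Action API for datasets matching a keyword.
# The portal URL and query are placeholder assumptions.
CKAN_URL = "https://demo.ckan.org/api/3/action/package_search"

resp = requests.get(CKAN_URL, params={"q": "climate", "rows": 5}, timeout=30)
resp.raise_for_status()
result = resp.json()["result"]

print(f"{result['count']} matching datasets")
for pkg in result["results"]:
    print("-", pkg["name"], "|", pkg.get("title", ""))
```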
A bit-level object storage system is a foundational building block of long-term digital preservation (LTDP). To achieve the purposes of LTDP, the system must be able to: preserve the authenticity and integrity of the original digital objects; scale up with dramatically increasing demands for preservation storage; mitigate the impact of hardware obsolescence and software ephemerality; replicate digital objects among distributed data centers at different geographical locations; and constantly audit and automatically recover from compromised states. . . . In this paper, we present OpenStack Swift, an open-source, mature, and widely accepted cloud platform, as a practical and proven solution with a case study at the University of Alberta Library. We emphasize the implementation, application, cost analysis, and maintenance of the system, with the aim of contributing to the community an exceedingly robust, highly scalable, self-healing, and comparatively cost-effective bit-level object storage system for long-term digital preservation.
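As a sketch of the integrity-auditing workflow that excerpt emphasizes, the following Python fragment uploads an object to Swift and checks its fixity against the ETag the cluster returns. The authentication endpoint and credentials are placeholders for a real deployment.

```python
import hashlib
from swiftclient.client import Connection  # pip install python-swiftclient

# Upload an object to Swift and verify its fixity against the ETag that the
# cluster returns (an MD5 of the stored bytes). The auth endpoint and
# credentials below are placeholders, not a real deployment.
conn = Connection(
    authurl="https://swift.example.org/auth/v1.0",  # assumed endpoint
    user="account:user",
    key="secret",
)

conn.put_container("preservation")
data = b"bit-level payload to preserve"
etag = conn.put_object("preservation", "object-001", contents=data)

# A basic audit step: the returned ETag should match our local MD5.
assert etag == hashlib.md5(data).hexdigest(), "fixity check failed"

headers = conn.head_object("preservation", "object-001")
print("stored bytes:", headers["content-length"], "etag:", headers["etag"])
```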
This article advances the thesis that three decades of investments by national and international funders, combined with those of scholars, technologists, librarians, archivists, and their institutions, have resulted in a digital infrastructure in the humanities that is now capable of supporting end-to-end research workflows. . . . The capabilities of the infrastructure remain unevenly distributed within and across disciplines, institutions, and regions. Moreover, the components, including the links between steps in the workflow, are generally far from user-friendly and seamless in operation. Because further refinements and additional capacities are still much needed, the article concludes with a discussion of key priorities for future work.
Michael Berman has published "Research Computing in the Cloud: Leveling the Playing Field" in EDUCAUSE Review.
Here's an excerpt:
The universal availability of commodity cloud services and high-speed networks can eliminate the requirement that departments must have local HPC resources. The infrastructure available from large cloud providers such as AWS dwarfs and outperforms all but the largest and most-specialized supercomputing facilities. . . .
Moving large data sets on commodity networks, or even on regional research and education networks, simply doesn't work well for hundreds of terabytes or petabytes of data, which is the scale required by modern researchers in many fields. . . .
To begin to address these issues, the Pacific Research Platform (PRP), a collaboration among research universities and CENIC (operator of the CalREN network serving California), has been funded by the National Science Foundation to support the streaming of "elephant flows."
The European Commission has released Implementation Roadmap for the European Open Science Cloud.
Here's an excerpt from the announcement:
Overall, the document presents the results and available evidence from an extensive and conclusive consultation process that started with the publication of the Communication: European Cloud initiative (COM(2016)178) in April 2016.
The consultation upheld the intervention logic presented in the Communication, to create a fit-for-purpose pan-European federation of research data infrastructures, with a view to moving from the current fragmentation to a situation where data is easy to store, find, share, and re-use.
On the basis of the consultation, the Implementation Roadmap gives an overview of six action lines for the implementation of the EOSC:
a) architecture, b) data, c) services, d) access & interfaces, e) rules and f) governance.
Kyle Chard et al. have published "The Modern Research Data Portal: A Design Pattern for Networked, Data-Intensive Science" in PeerJ.
Here's an excerpt:
In this article, we first define the problems that research data portals address, introduce the legacy approach, and examine its limitations. We then introduce the MRDP design pattern and describe its realization via the integration of two elements: Science DMZs (Dart et al., 2013) (high-performance network enclaves that connect large-scale data servers directly to high-speed networks) and cloud-based data management and authentication services such as those provided by Globus (Chard, Tuecke & Foster, 2014). We then outline a reference implementation of the MRDP design pattern, also provided in its entirety on the companion web site, https://docs.globus.org/mrdp, that the reader can study and, if they so desire, deploy and adapt to build their own high-performance research data portal. We also review various deployments to show how the MRDP approach has been applied in practice: the National Center for Atmospheric Research's Research Data Archive, which provides high-speed data delivery to thousands of geoscientists; the Sanger Imputation Service, which provides online analysis of user-provided genomic data; the Globus data publication service, which provides interactive data publication and discovery; and the DMagic data sharing system for data distribution from light sources. We conclude with a discussion of related technologies and a summary.
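To make the pattern concrete, here is a minimal sketch, in Python, of the data-management element of an MRDP-style portal: submitting a managed transfer through the Globus SDK. The access token and endpoint UUIDs are placeholders; a production portal would obtain credentials through a Globus Auth flow rather than hard-coding them.

```python
import globus_sdk  # pip install globus-sdk

# Sketch of the data-management half of the MRDP pattern: an automated
# Globus transfer from a portal's Science DMZ endpoint to a user's endpoint.
TOKEN = "..."  # Globus transfer access token (placeholder)
SOURCE_ENDPOINT = "uuid-of-portal-endpoint"       # assumed
DEST_ENDPOINT = "uuid-of-destination-endpoint"    # assumed

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TOKEN)
)

tdata = globus_sdk.TransferData(
    tc, SOURCE_ENDPOINT, DEST_ENDPOINT, label="portal data delivery"
)
tdata.add_item("/data/results/run-42/", "/~/run-42/", recursive=True)

task = tc.submit_transfer(tdata)
print("submitted transfer task:", task["task_id"])
```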
CSDH/SCHN has released the "CSDH/SCHN Cyberinfrastructure Conversations Summary."
Here's an excerpt:
This is a high-level summary of the outcome of a series of conversations regarding the CFI Cyberinfrastructure Initiative among Canadian Digital Humanists. The conversations emerged from CSDH/SCHN consultations that began in the Spring of 2014. The document tries to reflect the priorities and areas of emphasis that have emerged from these discussions, and suggests several areas of focus for broad-based collaborative cyberinfrastructure that would serve the needs of many in the digital humanities research community. The diversity of work in the digital humanities makes it impossible to mention every need, but in the view of the CSDH executive, this summary covers a number of pressing needs from a range of research groups across the country, and balances the need to serve existing researchers with that of expanding access to important datasets and cyberinfrastructure to leading humanities researchers who are experimenting with advanced research computing.
Mary E. Piorun has self-archived her dissertation "E-Science as a Catalyst for Transformational Change in University Research Libraries."
Here's an excerpt:
Changes in how research is conducted, from the growth of e-science to the emergence of big data, have led to new opportunities for librarians to become involved in the creation and management of research data, even as the duties and responsibilities of university libraries continue to evolve. This study examines those roles related to e-science while exploring the concept of transformational change and leadership issues in bringing about such a change. Using the framework established by Levy and Merry for first- and second-order change, four case studies are developed of libraries whose institutions are members of the Association of Research Libraries (ARL).
Christopher J. Shaffer has published "The Role of the Library in the Research Enterprise" in the latest issue of the Journal of eScience Librarianship.
Here's an excerpt:
Libraries have provided services to researchers for many years. Changes in technology and new publishing models provide opportunities for libraries to be more involved in the research enterprise. Within this article, the author reviews traditional library services, briefly describes the eScience and publishing landscape as it relates to libraries, and explores possible library programs in support of research. Many of the new opportunities require new partnerships, both within the institution and externally.
DuraSpace has released a recording of its Fit for Purpose: Developing Business Cases for New Services in Research Libraries webinar.
Here's an excerpt from the announcement:
Mike Furlough, Associate Dean of Research and Scholarly Communications, Penn State, and David Minor, Chronopolis Program Manager and Director of Digital Preservation Initiatives, University of California San Diego Library/SDSC, presented "Fit for Purpose: Developing Business Cases for New Services in Research Libraries" to participants in the DuraSpace/ARL/DLF E-Science Institute. In this webinar, the presenters discussed the CLIR/DLF-funded research project Fit for Purpose, which aims to present a structured, disciplined approach for making decisions about creating and maintaining new services in research libraries.
The JISC Observatory has released a draft for public comment of TechWatch: Preparing for Data-driven Infrastructure.
Here's an excerpt:
This report provides an overview of some concepts and approaches as well as tools, and can be used to help organisational planning. Specifically, this report:
- describes data-centric architectures;
- gives some examples of how data are already shared between organisations and discusses this from a data-centric perspective (a minimal sketch of one such approach follows this list);
- introduces some of the key tools and technologies that can support data-centric architectures as well as some new models of data management, including opportunities to use "cloud" services;
- concludes with a look at the direction of travel and lists the sources cited in a References section.
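As a minimal sketch of the cloud-based sharing approach referenced in the list above, the following Python fragment publishes a file to object storage and generates a time-limited download link for a partner organisation. The bucket, key, and file names are placeholders, and S3 here stands in for any comparable object store.

```python
import boto3  # pip install boto3

# One common pattern for inter-organisational data sharing via "cloud"
# services: publish a dataset to object storage and hand partners a
# time-limited, pre-signed download URL. Bucket and key are placeholders.
s3 = boto3.client("s3")

s3.upload_file("results.csv", "example-shared-data", "2024/results.csv")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-shared-data", "Key": "2024/results.csv"},
    ExpiresIn=7 * 24 * 3600,  # link valid for one week
)
print(url)
```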
The first issue's "full-length papers" are:
- "DataONE: Facilitating eScience through Collaboration"
- "An Assessment of Needed Competencies to Promote the Data Curation and Management Librarianship of Health Sciences and Science and Technology Librarians in New England"
- "Tiers of Research Data Support Services"
The Community Capability Model for Data-Intensive Research project has released a consultation draft of the Community Capability Model Framework.
Here's an excerpt:
The Community Capability Model Framework is a tool developed by UKOLN, University of Bath, and Microsoft Research to assist institutions, research funders and researchers in growing the capability of their communities to perform data-intensive research by
- profiling the current readiness or capability of the community,
- indicating priority areas for change and investment, and
- developing roadmaps for achieving a target state of readiness.
The Framework comprises eight capability factors representing human, technical, and environmental issues. Within each factor is a series of community characteristics that are relevant for determining the capability or readiness of that community to perform data-intensive research.
Digital Scholarship has released the E-science and Academic Libraries Bibliography. It includes English-language articles, books, editorials, and technical reports that are useful in understanding the broad role of academic libraries in e-science efforts. The scope of this brief selective bibliography is narrow, and it does not cover data curation and research data management issues in libraries in general. Most sources have been published from 2007 through October 18, 2011; however, a limited number of key sources published prior to 2007 are also included. The bibliography includes links to freely available versions of included works, such as e-prints and open access articles.
Anne Agee, Theresa Rowe, Melissa Woo, and David Woods have published "Building Research Cyberinfrastructure at Small/Medium Research Institutions" in EDUCAUSE Quarterly.
Here's an excerpt:
To build a respectable cyberinfrastructure, the IT organizations at small/medium research institutions need to use creativity in discovering the needs of their researchers, setting priorities for support, developing support strategies, funding and implementing cyberinfrastructure, and building partnerships to enhance research support. This article presents the viewpoints of four small-to-medium-sized research universities that have struggled with the issue of providing appropriate cyberinfrastructure support for their research enterprises. All four universities have strategic goals for raising the level of research activity and increasing extramural funding for research.