"The Living Library: A Process-Based Tool for Open Literature Review, Probing the Boundaries of Open Science"


In this paper, we present a new tool for open science research, the Living Library. The Living Library provides an online platform and methodological framework for open, continuous literature reviewing. As a research medium, it explores what openness means in light of the human dimension and interpretive nature of engaging with societal questions. As a tool, the Living Library allows researchers to collectively sort, dynamically interpret and openly discuss the evolving literature on a topic of interest. The interface is built around a timeline along which articles can be filtered, themes with which articles are coded, and an open researcher logbook that documents the development of the library. The first rendition of a Living Library can be found via this link: https://eduvision-living-library.web.app/, and the code to develop your own Living Library can be found via this link: https://github.com/Simon-Dirks/living-library.
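
The linked repository is the reference implementation; purely as an illustration of the core idea described above (articles coded with themes and filtered along a timeline), here is a minimal Python sketch. The class and field names are assumptions made for this example, not taken from the Living Library codebase.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class Article:
        # An article entry in the library, coded with one or more themes.
        title: str
        published: date
        themes: set[str] = field(default_factory=set)

    def filter_articles(articles, start, end, theme=None):
        # Keep articles published within [start, end]; optionally restrict to a theme.
        return [
            a for a in articles
            if start <= a.published <= end and (theme is None or theme in a.themes)
        ]

    library = [
        Article("Example article A", date(2023, 3, 1), {"assessment"}),
        Article("Example article B", date(2024, 6, 15), {"assessment", "equity"}),
    ]
    # Hypothetical theme "equity", restricted to articles published in 2024.
    print(filter_articles(library, date(2024, 1, 1), date(2024, 12, 31), theme="equity"))

In the tool itself, the equivalent operations are exposed interactively through the timeline and theme filters rather than in code.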

https://doi.org/10.1007/s43545-024-00964-z

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Creating a Fully Open Environment for Research Code and Data"


Quantitative research in the social and natural sciences is increasingly dependent on new datasets and forms of code. Making these resources open and accessible is a key aspect of open research and underpins efforts to maintain research integrity. Erika Pastrana explains how Springer Nature developed Nature Computational Science to be fully compliant with open research and data principles.

https://tinyurl.com/7uwdxrrz

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Paywall: "The FAIRification Process for Data Stewardship: A Comprehensive Discourse on the Implementation of the Fair Principles for Data Visibility, Interoperability and Management"


Using a systematic literature review, the study focuses on the implementation of these [FAIR] principles in research data management and their applicability in data repositories and data centres. It highlights the importance of implementing these principles systematically, allowing stakeholders to choose the minimum requirements, and provides a vision for implementing them in data repositories and data centres. The article also highlights the steps in the FAIRification process, which can enhance data interoperability, discovery and reusability.
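
The article treats FAIRification as a process and policy question; as a loose illustration only, a repository could automate a first-pass check of a dataset record against a few FAIR-aligned metadata fields. The field names and their mapping to FAIR principle labels below are assumptions made for this sketch, not requirements drawn from the article.

    # Illustrative FAIR-readiness check for a dataset record. The required
    # fields and their mapping to FAIR principle labels are assumptions.
    REQUIRED_FIELDS = {
        "identifier": "F1: globally unique, persistent identifier (e.g., a DOI)",
        "title": "F2: rich, descriptive metadata",
        "access_url": "A1: retrievable via a standardised protocol",
        "format": "I1: formal, shared representation format",
        "license": "R1.1: clear, accessible data usage licence",
    }

    def fair_gaps(record):
        # Return a hint for every required field that is missing or empty.
        return [hint for key, hint in REQUIRED_FIELDS.items() if not record.get(key)]

    record = {"identifier": "https://doi.org/10.1234/example", "title": "Survey data, 2020-2023"}
    for gap in fair_gaps(record):
        print("Missing:", gap)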

https://doi.org/10.1177/03400352241270692

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"An Analysis of the Impact of Gold Open Access Publications in Computer Science"


There has been concern about the impact of predatory publishers on scientific research for some time. Recently, publishers that might previously have been considered ‘predatory’ have established their bona fides, at least to the extent that they are included in citation impact scores such as the field-weighted citation impact (FWCI). These are sometimes called ‘grey’ publishers (MDPI, Frontiers, Hindawi). In this paper, we show that the citation landscape for these grey publications is significantly different from the mainstream landscape and that affording publications in these venues the same status as publications in mainstream journals may significantly distort metrics such as the FWCI.
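
For context, the field-weighted citation impact mentioned here is conventionally defined (in Scopus/SciVal; the definition is not restated in the abstract) as the ratio of the citations a publication actually receives to the citations expected for publications of the same field, document type, and publication year:

    \[
    \mathrm{FWCI} \;=\; \frac{c}{\bar{c}_{\text{field, type, year}}}
    \]

where c is the publication's citation count and the denominator is the average citation count of comparable publications over the same citation window. A paper with 10 citations whose comparison set averages 4 therefore has an FWCI of 2.5, and a value of 1.0 means the paper is cited exactly at the world average for comparable work. That dependence on a comparison baseline is what makes the metric sensitive to how 'grey' venues are treated.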

https://arxiv.org/abs/2408.10262

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Paywall: "Research on the Generation Mechanism and Action Mechanism of Scientific Data Reuse Behavior"


Specifically, this study takes scientific data reuse attitudes as its starting point, discussing the factors that influence researchers’ scientific data reuse attitudes and the extent to which these factors influence scientific data reuse behaviors. It further explores the impact of scientific data reuse behavior on research and innovation performance and the moderating effect of scientific data services on scientific data reuse behavior.

https://doi.org/10.1016/j.acalib.2024.102921

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Lawmakers Raise New Licensing Concerns over White House Open Access Mandate"


While Republican appropriators in the House have previously tried to entirely block the White House’s open access policy, now appropriators in both chambers of Congress have advanced legislation that would block federal agencies from limiting authors’ ability to choose how to license their work. . . .

The language regarding researcher choice is identical in the House and Senate reports, though the House goes further by advising federal agencies not to “exert broad ‘federal purpose’ authority over peer reviewed articles” or “otherwise force use of an open license.”

House Republicans also propose that the White House be prohibited from using any funding to implement the policy, as they attempted in last year’s legislation.

https://tinyurl.com/46y42ecr

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Unfolding the Downloads of Datasets: A Multifaceted Exploration of Influencing Factors"


Scientific data are essential to advancing scientific knowledge and are increasingly valued as scholarly output. Understanding what drives dataset downloads is crucial for their effective dissemination and reuse. Our study, analysing 55,473 datasets from 69 data repositories, identifies key factors driving dataset downloads, focusing on interpretability, reliability, and accessibility. We find that while lengthy descriptive texts can deter users due to complexity and time requirements, readability boosts a dataset’s appeal. Reliability, evidenced by factors like institutional reputation and citation counts of related papers, also significantly increases a dataset’s attractiveness and usage. Additionally, our research shows that open access to datasets increases their downloads and amplifies the importance of interpretability and reliability. This indicates that easy access enhances the overall attractiveness and usage of datasets in the scholarly community. By emphasizing interpretability, reliability, and accessibility, this study offers a comprehensive framework for future research and guides data management practices toward ensuring clarity, credibility, and open access to maximize the impact of scientific datasets.
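
The study's "interpretability" factors include description length and readability. Purely as an illustration of how such features might be operationalized, the sketch below computes both for a dataset description; the Flesch reading-ease formula is a standard readability measure and is used here only as an assumption, not as the authors' own metric or code.

    import re

    def flesch_reading_ease(text):
        # Rough Flesch reading ease; higher scores indicate easier text.
        # Syllables are approximated by counting vowel groups per word.
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z]+", text)
        n_words = max(1, len(words))
        syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
        return 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)

    description = ("Hourly air-quality readings collected from 2010 to 2020, "
                   "with station locations and instrument metadata.")
    print("length (words):", len(description.split()))
    print("readability:", round(flesch_reading_ease(description), 1))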

https://doi.org/10.1038/s41597-024-03591-8

| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Infra Finder: a New Tool to Enhance Transparency, Discoverability and Trust in Open Infrastructure"


This paper describes Infra Finder, a new tool built by Invest in Open Infrastructure to help institutional budget holders and libraries make more informed decisions around adoption of and investment in open infrastructure. Through increased transparency and discoverability, we aim for this tool to foster trust in the decision-making process and to help build connections between services, users, and funders. The design of Infra Finder is intended to contribute to ongoing discussions and developments regarding trust and transparency in open scholarly infrastructure, as well as help level the playing field between organizations with limited resources to conduct extensive due diligence processes and those with their own analyst teams. In this work, we describe the landscape analysis that led to the creation of Infra Finder, the use cases for the tool, and the approach IOI is taking to create and foster use of Infra Finder in the open infrastructure environment. We also address some of the principles of trust in open source and open infrastructure that have informed and impacted the Infra Finder project and our work in creating this tool.

https://doi.org/10.2218/ijdc.v18i1.927

| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Ten Simple Rules for Recognizing Data and Software Contributions in Hiring, Promotion, and Tenure"


The ways in which promotion and tenure committees operate vary significantly across universities and departments. While committees often have the capability to evaluate the rigor and quality of articles and monographs in their scientific field, assessment with respect to practices concerning research data and software is a recent development and one that can be harder to implement, as there are few guidelines to facilitate the process. More specifically, the guidelines given to tenure and promotion committees often reference data and software in general terms, with some notable exceptions such as the guidelines in [5], and are almost systematically trumped by other factors such as the number and perceived impact of journal publications. The core issue is that many colleges establish a scholarship versus service dichotomy: Peer-reviewed articles or monographs published by university presses are considered scholarship, while community service, teaching, and other categories are given less weight in the evaluation process. This dichotomy unfairly disadvantages digital scholarship and community-based scholarship, including data and software contributions [6]. In addition, there is a lack of resources for faculties to facilitate the inclusion of responsible data and software metrics into evaluation processes or to assess faculty’s expertise and competencies to create, manage, and use data and software as research objects. As a result, the outcome of the assessment by the tenure and promotion committee is as dependent on the guidelines provided as on the committee members’ background and proficiency in the data and software domains.

The presented guidelines aim to help alleviate these issues and align academic evaluation processes with the principles of open science. We focus here on hiring, tenure, and promotion processes, but the same principles apply to other areas of academic evaluation at institutions. While these guidelines are by no means sufficient for handling the complexity of a multidimensional process that involves balancing a large set of nuanced and diverse information, we hope that they will support the increasing adoption of processes that recognize data and software as key research contributions.

https://doi.org/10.1371/journal.pcbi.1012296

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Sharing Practices of Software Artefacts and Source Code for Reproducible Research"


While the source code of software and algorithms is an essential component of all fields of modern research involving data analysis and processing steps, it is rarely shared across disciplines when results are published. Simple guidelines for producing reproducible source code have been published. Still, optimizing code so that it can be repurposed in different settings is often neglected, and registering it in catalogues for public reuse is considered even less often. Though all research output should be reasonably curated in terms of reproducibility, it has been shown that researchers frequently fail to comply with the availability statements in their publications. These statements often do not even use persistent unique identifiers that would allow referencing archives of code artefacts at specific versions and times, providing long-lasting links to research articles. In this work, we provide an analysis of the current practices of authors in open scientific journals with regard to code availability indications and the FAIR principles applied to code and algorithms. We present the repositories most commonly chosen by authors. The results further show disciplinary differences in code availability in scholarly publications over the past years. We advocate proper description, archiving and referencing of source code and methods as part of the scientific knowledge, and appeal to editorial boards and reviewers to provide oversight.
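
As a hedged illustration of the kind of versioned, persistent reference the authors find missing from availability statements, the record below shows the fields such a reference could carry. Every value is an invented placeholder, not an example from the paper.

    # Invented example of a versioned, archived code reference that an
    # availability statement could cite; every value here is a placeholder.
    code_reference = {
        "repository": "https://github.com/example-org/analysis-code",
        "version": "v1.2.0",
        "commit": "3f9c2ab",
        "archive_doi": "https://doi.org/10.5281/zenodo.0000000",
        "archived_on": "2024-08-01",
        "license": "MIT",
    }

    print("Code available at {archive_doi} "
          "(version {version}, commit {commit}, archived {archived_on}).".format(**code_reference))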

https://doi.org/10.1007/s41060-024-00617-7

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Back to Basics: Considering Categories of Data Services Consults"


Consultations are fundamental to data librarianship, serving as a vital means of one-on-one support for researchers. However, the topics and forms of support unique to data services consults are not always carefully considered. This commentary addresses five common services offered by data librarians—dataset reference, data management support, data analysis and software support, data curation, and data management (and sharing) plan writing—and considers strategies for successful patron support within the boundaries of a consultation.

https://doi.org/10.7191/jeslib.931

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Promoting Data Sharing: The Moral Obligations of Public Funding Agencies"


Sharing research data has great potential to benefit science and society. However, data sharing is still not common practice. Since public research funding agencies have a particular impact on research and researchers, the question arises: Are public funding agencies morally obligated to promote data sharing? We argue from a research ethics perspective that public funding agencies have several pro tanto obligations requiring them to promote data sharing. However, there are also pro tanto obligations that speak against promoting data sharing in general as well as with regard to particular instruments of such promotion. We examine and weigh these obligations and conclude that, all things considered, funders ought to promote the sharing of data. Even the instrument of mandatory data sharing policies can be justified under certain conditions.

https://doi.org/10.1007/s11948-024-00491-3

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"The State of Open Infrastructure Funding: A Recap of IOI’s Community Conversation "


In July, IOI hosted its second State of Open Infrastructure Community Conversation — this time, exploring the state of open infrastructure grant funding.

To set the stage, IOI’s senior researcher Gail Steinhart provided an overview of the methods that were used to gather over $415M USD in grant funding data for open infrastructures (OIs) and broke down some of the key findings from the analysis. To dive further into the topic of funding data, IOI Executive Director Kaitlin Thaney facilitated a panel conversation that featured Steinhart, collaborators Cameron Neylon and Karl Huang from the Curtin Open Knowledge Initiative (COKI), and John Mohr, CIO of Information Technology for the MacArthur Foundation and co-founder of the Philanthropy Data Commons. With their extensive experience in grant funding from diverse perspectives of the scholarly ecosystem, the panel shed light on the trends, impact, and limitations of grant funding for OIs. . . .

Across the grants the team mapped for the 36 open infrastructures represented in this dataset, awards were categorized to reflect whether they provide direct support to an OI, indirect support (meaning the OI is referenced in the award title or abstract, but the funding does not directly support the OI, though it may provide some indication of an OI’s broader impact), or adoption support (funding that supports the implementation of an instance of an OI at a local or community scale); grants we were unable to classify were marked unknown. While a significant amount (42%) of funding goes to direct support, the majority of the funding (52%) goes to indirect support.

https://tinyurl.com/ye2yfzsr

Video

Dataset

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Reproducible and Attributable Materials Science Curation Practices: A Case Study"


While small labs produce much of the fundamental experimental research in Materials Science and Engineering (MSE), little is known about their data management and sharing practices and the extent to which they promote trust in and transparency of the published research. In this research, a case study is conducted on a leading MSE research lab [at MIT] to characterize the limits of current data management and sharing practices concerning reproducibility and attribution. The workflows underpinning four research projects are systematically reconstructed by combining interviews, document review, and digital forensics. Then, information graph analysis and computer-assisted retrospective auditing are applied to identify where critical research information is unavailable or at risk.

Data management and sharing practices in this leading lab protect against computer and disk failure; however, they are insufficient to ensure reproducibility or correct attribution of work, especially when a group member withdraws before the project’s completion. Therefore, recommendations for adjustments in MSE data management and sharing practices are proposed to promote trustworthiness and transparency by adding lightweight automated file-level auditing and automated data transfer processes.
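
The recommended "lightweight automated file-level auditing" could, for instance, start from a checksum manifest of a project directory. The sketch below is an assumption about what such auditing might look like, not the tooling proposed in the paper.

    # Minimal sketch of file-level auditing: record SHA-256 checksums for every
    # file under a project directory so a later audit can flag missing or
    # altered files. Illustrative only; not the paper's implementation.
    import hashlib
    import json
    import os
    from pathlib import Path

    MANIFEST = "manifest.json"

    def build_manifest(root):
        # Map each file path (relative to root) to its SHA-256 digest.
        return {
            os.path.relpath(p, root): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in sorted(Path(root).rglob("*"))
            if p.is_file() and p.name != MANIFEST
        }

    def audit(root):
        # Compare the stored manifest against the current state of the directory.
        recorded = json.loads(Path(MANIFEST).read_text())
        current = build_manifest(root)
        return [f for f, digest in recorded.items() if current.get(f) != digest]

    if __name__ == "__main__":
        if Path(MANIFEST).exists():
            print("Missing or changed since last audit:", audit("."))
        Path(MANIFEST).write_text(json.dumps(build_manifest("."), indent=2))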

https://doi.org/10.2218/ijdc.v18i1.940

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Paywall: "What Is Research Data ‘Misuse’? And How Can It Be Prevented or Mitigated?"


In the article, we emphasize the challenge of defining misuse broadly and identify various forms that misuse can take, including methodological mistakes, unauthorized reuse, and intentional misrepresentation. We pay particular attention to underscoring the complexity of defining misuse, considering different epistemological perspectives and the evolving nature of scientific methodologies. We propose a theoretical framework grounded in the critical analysis of interdisciplinary literature on the topic of misusing research data, identifying similarities and differences in how data misuse is defined across a variety of fields, and propose a working definition of what it means to "misuse" research data. Finally, we speculate about possible curatorial interventions that data intermediaries can adopt to prevent or respond to instances of misuse.

https://doi.org/10.1002/asi.24944

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Training to Act FAIR: A Pre-Post Study on Teaching FAIR Guiding Principles to (Future) Researchers in Higher Education"


With a pre-post test design, the study evaluates the short-term effectiveness of FAIR training on students’ scientific suggestions and justifications in line with FAIR’s guiding principles. The study also assesses the influence of university legal frameworks on students’ inclination towards FAIR training. Before the FAIR training, 81.1% of students’ suggested scientific actions were not in line with the FAIR guiding principles; after the training, there is a 3.75-fold increase in suggestions that adhere to these principles. Interestingly, the training does not significantly impact how students justify FAIR actions. The study observes a positive correlation between the presence of university legal frameworks on FAIR guiding principles and students’ inclination towards FAIR training.

https://doi.org/10.1007/s10805-024-09547-2

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"The Promotion and Implementation of Open Science Measures among High-Performing Journals from Brazil, Mexico, Portugal, and Spain"


This study empirically examined the promotion and implementation of open science measures among high-performing journals of Brazil, Mexico, Portugal, and Spain. Journal policy related to data sharing, materials sharing, preregistration, open peer review, and consideration of preprints and replication studies was gathered from the websites of the journals. . . . Analyses found higher promotion of open science measures among Brazilian journals than among their Portuguese counterparts, and higher promotion among international journals than among their domestic counterparts. Analyses also found higher implementation of open science measures among Brazilian journals than among their Portuguese and Mexican counterparts. Only one journal out of 40 encouraged preregistration of studies; none encouraged replication studies, and none had implemented open peer review.

https://doi.org/10.1002/leap.1616

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Privacy Protection Framework for Open Data: Constructing and Assessing an Effective Approach"


This framework [the Privacy Protection Framework for Open Data (PPFOD)] aims to establish clear privacy protection measures and safeguard individuals’ privacy rights. Existing privacy protection practices were examined using content analysis, and 36 indicators across five dimensions were developed and validated through an empirical study with 437 participants. The PPFOD offers comprehensive guidelines for data openness: it empowers individuals to identify privacy risks, guides businesses in ensuring legal compliance and preventing data leaks, and assists libraries and data institutions in implementing effective privacy education and training programs, fostering a more privacy-conscious and secure data era.

https://doi.org/10.1016/j.lisr.2024.101312

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"The Societal Impact of Open Science: A Scoping Review"


Open Science (OS) aims, in part, to drive greater societal impact of academic research. Government, funder and institutional policies state that it should further democratize research and increase learning and awareness, evidence-based policy-making, the relevance of research to society’s problems, and public trust in research. Yet, measuring the societal impact of OS has proven challenging and synthesized evidence of it is lacking. This study fills this gap by systematically scoping the existing evidence of societal impact driven by OS and its various aspects, including Citizen Science (CS), Open Access (OA), Open/FAIR Data (OFD), Open Code/Software and others. Using the PRISMA Extension for Scoping Reviews and searches conducted in Web of Science, Scopus and relevant grey literature, we identified 196 studies that contain evidence of societal impact. The majority concern CS, with some focused on OA, and only a few addressing other aspects. Key areas of impact found are education and awareness, climate and environment, and social engagement. We found no literature documenting evidence of the societal impact of OFD and limited evidence of societal impact in terms of policy, health, and trust in academic research. Our findings demonstrate a critical need for additional evidence and suggest practical and policy implications.

https://doi.org/10.1098/rsos.240286

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Research Data Alliance: Recommendations on Open Science Rewards and Incentives


Open Science contributes to the collective building of scientific knowledge and societal progress. However, academic research currently fails to recognise and reward efforts to share research outputs. Yet it is crucial that such activities be valued, as they require considerable time, energy, and expertise to make scientific outputs usable by others, as set out in the FAIR principles. To address this challenge, several bottom-up and top-down initiatives have emerged to explore ways to assess and credit Open Science activities (e.g., Research Data Alliance, RDA) and to promote the assessment of a broad spectrum of research outputs, including datasets and software (e.g., Coalition for Advancement of Research Assessment, CoARA). As part of the RDA-SHARC (SHAring Rewards and Credit) interest group, we have developed a set of recommendations to help implement various rewarding schemes at different levels. The recommendations target a broad range of stakeholders. For instance, institutions are encouraged to provide digital services and infrastructure, organise training, and cover expenses associated with making data available for the community. Funders should establish policies requiring open access to data produced by funded research and provide corresponding support. Publishers should favour open peer-review models and open access to articles, data and software. Government policymakers should set up a comprehensive Open Science strategy, as recommended by UNESCO and followed by a growing number of countries. The present work details the different measures proposed to these stakeholders. The need to include sharing activities in research evaluation schemes, as an overarching mechanism to promote Open Science practices, is specifically emphasised.

https://tinyurl.com/4rhk44mn

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"An Empirical Examination of Data Reuser Trust in a Digital Repository"


Most studies of trusted digital repositories have focused on the internal factors delineated in the Open Archival Information System (OAIS) Reference Model—organizational structure, technical infrastructure, and policies, procedures, and processes. Typically, these factors are used during an audit and certification process to demonstrate a repository can be trusted. The factors that lead a repository’s designated community of users to trust it remain largely unexplored. This article proposes and tests a model of trust in a data repository and the influence trust has on users’ intention to continue using it. Based on analysis of 245 surveys from quantitative social scientists who published research based on the holdings of one data repository, findings show three factors are positively related to data reuser trust—integrity, identification, and structural assurance. In turn, trust and performance expectancy are positively related to data reusers’ intentions to return to the repository for more data. As one of the first studies of its kind, it shows that the conceptualization of trusted digital repositories needs to go beyond high-level definitions and simple application of the OAIS standard. Trust needs to encompass the complex relationship between repositories and the designated communities of users they are built to serve.

https://doi.org/10.1002/asi.24933

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Copyright, the Right to Research and Open Science: About Time to Connect the Dots"


In this contribution, we highlight the necessity of designing a research-enabling copyright framework that provides researchers with access to the necessary knowledge, information and data and enables them to tackle the challenges of the future.

For that purpose, we examine copyright through the prism of the Open Science movement and in the light of a “right to research,” and connect both to a larger, constitutional argument which suggests that enabling research through copyright law is a pressing constitutional imperative. Based on this theoretical framework, we suggest substantive and institutional modifications to copyright law, through legislative interventions and judicial interpretations, that would remove significant barriers to open science as envisaged by European and international institutions. The conflict between the proprietary interests of rightholders and the societal interests in unhindered, purpose-bound research should, in case of doubt, be decided in favour of research and open science as crucial enablers for innovation and progress. For authors, remuneration is usually not the primary motivation or incentive to produce research; they can often rely on other revenues (e.g., through institutional employment), and other interests prevail, such as the broadest possible dissemination of their works, which will secure them reputation and career advancement. The incentive mechanisms in the research field are therefore entirely different from those in other creative sectors, an aspect that must be taken into account when designing a research-friendly copyright system.

https://ssrn.com/abstract=4857765

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |