The paper is divided into three parts. Part 1 traces the historical events that led to the modern system of scientific research, funding, knowledge dissemination, and recognition, which largely confines health and medical knowledge production to those in HICs [high-income countries]. By understanding our shared past and the rise of structural barriers to global health equity, we can better inform our shared path to dismantle them. Part 2 takes a clear-eyed look at where the scientific community is now. Are the ideals of Open Medicine playing out as envisioned? Are the benefits of Open Medicine shared amongst all of humanity, or only amongst a select few? Lastly, Part 3 offers ideas and recommendations for all stakeholders to chart a path to bring Open Medicine into alignment with its goals and aspirations.
The article seeks to contribute to this aim by exploring the legal framework in which research data can be accessed and used in EU copyright law. First, it delineates the authors’ understanding of research data. It then examines the protection research data currently receives under EU and Member State law via copyright and related rights, as well as the ownership of these rights by different stakeholders in the scientific community. After clarifying relevant conflict-of-laws issues that surround research data, it maps ways to legally access and use them, including statutory exceptions, the open science movement and current developments in law and practice.
In the WorldFAIR project, CODATA (the Committee on Data of the International Science Council), with the RDA (Research Data Alliance) Association as a major partner, is working with a set of eleven disciplinary and cross-disciplinary case studies to advance implementation of the FAIR principles and, in particular, to improve interoperability and reusability of digital research objects, including data.
To that end, the WorldFAIR project created a range of FAIR Implementation Profiles (FIPs) between July and October 2022 to better understand current FAIR data-related practices. The report, "FAIR Implementation Profiles (FIPs) in WorldFAIR: What Have We Learnt?", is published this week and available at https://doi.org/10.5281/zenodo.7378109.
The report describes the WorldFAIR project, its objectives and its rich set of Case Studies; and it introduces FIPs as a methodology for listing the FAIR implementation decisions made by a given community of practice. Subsequently, the report gives an overview of the initial feedback and findings from the Case Studies, and considers a number of issues and points of discussion that emerged from this exercise. Finally, and most importantly, we describe how we think the experience of using FIPs will assist each Case Study in its work to implement FAIR, and will assist the project as a whole in the development of two key outputs: the Cross-Domain Interoperability Framework (CDIF), and domain-sensitive recommendations for FAIR assessment.
The creation of library research data services (RDS) requires an assessment of their maturity, which is the primary objective of this study. Its authors set out to probe the nationwide level of library RDS maturity, based on the RDS maturity model proposed by Cox et al. (2019), while making use of natural language processing (NLP) tools typical of big data analysis. The secondary objective was to determine the actual suitability of these tools for this particular type of assessment.
The European research and innovation ecosystem is going through a period of profound change. Researchers, organisations that fund or perform research, and policymakers are reshaping the research process and its outputs based on the opportunities offered by the digital transition. The findability, accessibility, interoperability, and reusability (FAIRness) of research publications, data, and software in the digital space will define research and innovation going forward. Closely related, the transition to an open research process and Open Access of its outputs is becoming the ‘new normal’. One of the most prominent initiatives in the digital and open transition of research is the European Open Science Cloud (EOSC). This federation of existing research data infrastructures in Europe aims to create a web of FAIR data and related services for research.
Journal policies continuously evolve to enable knowledge sharing and support reproducible science, but that change happens within a certain framework. The Transparency and Openness Promotion (TOP) guidelines comprise eight modular standards, each with three levels of increasing stringency, which can be used to evaluate to what extent, and with what stringency, journals promote open science. The standards cover data citation; transparency of data, materials, code, and design and analysis; replication; and study and analysis-plan preregistration, along with two effective interventions, "Registered Reports" and "Open Science badges"; a journal's TOP Factor is the sum of its adoption levels across standards. In this paper, we analysed the status of adoption of the TOP guidelines across two thousand journals reported in the TOP Factor metrics. We show that the majority of journals' policies align with at least one of the TOP standards, most often "Data citation" (70%), followed by "Data transparency" (19%). Two-thirds of adoptions of TOP standards are at the less stringent Level 1, whereas only 9% are at Level 3. Adoption of TOP standards differs across scientific disciplines; multidisciplinary journals (N = 1505) and social science journals (N = 1077) show the greatest numbers of adoptions. The measures journals take to implement open science practices could be improved in three ways: (1) improvements could be tailored to specific disciplines, (2) journals that have not yet adopted the TOP guidelines could do so, and (3) the stringency of existing adoptions could be increased.
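The summation behind the TOP Factor can be sketched in a few lines. This is a minimal illustration: the journal profile below is entirely hypothetical, the dictionary keys simply follow the eight TOP standards, and the full metric also scores "Registered Reports" and "Open Science badges", which this sketch omits.

```python
# Minimal sketch of a TOP Factor calculation: each of the eight TOP
# standards is scored at a stringency level from 0 (not implemented)
# to 3 (most stringent), and the levels are summed across standards.

def top_factor(levels):
    """Sum stringency levels (0-3) across TOP standards."""
    for standard, level in levels.items():
        if not 0 <= level <= 3:
            raise ValueError(f"{standard}: level must be 0-3, got {level}")
    return sum(levels.values())

# Hypothetical journal policy profile (keys follow the eight TOP standards):
journal = {
    "Data citation": 1,
    "Data transparency": 2,
    "Analysis code transparency": 1,
    "Materials transparency": 0,
    "Design and analysis reporting": 1,
    "Study preregistration": 0,
    "Analysis plan preregistration": 0,
    "Replication": 1,
}

print(top_factor(journal))  # 6
```

A profile of all Level 1 adoptions would score 8, while a journal at Level 3 on every standard would reach the maximum of 24, which is why most real-world TOP Factors cluster at the low end.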
The Open Science movement is a response to accumulated problems in scholarly communication, such as the "reproducibility crisis", the "serials crisis", and the "peer review crisis". The European Commission defines the priorities of Open Science as: Findable, Accessible, Interoperable and Reusable (FAIR) data; infrastructure and services in the European Open Science Cloud (EOSC); next-generation metrics, altmetrics, and rewards; the future of scientific communication; research integrity and reproducibility; education and skills; and citizen science. Open Science Infrastructure is also one of four key components of Open Science defined by UNESCO.
Institutional and thematic repositories for publications, research data, software and code are the most common Open Science Infrastructures. Beyond these, the range of Open Science Infrastructure services may include discovery, mining, publishing, the peer review process, archiving and preservation, social networking tools, training, high-performance computing, and tools for processing and analysis. A successful Open Science Infrastructure should be based on community values and responsive to needed changes. Preferably, it should be distributed, enable machine-actionable tools and services, and support reusability and reproducibility, quality FAIR data, interoperability, sustainability, long-term preservation and funding.
Here, we define, categorize and discuss barriers to data and code sharing that are relevant to many research fields. We explore how real and perceived barriers might be overcome or reframed in the light of the benefits relative to costs. By elucidating these barriers and the contexts in which they arise, we can take steps to mitigate them and align our actions with the goals of open science, both as individual scientists and as a scientific community.
This paper presents findings from a survey on the status quo of data quality assurance practices at research data repositories.
The personalised online survey was conducted among repositories indexed in re3data in 2021. It covered the scope of the repository, types of data quality assessment, quality criteria, responsibilities, details of the review process, and data quality information and yielded 332 complete responses.
The results demonstrate that most repositories perform data quality assurance measures, and overall, research data repositories significantly contribute to data quality. Quality assurance at research data repositories is multifaceted and nonlinear, and although there are some common patterns, individual approaches to ensuring data quality are diverse. The survey showed that data quality assurance sets high expectations for repositories and requires a lot of resources. Several challenges were discovered: for example, the adequate recognition of the contribution of data reviewers and repositories, the path dependence of data review on review processes for text publications, and the lack of data quality information. The study could not confirm that the certification status of a repository is a clear indicator of whether a repository conducts in-depth quality assurance.
We discuss seven major open data platforms: (1) CKAN, (2) DKAN, (3) Socrata, (4) OpenDataSoft, (5) GitHub, (6) Google datasets, and (7) Kaggle. We evaluate the technological commons, techniques, features, methods, and visualization offered by each tool. In addition, we ask why these platforms are important to users such as providers, curators, and end users, and what key options they offer for publishing open data.
Mainly building on our own experience as scholars from different research traditions (life sciences, social sciences and humanities), we describe best-practice approaches for opening up research data. We reflect on common barriers and strategies to overcome them, condensed into a step-by-step guide focused on actionable advice in order to mitigate the costs and promote the benefits of open data on three levels at once: society, the disciplines and individual researchers.
In April of this year, Springer Nature and Figshare announced a new integrated route for data deposition at Nature Portfolio titles to help address this problem and encourage researchers to share data rather than seeing it as a hurdle to article publication.
Following the success of the pilot, this streamlined integration is now being extended. Authors submitting to the Nature Portfolio journals, including Nature, in the fields of life, health, chemical and physical sciences will now be able to easily opt into data sharing, via Figshare, as part of one integrated submission process.
The faculty, staff, and graduate students at Clemson University were surveyed by the library about their RDM needs in the spring of 2021. The survey was based on previous surveys from 2012 and 2016 to allow for comparison, but language was updated, and additional questions were added because the field of RDM has evolved. Survey findings indicated that researchers are overall more likely to back up and share their data, but the process of cleaning and preparing the data for sharing was an obstacle. Few researchers reported including metadata when sharing or consulting the library for help with writing a Data Management Plan (DMP). Researchers want RDM resources; offering and effectively marketing those resources will enable libraries to both support researchers and encourage best practices. Understanding researcher needs and offering time-saving services and convenient training options make following RDM best practices easier for researchers. Outreach and integrated partnerships that support the research life cycle are crucial next steps for ensuring effective data management.
The goal of this research is to provide a theoretical framework that identifies big data curation actions and associated curation challenges. . . . The outcome of the study includes a big data curation framework that provides an overview of curation activities and of the concerns essential to performing them. The study also offers practical implications for libraries, archives, data repositories and other information organisations concerned with big data curation, as big data presents a multidimensional array of exigencies in relation to the missions of those organisations.
One in five studies declared that data were publicly available (59/306, 19%, 95% CI: 15–24%). However, when data availability was actually investigated, this percentage dropped to 16% (49/306, 95% CI: 12–20%), and then to less than 1% (1/306, 95% CI: 0–2%) when the data were checked for compliance with key FAIR principles. While only 4% of articles that used inferential statistics reported that code was available (10/274, 95% CI: 2–6%), the odds of reporting code to be available were 5.6 times higher for researchers who shared data.
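The 5.6 figure quoted above is an odds ratio, which comes from a 2×2 cross-tabulation of data sharing against code sharing. As a minimal sketch of the calculation, with counts invented purely for illustration and chosen only to reproduce an odds ratio of 5.6 (the study's actual cross-tabulation is not reproduced here):

```python
# Odds ratio from a 2x2 contingency table:
#
#                       code shared   code not shared
#   data shared              a              b
#   data not shared          c              d
#
# OR = (a/b) / (c/d) = (a*d) / (b*c)

def odds_ratio(a, b, c, d):
    """Odds of code sharing among data sharers vs. non-sharers."""
    return (a * d) / (b * c)

# Hypothetical counts: among articles that shared data, 7 also shared
# code and 42 did not; among articles that did not share data, 5
# shared code and 168 did not.
print(odds_ratio(7, 42, 5, 168))  # 5.6
```

An odds ratio above 1 indicates that the behaviours co-occur more often than independence would predict; here, data sharers have 5.6 times the odds of also sharing code.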
The Data Literacy Cookbook includes a variety of approaches to and lesson plans for teaching data literacy, from simple activities to self-paced learning modules to for-credit and discipline-specific courses. . . . Many sections have overlapping learning outcomes, so you can combine recipes from multiple sections to whip up a scaffolded curriculum. The Data Literacy Cookbook provides librarians with lesson plans, strategies, and activities to help guide students as both consumers and producers in the data life cycle.
This Data Primer was collaboratively authored by over 30 Digital Humanities researchers and research assistants, and was peer-reviewed by data professionals. It serves as an overview of the different aspects of data curation and management best practices for digital humanities researchers. Endorsed by the National Training Expert Group of the Digital Research Alliance of Canada.
A selection of guides, toolkits, and other resources for librarians working on addressing the NIH Data Management and Sharing Policy.
We introduce the French National 3D Data Repository for Humanities, designed for the preservation and publication of 3D research data in the Humanities and Social Sciences. We present the choices made for data organization, metadata, standards and infrastructure towards a FAIR service.
Digital preservation relies on technological infrastructure (information and communication technology, ICT) that can have environmental impacts. While altering technology usage can reduce the impact of digital preservation practices, this alone is not a strategy for sustainable practice. Moving toward environmentally sustainable digital preservation requires critically examining the motivations and assumptions that shape current practice. The use of scalable cloud infrastructures can reduce the environmental impacts of long-term data preservation solutions.
The Global List of Digitally Endangered Species – The BitList – offers an accessible snapshot of the concerns expressed by the global digital preservation community with respect to the risks faced by diverse types of digital content in varied conditions and contexts. It provides an elementary assessment of the imminence and significance of the dangers faced by different, and at times overlapping classifications of digital materials. By identifying the urgency of action and significance of content, The BitList draws attention to those digital materials that, in the view of the global digital preservation community, require urgent action to remain viable.
The SCN (https://www.oercommons.org/hubs/SCN) is an extension of an earlier, related, effort to create an open textbook about scholarly communication librarianship. That book, Scholarly Communication Librarianship and Open Knowledge, is forthcoming from ACRL in 2023. . . . Even if openly licensed, a book remains a relatively static resource. Scholarly communication is not static at all. Far from it, as many will attest and recognize through hard-won experience. Our contribution is the SCN, an online collection of contributed, modular, open content scoped to scholarly communication topics, which might complement the book or find use independent of it.
Open data platforms are interfaces between their users' demand for and supply of data. Yet data platform providers frequently struggle to aggregate data to suit their users' needs and to establish a high intensity of data exchange in a collaborative environment. Here, using open life science data platforms as an example of a diverse data structure, we systematically categorize these platforms based on their technology intermediation and the range of domains they cover, in order to derive general and specific success factors for their management instruments. Our qualitative content analysis is based on 39 in-depth interviews with experts employed by data platforms and with external stakeholders. We thus complement peer initiatives that focus solely on data quality by additionally highlighting the data platforms' role in enabling data utilization for innovative output. Based on our analysis, we propose a clearly structured and detailed guideline for seven management instruments. This guideline helps to establish and operationalize data platforms and to best exploit the data they provide. Our findings support further exploitation of the open innovation potential in the life sciences and beyond.
In an effort to highlight the significant differences between the 2013 [OSTP] memorandum and the 2022 guidance, the Association of Research Libraries (ARL) has published a comparison table of the two documents. This table breaks down the 2013 and 2022 OSTP public-access guidance into sections for a quick side-by-side comparison of 10 key components, including embargo period, data policies, formats, and metadata expectations.
The purpose of this paper is to explore library research that uses geographic information systems (GIS) as a tool to evaluate library services and resources to ascertain current trends and establish future directions for this growing research area.