This paper presents findings from a survey on the status quo of data quality assurance practices at research data repositories.
The personalised online survey was conducted among repositories indexed in re3data in 2021. It covered the scope of the repository, types of data quality assessment, quality criteria, responsibilities, details of the review process, and data quality information, and it yielded 332 complete responses.
The results demonstrate that most repositories perform data quality assurance measures, and overall, research data repositories contribute significantly to data quality. Quality assurance at research data repositories is multifaceted and nonlinear, and although there are some common patterns, individual approaches to ensuring data quality are diverse. The survey showed that data quality assurance sets high expectations for repositories and requires substantial resources. Several challenges were identified: for example, the adequate recognition of the contribution of data reviewers and repositories, the path dependence of data review on review processes for text publications, and the lack of data quality information. The study could not confirm that the certification status of a repository is a clear indicator of whether a repository conducts in-depth quality assurance.
As part of the DRI’s core staff, the Training and Engagement Manager has the responsibility for delivering and continually developing DRI’s skills and training programme, building and engaging the DRI end-user base (i.e. students, researchers and the general public), managing DRI’s publications, managing the DRI’s outreach programme of educational events, and supporting the digital preservation and access work of DRI’s members. The position is based at the Digital Repository of Ireland at the Royal Irish Academy.
The absolute safest thing to do, to shield your own personal assets, is to register an LLC (limited liability company), get a separate bank account in the name of the LLC, transfer any assets and liabilities (donations you receive / bills you pay) to the LLC, and get insurance in the name of the LLC. This is obviously complete overkill for anyone running a really small server, especially because the annual fees for LLC registration are likely to exceed whatever amount your users chip in, but if you’re running an open-registration server, if you exceed 20-30k users, or if you have a lot of personal assets, you should think hard about it and talk to a lawyer.
The person in this role takes the lead in supporting, assessing, enhancing, and connecting the library systems that form the backbone of the library’s digital infrastructure. The successful candidate will collaborate with colleagues to improve the user experience of the library’s public-facing systems, implement new systems and services, and enhance and improve internal library workflows.
We will discuss seven major open data platforms, namely: (1) CKAN, (2) DKAN, (3) Socrata, (4) OpenDataSoft, (5) GitHub, (6) Google datasets, and (7) Kaggle. We will evaluate the technological commons, techniques, features, methods, and visualization offered by each tool. In addition, why are these platforms important to users such as providers, curators, and end-users? And what are the key options available on these platforms for publishing open data?
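To make the publishing options concrete, several of these platforms expose HTTP APIs for finding and publishing datasets. As a minimal illustrative sketch (not taken from the paper above), the helper below builds a dataset-search request against CKAN's Action API; the `CKAN_BASE` value points at CKAN's public demo portal and should be replaced with your own portal's address:

```python
import json
import urllib.parse
import urllib.request

# Hypothetical target portal; substitute the CKAN instance you actually use.
CKAN_BASE = "https://demo.ckan.org"

def package_search_url(query: str, rows: int = 5) -> str:
    """Build a CKAN Action API URL that searches public datasets."""
    params = urllib.parse.urlencode({"q": query, "rows": rows})
    return f"{CKAN_BASE}/api/3/action/package_search?{params}"

def search_datasets(query: str, rows: int = 5) -> list:
    """Fetch the titles of matching datasets (requires network access)."""
    with urllib.request.urlopen(package_search_url(query, rows)) as resp:
        payload = json.load(resp)
    return [pkg["title"] for pkg in payload["result"]["results"]]
```

Publishing works the same way in reverse: an authorized client POSTs dataset metadata to the portal's `package_create` action with an API key, which is how harvesting and automated deposit pipelines are typically wired up.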
Mainly building on our own experience as scholars from different research traditions (life sciences, social sciences and humanities), we describe best-practice approaches for opening up research data. We reflect on common barriers and strategies to overcome them, condensed into a step-by-step guide focused on actionable advice in order to mitigate the costs and promote the benefit of open data on three levels at once: society, the disciplines and individual researchers.
The Electronic Resources Librarian for Acquisitions (ERLA) is responsible for the management of acquisitions workflows for new one-time and continuing electronic resources. The incumbent will facilitate the ordering and renewal process, including vendor verification, order creation, and invoicing.
In practice, this means that Mastodon users can interact with and follow users on other instances . . . . It makes for a web of social networks where users can find and follow each other without having to set up new accounts on each new service. . . . users on Mastodon could follow Tumblr users’ posts from their own Mastodon instance — without having to use the Tumblr app.
The Head of the Digital Scholarship and Initiatives (DSI) department is a dynamic and forward-thinking leader who can provide vision for the DSI. DSI comprises specialists who support digital collections, digital scholarship, and the digital repository, and it reports to the Associate University Librarian for Academic Services.
In recent years, United States federal funding agencies, including the National Institutes of Health (NIH) and the National Science Foundation (NSF), have implemented public access policies to make research supported by funding from these federal agencies freely available to the public. Enforcement is primarily through annual and final reports submitted to these funding agencies, where all peer-reviewed publications must be registered through the appropriate mechanism as required by the specific federal funding agency. Unreported and/or incorrectly reported papers can result in delayed acceptance of annual and final reports and even funding delays for current and new research grants. It is therefore important to ensure that every peer-reviewed publication is reported properly and in a timely manner. For large collaborative research efforts, the tracking and proper registration of peer-reviewed publications, along with the generation of accurate annual and final reports, can create a large administrative burden. With large collaborative teams, it is easy for these administrative tasks to be overlooked, forgotten, or lost in the shuffle. To help with this reporting burden, we have developed the Academic Tracker software package, implemented in the Python 3 programming language and supporting Linux, Windows, and Mac operating systems. Academic Tracker helps with publication tracking and reporting by comprehensively searching major peer-reviewed publication tracking web portals, including PubMed, Crossref, ORCID, and Google Scholar, given a list of authors. Academic Tracker provides highly customizable reporting templates so information about the resulting publications is easily transformed into appropriate formats for tracking and reporting purposes.
The source code and extensive documentation are hosted on GitHub (https://moseleybioinformaticslab.github.io/academic_tracker/) and are also available on the Python Package Index (https://pypi.org/project/academic_tracker) for easy installation.
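The kind of author-based lookup described above can be illustrated with a minimal query against the Crossref REST API, one of the portals the abstract names. This sketch is not Academic Tracker's own code; the helper names are ours, and the polite `User-Agent` contact address is a placeholder you should replace:

```python
import json
import urllib.parse
import urllib.request

CROSSREF_WORKS = "https://api.crossref.org/works"

def author_query_url(author: str, rows: int = 20) -> str:
    """Build a Crossref works query filtered by author name."""
    params = urllib.parse.urlencode({"query.author": author, "rows": rows})
    return f"{CROSSREF_WORKS}?{params}"

def fetch_titles(author: str, rows: int = 20) -> list:
    """Return titles of works matching the author (requires network access)."""
    req = urllib.request.Request(
        author_query_url(author, rows),
        # Crossref asks polite clients to identify themselves; placeholder address.
        headers={"User-Agent": "pub-tracker-demo (mailto:user@example.org)"},
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [w.get("title", [""])[0] for w in payload["message"]["items"]]
```

A full tracker would run queries like this per author across each portal, deduplicate by DOI, and feed the results into reporting templates, which is essentially the workflow the abstract describes.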
Now is a good time to take steps to lock down your Twitter account, grab what data you can, review where you’re using Twitter to sign in to other online services, and delete anything you’d rather not live on a site that may be on its last legs. Taking these steps could protect you from identity theft or from having private messages made public.
For basic security, instances will employ transport-layer encryption, keeping your connection to the server you’ve chosen private. This will keep your communications safe from local eavesdroppers using your same WiFi connection, but it does not protect your communications, including your direct messages, from the server or instance you’ve chosen—or, if you’re messaging someone from a different instance, the server they’ve chosen. This includes the moderators and administrators of those instances, as well. Just like Twitter or Instagram, your posts and direct messages are accessible by those running the services. But unlike Twitter or Instagram, you have the choice in what server or instance you trust with your communications. . . . Two-factor authentication with an app or security key is available on Mastodon instances, giving users an extra security check to log on. The software also offers robust privacy controls: allowing users to set up automatic deletion of old posts, set personalized keyword filters, approve followers, and hide your social graph (the list of your followers and those you follow). Unfortunately, there is no analogue to making your account "private." . . . Mastodon users can mute, block, or report other users. Muting and blocking work just as you’d expect: they stop the content of that user from appearing in your feed and prevent them from reaching out to you, respectively.
Anna’s Archive is basically a meta-search engine that can find content from third-party ‘pirate’ sources. . . . The Z-Library links rely on the Tor version of the site, which remains online. However, the goal is to ultimately make all content available through IPFS [InterPlanetary File System] as well. This would make it pretty much impossible to take down, similar to the Library Genesis forks, which also use IPFS.
This position serves as a Digital Collection Specialist and is located within the Digital Content Management Section, Digital Collections Management and Services Division, Digital Services Directorate, Discovery and Preservation Services, within the Library Collections and Services Group at the Library of Congress.
You have probably just read the Provost’s announcement that we are suspending our negotiations with Elsevier for the remainder of this year. We did not make this decision lightly. Our Elsevier contract represents more than one-fifth of our entire collections budget at OSU, and we know that this decision will be disruptive. . . . Our primary strategy will be article-level fulfillment. We will build on our already outstanding Interlibrary Loan service (ILL), and add some additional tools that should improve those workflows and provide a more seamless user experience. . . . In the summer of 2023 we will develop a timeline and goals for access to Elsevier content in 2024. At that point, we will be looking to secure access to a curated list of titles, informed by the assessment I described above, and by the ongoing conversations we have been having with our OSU community about open and sustainable scholarly communication.
At $2.6M per year and an annual 2.5% increase, the Elsevier journal package is the most expensive annual expenditure for the University of Washington (UW) Libraries. For context, the total UW Libraries collections budget for the Seattle campus is approximately $16 million, and we spend about $13 million on ongoing subscriptions. Immediate access to 2,500 Elsevier journal titles published in the current year represents about 15% of the Libraries annual collections budget. . . . The Elsevier journal package reinforces the scholarly publishing model based on paywalls and rationing of access, inequitable opportunities for publishing, and excessive pricing and annual price increases that undermine a scholarly ecosystem where the open sharing of knowledge is critical to accelerating change for the public good. . . . As a result, the Libraries will be unable to maintain immediate access for all titles in our current list of 2,500 Elsevier journal titles on ScienceDirect. There is no choice but to begin identifying which journals need to be available for immediate access to meet patient care needs as well as long term use for research, teaching, and learning. The Libraries will continue to provide faculty, students, and staff access to published articles through alternative access options such as PubMed Central, Google Scholar, and interlibrary loan — most requested articles are delivered within a few hours or business days.
Supervise a minimum of two full-time staff and one term position: Web Services Librarian, Digital Library Applications Developer, and Mellon Project Manager. . . . Provide technical and project leadership for digital products and partnerships including the institutional repository (ScholarWorks), discovery service (EDS), library services platform (FOLIO), digital asset management system (Five College Compass, using Islandora 2), and digital preservation and shared metadata schema (Stronger Together).
The mechanisms through which this network status can be exchanged into academic advantage are not straightforward, but any academic who has achieved a degree of popularity online can attest to the direct and indirect advantages which this has brought to their career. . . . What if that capital is now worthless? It’s a strange position that has the potential to leave academics clinging on to their Twitter accounts long after the beneficial impact of the platform has evaporated in a mushroom cloud of moving fast and breaking things. The collapse of Twitter would be a significant event within higher education, analogous to (though not on the same scale as) citational rankings being reset overnight.
How is the new publishing model similar to or different from older publishing models based on preprints combined with peer review (e.g. Copernicus, F1000)? There are three main differences. 1) Peer review and assessment at eLife continue to be organised by an editorial team made up of academic experts and led by an Editor-in-Chief, Deputy Editors, Senior Editors, and a Board of Reviewing Editors via a consultative peer-review model already known as one of the most constructive for authors in the industry. 2) The addition of an eLife assessment is a further crucial part of our model, distinctive from what others are doing—it is a key addition to our public peer reviews, and it enables readers to understand the context of the work, the significance of the research, and the strength of the evidence. 3) We are no longer making accept/reject decisions based on peer review—authors will choose if and when to produce a Version of Record at any point following the review process.
The Digital Publishing Coordinator will be an integral part of the VIVA program, supporting the VIVA Open Grant program and publishing activities of grant recipients, including developing a wide range of awarded digital publishing projects across institutions of higher education in Virginia. Working closely with the Open and Sustainable Learning Coordinator and the Assessment and E-Resources Program Analyst, this position is responsible for the project management, performance, and completion of VIVA supported digital publishing projects. This includes managing up to $500,000 annually in outsourced publishing services, depending on program and project needs.
"While we are heartened by the takedown and the resulting reduction in harm to authors, we are not unsympathetic to the plight of those college and other students who have perhaps felt forced to resort to such illegal pirate websites and other free sources of textbooks to help them manage the extremely high cost of higher education," Rasenberger [Authors Guild CEO] said. "However, these students’ anger is misdirected. The exorbitant cost of education should not be borne by authors and publishers but by the universities, and it should not be used to justify reliance on foreign criminals for textbooks or to trivialize the immense personal and economic harm Z-Library was causing authors who are trying to make a living under increasingly difficult and hostile economic circumstances."
Under the direction of the Director of Collection Strategies, the Open Access Collection Strategist: Evaluates opportunities and makes recommendations for redirecting the Library’s investment in collections to resources with the greatest potential for transforming the system of scholarly communication toward open dissemination of research. . . . Manages the UCSB Open Access (OA) Publishing Fund. Monitors publisher- and vendor-provided information and data to identify open access-related trends and patterns. Implements the Library’s scholarly communication program in collaboration with the Scholarly Communication and Open Access Standing Committee (SCOASC), including outreach, programming, and communication strategy related to monitoring and building awareness of the changes occurring in academic publishing to foster free and open access to research. Participates in UCSB campus outreach activities and informational campaigns to raise awareness of UCSB local open access initiatives and UC-system-wide open access transformative agreements.
So just to summarize, there are two facts that are often overlooked when we discuss how university presses generally recover the costs of publishing their frontlist of new titles and how they might finance open access for monographs:
- A very large portion of a university press’s sales are not to academic libraries. Libraries are key to a university press’s overall success, and our model doesn’t work without them, but our model also depends on other revenue sources;
- Most of a university press’s annual revenues derive not from sales of new books, but from sales of previously published titles collectively known as the "backlist," which are generally those titles that were published more than twelve months ago. The sales of these titles may be adversely impacted by the availability of open access formats as readers transition to digital.
Reporting to the Dean of University Libraries, the Associate Dean works with a team of senior leaders, department directors, and staff to provide overall vision and leadership for the Libraries’ Digital Infrastructure, Web Development, and Digital Research and Scholarship departments. These departments include 20 full-time employees working on a range of dynamic and exciting projects, including development and maintenance of a hybrid digital infrastructure composed of both open-source platforms and vendor-hosted solutions, an increasing emphasis on migrating legacy applications to the cloud, and a wide range of programs and services to support emerging research needs on campus.
Twitter’s ubiquity, its adoption by nearly a quarter of a billion users in the last 16 years, and its status as a de facto public archive have made it a gold mine of information, says Thomas [senior analyst at the Institute for Strategic Dialogue].
"In one sense, this actually represents an enormous opportunity for future historians—we’ve never had the capacity to capture this much data about any previous era in history," she explains. But that enormous scale presents a huge storage problem for organizations.
For eight years, the US Library of Congress took it upon itself to maintain a public record of all tweets, but it stopped in 2018, instead selecting only a small number of accounts’ posts to capture. "It never, ever worked," says William Kilbride, executive director of the Digital Preservation Coalition. The data the library was expected to store was too vast, the volume coming out of the firehose too great. "Let me put that in context: it’s the Library of Congress. They had some of the best expertise on this topic. If the Library of Congress can’t do it, that tells you something quite important," he says.