Reporting to the Head of Systems, the Critical Systems Analyst, Sr. is charged with serving as the platform owner and administrator for critical platforms and systems as designated by the Head of Systems. The Critical Systems Analyst, Sr. shall be responsible for those updates, upgrades, and general maintenance operations on these systems that are not the responsibility of the vendor. It shall be the responsibility of the Critical Systems Analyst, Sr. to keep the Head of Systems informed of any upcoming maintenance on such platforms. The Critical Systems Analyst, Sr. is also charged with collaborating with other departments to provide technical support for changes they wish to make within critical systems while ensuring any changes have gone through the appropriate change management procedure before being implemented in production.
During career advancement and funding allocation decisions in biomedicine, reviewers have traditionally depended on journal-level measures of scientific influence like the impact factor. Prestigious journals are thought to pursue a reputation of exclusivity by rejecting large quantities of papers, many of which may be meritorious. It is possible that this process could create a system whereby some influential articles are prospectively identified and recognized by journal brands but most influential articles are overlooked. Here, we measure the degree to which journal prestige hierarchies capture or overlook influential science. We quantify the fraction of scientists’ articles that would receive recognition because (a) they are published in journals above a chosen impact factor threshold, or (b) are at least as well-cited as articles appearing in such journals. We find that the number of papers cited at least as well as those appearing in high-impact factor journals vastly exceeds the number of papers published in such venues. At the investigator level, this phenomenon extends across gender, racial, and career stage groupings of scientists. We also find that approximately half of researchers never publish in a venue with an impact factor above 15, which under journal-level evaluation regimes may exclude them from consideration for opportunities. Many of these researchers publish equally influential work, however, raising the possibility that the traditionally chosen journal-level measures that are routinely considered under decision-making norms, policy, or law, may recognize as little as 10-20% of the work that warrants recognition.
One model of open peer review that aligns well with the work of information professionals, particularly those with information literacy instruction duties, is an open peer review podcast. This type of podcast, recorded before a manuscript is submitted for publication, brings an informal peer review process into the open as a host facilitates critical discussion of a research output between the researcher and a reviewer. This approach fosters a supportive community with shared values while utilizing the affordances of podcasting to make invisible labor visible and bring whole personhood into scholarship and scholarly communication.
Investigates whether junior researchers believe that the scholarly communication system is changing in a significant way, whether they have contributed to the changes they envisaged, whether the pandemic has fast-forwarded change and what they thought a transformed system might look like. The data are drawn from the Harbingers-2 project, which investigated the impact of the pandemic on the scholarly communications attitudes and behaviours of early career researchers (ECRs). . . . A majority of ECRs thought that there had been significant changes in the scholarly system, and a large minority thought that the pandemic was responsible. Most of them wanted a system that was more open in terms of open access and open data, with a third taking personal action to bring about change.
As a member of the Content and Digital Environments team, you will be responsible for administering UEL’s Research Repository for research outputs, ensuring compliance with deadlines and with copyright and publisher requirements. You will provide training and assistance on Open Access and support compliance with UEL’s open access policy and external policies, playing a key role in preparation for the next Research Excellence Framework (REF).
The emergence of mega-journals (MJs) has influenced scholarly communication. One concrete manifestation of this impact is that more citations have been generated. Citations are the foundation of many evaluation metrics to assess the scientific impact of journals, disciplines, and regions. We focused on searching for citation beneficiaries and quantifying the relative benefit at the journal, discipline and region levels. More specifically, we examined the distribution and contribution to citation-based metrics of citations generated by the five discipline-specific mega-journals (DSMJs) categorized as Environmental Sciences (ES) on Web of Science (WoS) from Clarivate Analytics in 2021: Sustainability, International Journal of Environmental Research and Public Health, Environmental Science and Pollution Research, Journal of Cleaner Production and Science of the Total Environment. Analysis of the distribution of citing data of the five DSMJs shows a pattern with wide coverage but skewness by region and the WoS category; that is, papers in the five DSMJs contributed 26.66% of their citations in 2021 to Mainland China and 22.48% to the ES. Moreover, 15 journals within the ES had their JIFs boosted by more than 20%, benefitting from the high citing rates of the five DSMJs. More importantly, the analysis provides clear evidence that DSMJs can contribute to JIF scores throughout a discipline through their volume of references. Overall, DSMJs can widely impact scholarly evaluation because they contribute citation benefits and improve the evaluation index performance of different scientific entities at different levels. Considering the important application of citation indicators in the academic evaluation system and the increase in citations, it is important to reconsider the real research impact that citations can reflect.
Full-time tenure-track faculty appointment responsible for developing and implementing digital workflows including, but not limited to, born-digital archives, the digitization of existing hard copy materials, and the digital preservation of established electronic records. Coordinates and provides technological support for ARB staff including the ARB website, discovery of digital content, and maintenance of digital collections and exhibitions.
To increase transparency in science, some scholarly journals are publishing peer review reports. But it is unclear how this practice affects the peer review process. Here, we examine the effect of publishing peer review reports on referee behavior in five scholarly journals involved in a pilot study at Elsevier. By considering 9,220 submissions and 18,525 reviews from 2010 to 2017, we measured changes both before and during the pilot and found that publishing reports did not significantly compromise referees’ willingness to review, recommendations, or turn-around times. Younger and non-academic scholars were more willing to accept to review and provided more positive and objective recommendations. Male referees tended to write more constructive reports during the pilot. Only 8.1% of referees agreed to reveal their identity in the published report. These findings suggest that open peer review does not compromise the process, at least when referees are able to protect their anonymity.
However, in the era of artificial intelligence (AI) and big data, a pressing question arises: can an author’s identity be deduced even from an anonymized paper (in cases where the authors do not advertise their submitted article on social media)?
In a recent article we investigate this very question, by leveraging an artificial intelligence model trained on the largest authorship attribution dataset to date. . . . Focusing purely on well-established researchers with at least a few dozen publications, our work demonstrates that reliable author identification is possible.
This is the story of how a publisher and a citation index turned the science communication system into a highly profitable global industry. Over the course of seventy years, academic journal articles have become commodities, and their meta-data a further source of revenue. . . . During the 1950s, two men — Robert Maxwell and Eugene Garfield — began to experiment with their blueprint for the research economy. Maxwell created an ‘international’ publisher — Pergamon Press — charming the editors of elite, not-for-profit society journals into signing commercial contracts. Garfield invented the science citation index to help librarians manage this growing flow of knowledge. . . . Sixty years later, the global science system has become a citation economy, with academic credibility mediated by the currency produced by the two dominant commercial citation indexes: Elsevier’s Scopus and Clarivate’s Web of Science. The reach of these citation indexes and their data analytics is amplified by digitisation, computing power and financial investment. . . . Non-Anglophone journals are disproportionately excluded from these indexes, reinforcing the stratification of academic credibility geographies and endangering long established knowledge ecosystems.
Twitter is in turmoil and the scholarly community on the platform is once again starting to migrate. As with the early internet, scholarly organizations are at the forefront of developing and implementing a decentralized alternative to Twitter, Mastodon. Both historically and conceptually, this is not a new situation for the scholarly community. Historically, scholars were forced to leave the social media platform FriendFeed after it was bought by Facebook in 2009. Conceptually, the problems associated with public scholarly discourse subjected to the whims of corporate owners are not unlike those of scholarly journals owned by monopolistic corporations: in both cases the perils associated with a public good in private hands are palpable. For both short form (Twitter/Mastodon) and longer form (journals) scholarly discourse, decentralized solutions exist, some of which are already enjoying some institutional support. Here we argue that scholarly organizations, in particular learned societies, are now facing a golden opportunity to rethink their hesitations towards such alternatives and support the migration of the scholarly community from Twitter to Mastodon by hosting Mastodon instances. Demonstrating that the scholarly community is capable of creating a truly public square for scholarly discourse, impervious to private takeover, might renew confidence and inspire the community to focus on analogous solutions for the remaining scholarly record, encompassing text, data and code, to safeguard all publicly owned scholarly knowledge.
Rather than being alarmed or anxious, writers need to understand ChatGPT’s strengths and weaknesses. It is better at structure than it is at content. It is a good brainstorming tool (think titles, outlines, counter-arguments), but you must double check everything it tells you, especially if you’re outside your domain of expertise. It can provide summaries of complex ideas, and connect them with other ideas, but only if you have put a lot of thought into the incremental prompting needed to shift it from its generic default and train it to focus on what you care about. Its access to information is limited to what it was originally trained on, therefore your own training phase is essential to identify gaps and inaccuracies. It can be used for labor, such as reformatting abstracts or reducing the length of sections, but it can’t replace the thinking a writer does to determine why some paragraphs or ideas deserve more words and others can be cut back. It can be inaccurate: in fact, rather stubbornly so, persisting with inaccuracies even after they are pointed out, while at the same time presenting its next attempt as corrected.
It has been argued that preprint coverage during the COVID-19 pandemic constituted a paradigm shift in journalism norms and practices. This study examines whether, in what ways, and to what extent this is the case using a sample of 11,538 preprints posted on four preprint servers—bioRxiv, medRxiv, arXiv, and SSRN—that received coverage in 94 English-language media outlets between 2014 and 2021. We compared mentions of these preprints with mentions of a comparison sample of 397,446 peer reviewed research articles indexed in the Web of Science to identify changes in the share of media coverage that mentioned preprints before and during the pandemic. We found that preprint media coverage increased at a slow but steady rate pre-pandemic, then spiked dramatically. This increase applied only to COVID-19-related preprints, with minimal or no change in coverage of preprints on other topics. In addition, the rise in preprint coverage was most pronounced among health and medicine-focused media outlets, which barely covered preprints before the pandemic but mentioned more COVID-19 preprints than outlets focused on any other topic. These results suggest that the growth in coverage of preprints seen during the pandemic period may imply a shift in journalistic norms, including a changing outlook on reporting preliminary, unvetted research.
In this paper, we present CORE-GPT, a novel question-answering platform that combines GPT-based language models and more than 32 million full-text open access scientific articles from CORE. We first demonstrate that GPT3.5 and GPT4 cannot be relied upon to provide references or citations for generated text. We then introduce CORE-GPT, which delivers evidence-based answers to questions, along with citations and links to the cited papers, greatly increasing the trustworthiness of the answers and reducing the risk of hallucinations. CORE-GPT’s performance was evaluated on a dataset of 100 questions covering the top 20 scientific domains in CORE, resulting in 100 answers and links to 500 relevant articles. The quality of the provided answers and the relevance of the links were assessed by two annotators. Our results demonstrate that CORE-GPT can produce comprehensive and trustworthy answers across the majority of scientific domains, complete with links to genuine, relevant scientific articles.
Citations play an important role in researchers’ careers as a key factor in evaluation of scientific impact. Many anecdotes advise authors to exploit this fact and cite prospective reviewers to try to obtain a more positive evaluation for their submission. In this work, we investigate if such a citation bias actually exists: Does the citation of a reviewer’s own work in a submission cause them to be positively biased towards the submission? In conjunction with the review process of two flagship conferences in machine learning and algorithmic economics, we execute an observational study to test for citation bias in peer review. In our analysis, we carefully account for various confounding factors such as paper quality and reviewer expertise, and apply different modeling techniques to alleviate concerns regarding model mismatch. Overall, our analysis involves 1,314 papers and 1,717 reviewers and detects citation bias in both venues we consider. In terms of the effect size, by citing a reviewer’s work, a submission has a non-trivial chance of getting a higher score from the reviewer: the expected increase in score is approximately 0.23 on a 5-point Likert item. For reference, a one-point increase of a score by a single reviewer improves the position of a submission by 11% on average.
Reporting to the Head of Digital Scholarship and Research Data Services, this individual will develop and provide services that support students, faculty and researchers in the discovery, use, preservation, and visualization of data. The individual will coordinate and teach instruction sessions and programming related to research data management and data visualization and will provide consultations for researchers in collaboration with subject librarians.
There is widespread debate on whether to anonymize author identities in peer review. The key argument for anonymization is to mitigate bias, whereas arguments against anonymization posit various uses of author identities in the review process. The Innovations in Theoretical Computer Science (ITCS) 2023 conference adopted a middle ground by initially anonymizing the author identities from reviewers, revealing them after the reviewer had submitted their initial reviews, and allowing the reviewer to change their review subsequently. We present an analysis of the reviews pertaining to the identification and use of author identities. Our key findings are: (I) A majority of reviewers self-report not knowing and being unable to guess the authors’ identities for the papers they were reviewing. (II) After the initial submission of reviews, 7.1% of reviews had their overall merit score changed and 3.8% had their self-reported reviewer expertise changed. (III) There is a very weak and statistically insignificant correlation of the rank of authors’ affiliations with the change in overall merit; there is a weak but statistically significant correlation with respect to change in reviewer expertise. We also conducted an anonymous survey to obtain opinions from reviewers and authors. The main findings from the 200 survey responses are: (i) A vast majority of participants favor anonymizing author identities in some form. (ii) The “middle-ground” initiative of ITCS 2023 was appreciated. (iii) Detecting conflicts of interest is a challenge that needs to be addressed if author identities are anonymized. Overall, these findings support anonymization of author identities in some form (e.g., as was done in ITCS 2023), as long as there is a robust and efficient way to check conflicts of interest.
Based on the results, researchers should seek out grant funding and generously incorporate literature into their co-authored publications to increase their publications’ potential for future impact. These factors may influence article quality, resulting in more citations over time. Further research is needed to better understand their influence and the influence of other factors.
Featuring interviews with 101 podcasting academics, including scholars and teachers of podcasting, this book explores the motivations of scholarly podcasters, interrogates what podcasting does to academic knowledge, and leads potential podcasters through the creation process from beginning to end. With scholarship often trapped inside expensive journals, wrapped in opaque language, and laced with a standoffish tone, this book analyses the implications of moving towards a more open and accessible form.
The relationship between open access and academic impact (usually measured as citations received from academic publications) has been extensively studied but remains a very controversial topic. However, the effect of open access on policy impact (measured as citations received from policy documents) is still unknown. The purpose of this study was to examine the effect of open access on the policy impact, which might initiate a new controversial topic. . . . Linear regression models, logit regression models, four other matching methods, open access status provided by different databases, and different sizes of data samples were used to check the robustness of the main results. This study revealed that open access had significant and positive effects on the policy impact.
Reporting to the Director for Content Services, the Cataloging and Digital Projects Librarian serves as the primary metadata specialist for the Libraries’ CONTENTdm digital collections, working collaboratively with the department head, the Digital Services Department, the Special Collections Division, and other staff in Content Services to complete individual projects. The librarian also collaborates with the Office of Scholarly Communications and User Services to create metadata for University of Arkansas Department of Music concert recordings in the ScholarWorks (Digital Commons) institutional repository. The librarian harvests ScholarWorks metadata for graduate theses and dissertations for import into WorldCat and Alma. The librarian creates metadata for other types of digital projects and collections as needed and performs original and complex copy cataloging for media materials in a variety of formats.
The dominance of journal impact factors as a proxy for research quality and impact has been challenged, to the extent that academic impacts are being eroded from definitions of research impact altogether. It’s one of many bandwagons that seem logical to jump on, but which don’t necessarily hold up under scrutiny. The publishing community needs to demonstrate that it is a following wind, not a headwind.
This spring, Digital Scholarship’s bibliographies in the HTML format were reformatted as single-page files with internal navigation. This included all bibliographies that were in HTML format only as well as the HTML versions of paperback books. The new files are in a 12-point font and are designed for printing; however, they also have live links for immediate access. There were no content changes. For a list of all Digital Scholarship publications, see the site map.
The European Journal of Higher Education seeks to pioneer the policy of ‘transparent peer review’ among higher education journals by publishing anonymous peer review reports to demonstrate the rigour of its peer review process. In April 2023, the European Journal of Higher Education will launch a pilot policy of publishing the peer review report with the published article. Hence, any submission received after the launch of the policy and accepted for publication will at the time of publication include a link to an open access online peer review report containing anonymous peer reviews from all rounds of review, while not including the responses of the authors.