“AI Policies in U.S. Universities: A Critical Analysis of Policy Gaps and Library Involvement”


This posIT column critically examines AI policies and resources at 50 four-year universities—one from each U.S. state—to assess alignment with the Association of Research Libraries’ (ARL) Guiding Principles for Artificial Intelligence. Through content analysis of LibGuides, AI taskforce membership, campus events, and public-facing policies, the study reveals widespread adoption of AI resources but a significant lack of clarity, consistency, and librarian involvement in policy development.

https://doi.org/10.1080/01930826.2025.2560268

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Prestige over Merit: An Adapted Audit of LLM (Large Language Models) Bias in Peer Review”


Large language models (LLMs) are playing an increasingly integral, though largely informal, role in scholarly peer review. Yet it remains unclear whether LLMs reproduce the biases observed in human decision-making. We adapt a resume-style audit to scientific publishing, developing a multi-role LLM simulation (editor/reviewer) that evaluates a representative set of high-quality manuscripts across the physical, biological, and social sciences under randomized author identities (institutional prestige, gender, race). The audit reveals a strong and consistent institutional-prestige bias: identical papers attributed to low-prestige affiliations face a significantly higher risk of rejection, despite only modest differences in LLM-assessed quality. To probe mechanisms, we generate synthetic CVs for the same author profiles; these encode large prestige-linked disparities and an inverted prestige-tenure gradient relative to national benchmarks. The results suggest that both domain norms and prestige-linked priors embedded in training data shape paper-level outcomes once identity is visible, converting affiliation into a decisive status cue.

https://arxiv.org/abs/2509.15122

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

STM: Recommendations for a Classification of AI Use in Academic Manuscript Preparation


This document presents a classification of various ways that artificial intelligence (AI) can be used to assist in the preparation of academic manuscripts. It is intended to serve as a framework for publishers to individually develop policies on how AI may be used and should be declared by authors.

https://tinyurl.com/4v66r38p

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“ARL/CNI Futurescape Libraries AI Toolkit Can Help You Thrive in the AI Landscape”


The Futurescape Libraries AI Toolkit integrates the ARL/CNI AI Scenarios along with priorities trialed and refined by strategic thinkers working in the research library field during a Strategic Implications forum in December 2024. . . .

Organized into flexible modules, the toolkit offers structured activities to help library leadership teams, staff, and external stakeholders:

  • Explore future possibilities
  • Test current strategies
  • Identify opportunities and vulnerabilities
  • Build readiness for long-term change

https://tinyurl.com/bdss8hac

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Keywords Are Not Always the Key: A Metadata Field Analysis for Natural Language Search on Open Data Portals”


Open data portals are essential for providing public access to open datasets. However, their search interfaces typically rely on keyword-based mechanisms and a narrow set of metadata fields. This design makes it difficult for users to find datasets using natural language queries. The problem is worsened by metadata that is often incomplete or inconsistent, especially when users lack familiarity with domain-specific terminology. In this paper, we examine how individual metadata fields affect the success of conversational dataset retrieval and whether LLMs can help bridge the gap between natural queries and structured metadata. We conduct a controlled ablation study using simulated natural language queries over real-world datasets to evaluate retrieval performance under various metadata configurations. We also compare existing content of the metadata field ‘description’ with LLM-generated content, exploring how different prompting strategies influence quality and impact on search outcomes. Our findings suggest that dataset descriptions play a central role in aligning with user intent, and that LLM-generated descriptions can support effective retrieval. These results highlight both the limitations of current metadata practices and the potential of generative models to improve dataset discoverability in open data portals.
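The paper's ablation idea (measuring retrieval quality under different metadata-field configurations) can be illustrated with a minimal bag-of-words sketch. The dataset records, field names, and query below are invented for illustration only; the study's actual retrieval setup is more sophisticated than cosine similarity over whitespace tokens.

```python
import math
from collections import Counter

def tokens(text):
    """Lowercased bag-of-words representation."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best(query, datasets, fields):
    """Top-ranked dataset (id, score) using only the given metadata fields."""
    q = tokens(query)
    scored = [
        (ds["id"], cosine(q, tokens(" ".join(ds.get(f, "") for f in fields))))
        for ds in datasets
    ]
    return max(scored, key=lambda pair: pair[1])

# Toy portal records: terse title/keywords vs. a fuller description.
datasets = [
    {"id": "air-q", "title": "AQ-2023", "keywords": "pm25 no2",
     "description": "hourly air quality measurements from city monitoring stations"},
    {"id": "perm", "title": "Permits 2023", "keywords": "permits construction",
     "description": "building permit applications approved by the city"},
]

query = "air quality in my city"
print(best(query, datasets, ["title", "keywords"]))  # no lexical overlap: score 0.0
print(best(query, datasets, ["description"]))        # description carries the signal
```

In this toy ablation, the keyword-only configuration gives the natural-language query no lexical foothold, while the description field overlaps with the user's phrasing, echoing the paper's finding that descriptions do much of the alignment work.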

https://arxiv.org/abs/2509.14457

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Implementing AI in Library-Led Programs to Foster Critical Information Literacy”


The spread of fake news and misinformation poses significant challenges to the integrity of information ecosystems, undermining public trust. Libraries, traditionally trusted sources of credible information, are in a unique position to address this issue through the integration of artificial intelligence (AI). This paper explores the potential of AI to detect misinformation and enhance critical information literacy. AI technologies like natural language processing and machine learning can analyze text patterns, verify sources, and identify fake news at scale. Tools such as fact-checking algorithms and real-time content monitoring systems can help librarians curate reliable resources and guide users in distinguishing credible information from misinformation. AI can also be employed to promote critical information literacy through personalized educational experiences, including chatbots and virtual assistants that offer on-demand guidance on evaluating information. Ethical considerations play a crucial role in AI implementation. The paper addresses concerns over biases in AI algorithms, data privacy, and the ethics of automated decision-making. Strategies for mitigating these risks include prioritizing transparency, accountability, and user-centered design. By upholding ethical standards, libraries can align AI use with their core mission of serving the public good. The study also highlights the practical challenges libraries face in adopting AI, such as resource constraints, staff training, and system integration. Case studies from pioneering institutions offer insights into overcoming these barriers. Libraries can implement AI to combat misinformation and foster critical information literacy while maintaining ethical principles. This approach strengthens libraries’ roles in ensuring informed, equitable access to information and positions them as key players in the fight against fake news.

https://doi.org/10.20944/preprints202509.1281.v1

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Chatbot Assessment: Best Practices for Artificial Intelligence in the Library”

In November 2019, the Leonard Lief Library implemented Ivy.ai, a proprietary chatbot on its website. This implementation was the first academic library installation of a vendor-supplied chatbot to be discussed in the professional literature. This chatbot functioned as a new tool that assisted users seeking information from the library website. User questions provided insight to the authors about the kinds of topics students searched for via the library website. In April 2023, the chatbot’s vendor began using OpenAI’s ChatGPT Application Programming Interface (API) to improve the chatbot’s functionality. This change, from a rules-based chatbot system to a transformer model, enhanced the chatbot’s ability to provide answers to patrons. To better understand this major change, the authors assessed the chatbot’s usage during the Spring 2023 semester. This assessment revealed the kinds of questions the chatbot struggled to answer, and possible reasons why. The assessment’s findings demonstrated how chatbots can successfully function as an enhancement to the library website. The article also presents best practices for libraries looking to implement or experiment with chatbots and contributes to the ongoing discussion of artificial intelligence in libraries.

https://tinyurl.com/5cmzhm3w

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“IEEE Launches Pilot with Hum’s Alchemist Review”


Alchemist Review performs comprehensive manuscript analysis, automatically identifying crucial research elements including primary hypotheses, methodological approaches, and claimed contributions. Additionally, the platform leverages Grounded AI’s citation verification technology to ensure reference accuracy and detect potential retractions or contextual inconsistencies.

https://tinyurl.com/paekxz3p

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Redefining Research: Elsevier Announces Next-Generation AI-Powered Researcher Solution”


What will set the new solution for researchers apart:

  • One seamless assistant: Brainstorm ideas, plan projects, review literature, find collaborators, and discover funding opportunities – all in one space with a powerful AI assistant.
  • Trust Cards: Showing how evidence was used or inferred, highlighting confidence levels and providing risk assessments for potential inaccuracies.
  • Certified content only: Access comprehensive, peer-reviewed, cross-publisher academic content.
  • Curated datasets: Answers powered by publisher-neutral datasets, e.g., Scopus abstracts and funding data.
  • Add your own content: Users can add their own content to supplement what is already included.
  • Privacy and security: Built with enterprise-grade security, Elsevier AI-powered solutions are developed in line with its Privacy Principles to safeguard personal data and privacy.
  • Publisher-neutral algorithms: An independent Advisory Board will be created to ensure results are prioritized and ranked based on quality, in a transparent, unbiased and responsible manner.

https://tinyurl.com/y576n7ku

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Reviewers Increasingly Divided on the Use of Generative AI in Peer Review”


A new global reviewer survey from IOP Publishing (IOPP) reveals a growing divide in attitudes among reviewers in the physical sciences regarding the use of generative AI in peer review. . . .

Key Findings:

  • 41% of respondents now believe generative AI will have a positive impact on peer review (up 12% from 2024), while 37% see it as negative (up 2%). Only 22% are neutral or unsure—down from 36% last year—indicating growing polarisation in views.
  • 32% of researchers have already used AI tools to support them with their reviews.
  • 57% would be unhappy if a reviewer used generative AI to write a peer review report on a manuscript they had co-authored and 42% would be unhappy if AI were used to augment a peer review report.
  • 42% believe they could accurately detect an AI-written peer review report on a manuscript they had co-authored.

https://tinyurl.com/32294sd5

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“How Much Are LLMs Changing the Language of Academic Papers After ChatGPT? A Multi-Database and Full Text Analysis”


This study investigates how Large Language Models (LLMs) are influencing the language of academic papers by tracking 12 LLM-associated terms across six major scholarly databases (Scopus, Web of Science, PubMed, PubMed Central (PMC), Dimensions, and OpenAlex) from 2015 to 2024. Using over 2.4 million PMC open-access publications (2021-July 2025), we also analysed full texts to assess changes in the frequency and co-occurrence of these terms before and after ChatGPT’s initial public release. Across databases, delve (+1,500%), underscore (+1,000%), and intricate (+700%) had the largest increases between 2022 and 2024. Growth in LLM-term usage was much higher in STEM fields than in social sciences and arts and humanities. In PMC full texts, the proportion of papers using underscore six or more times increased by over 10,000% from 2022 to 2025, followed by intricate (+5,400%) and meticulous (+2,800%). Nearly half of all 2024 PMC papers using any LLM term also included underscore, compared with only 3%-14% of papers before ChatGPT in 2022. Papers using one LLM term are now much more likely to include other terms. For example, in 2024, underscore strongly correlated with pivotal (0.449) and delve (0.311), compared with very weak associations in 2022 (0.032 and 0.018, respectively). These findings provide the first large-scale evidence based on full-text publications and multiple databases that some LLM-related terms are now being used much more frequently and together. The rapid uptake of LLMs to support scholarly publishing is a welcome development reducing the language barrier to academic publishing for non-English speakers.
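The co-occurrence correlations reported above (e.g., underscore with pivotal) can be approximated as a Pearson correlation over binary per-paper usage indicators. The corpus and term pairs below are invented for illustration; the study works over millions of PMC full texts.

```python
import math

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def term_correlation(papers, term_a, term_b):
    """Correlate binary per-paper usage indicators of two terms."""
    xa = [1 if term_a in p.lower() else 0 for p in papers]
    xb = [1 if term_b in p.lower() else 0 for p in papers]
    return pearson(xa, xb)

# Toy corpus: two papers use both flagged terms, three use neither or another.
papers = [
    "These results underscore the pivotal role of the proposed method.",
    "Our findings underscore a pivotal shift in the field.",
    "We delve into the intricate structure of the data.",
    "A standard analysis of survey data.",
    "The method is described in detail.",
]

print(term_correlation(papers, "underscore", "pivotal"))  # terms always co-occur: 1.0
print(term_correlation(papers, "underscore", "delve"))    # never co-occur: negative
```

A rising correlation between such indicators over time is the kind of signal the authors use as evidence that LLM-associated terms increasingly travel together within the same papers.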

https://arxiv.org/abs/2509.09596

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Pay-per-Output? AI Firms Blindsided by Beefed up robots.txt Instructions.”


Announced Wednesday morning, the “Really Simple Licensing” (RSL) standard evolves robots.txt instructions by adding an automated licensing layer that’s designed to block bots that don’t fairly compensate creators for content.

Free for any publisher to use starting today, the RSL standard is an open, decentralized protocol that makes clear to AI crawlers and agents the terms for licensing, usage, and compensation of any content used to train AI, a press release noted.

https://tinyurl.com/mrxjmdvw

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Paper2Agent: Reimagining Research Papers as Interactive and Reliable AI Agents”


We introduce Paper2Agent, an automated framework that converts research papers into AI agents. Paper2Agent transforms research output from passive artifacts into active systems that can accelerate downstream use, adoption, and discovery. Conventional research papers require readers to invest substantial effort to understand and adapt a paper’s code, data, and methods to their own work, creating barriers to dissemination and reuse. Paper2Agent addresses this challenge by automatically converting a paper into an AI agent that acts as a knowledgeable research assistant. It systematically analyzes the paper and the associated codebase using multiple agents to construct a Model Context Protocol (MCP) server, then iteratively generates and runs tests to refine and robustify the resulting MCP. These paper MCPs can then be flexibly connected to a chat agent (e.g. Claude Code) to carry out complex scientific queries through natural language while invoking tools and workflows from the original paper. We demonstrate Paper2Agent’s effectiveness in creating reliable and capable paper agents through in-depth case studies. Paper2Agent created an agent that leverages AlphaGenome to interpret genomic variants and agents based on ScanPy and TISSUE to carry out single-cell and spatial transcriptomics analyses. We validate that these paper agents can reproduce the original paper’s results and can correctly carry out novel user queries. By turning static papers into dynamic, interactive AI agents, Paper2Agent introduces a new paradigm for knowledge dissemination and a foundation for the collaborative ecosystem of AI co-scientists.

https://arxiv.org/abs/2509.06917

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Can AI Become an Information Literacy Ally? A Survey of Library Instructor Perspectives on ChatGPT”


Libraries can play a role in navigating the Artificial Intelligence (AI) era by integrating these tools into information literacy (IL) programs. To implement generative AI tools like ChatGPT effectively, it is important to understand the attitudes of library professionals involved in IL instruction toward this tool and their intention to use it for instruction. This study explored perceptions of ChatGPT using survey data that included acceptance factors and potential uses derived from the emerging literature. While some librarians saw potential, others found it too unreliable to be useful; however, the vast majority imagined utilizing the tool in the future.

https://crl.acrl.org/index.php/crl/article/view/26938

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Better than a Google Search? Effectiveness of Generative AI Chatbots as Information Seeking Tools in Law, Health Sciences, and Library and Information Sciences”


This study investigates the source citation practices of five widely available chatbots (ChatGPT, Copilot, DeepSeek, Gemini, and Perplexity) across three academic disciplines (law, health sciences, and library and information sciences). . . . Results reveal major differences between chatbots, which cite consistently different numbers of sources, with Perplexity and DeepSeek citing more and Copilot providing fewer, as well as between disciplines, where health sciences questions yield more scholarly source citations and law questions are more likely to yield blog and professional website citations. Paywalled sources and discipline-specific literature such as case law or systematic reviews are rarely retrieved.

https://dx.doi.org/10.2139/ssrn.5402185

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Paywall: “AILIS 1.0: A New Framework to Measure AI Literacy in Library and Information Science (LIS)”


Functioning, Ethics, and Evaluation emerged as core dimensions of AI literacy. Functioning scores correlated strongly with all other dimensions except self-assessed Usage. Overall, library professionals outperformed students, particularly in Ethics and Usage. However, students, especially first-years, reported higher self-efficacy despite lower performance, indicating a tendency to overestimate their AI literacy, as confirmed by focus groups.

https://doi.org/10.1016/j.acalib.2025.103118

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“What Do Librarians Look Like? Stereotyping of a Profession by Generative AI”


The analysis revealed significant biases in the generated images, with a predominant depiction of librarians as Caucasian. Gender representation overstated the presence of men in all libraries, most notably in academic libraries with only 6% of academic librarians depicted as female. Additionally, there was a noticeable trend towards older librarians in public and academic settings, and the size of library buildings increased from school to academic environments. These findings highlight the reinforcement of stereotypes and the misrepresentation of authority dynamics, particularly the portrayal of men in positions of power relative to female colleagues.

https://doi.org/10.1177/09610006251357286

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Paywall: “Which AI Tools Work Best for Research? Using Librarian and Student Perspectives to Inform a Rating Rubric”


Building on previously published frameworks, this study introduces a rubric-based approach to assessing AI tools across three key areas: information discovery, search, and reviews. Notably, Undermind and the paid version of Elicit emerged as top performers.

https://doi.org/10.1080/15424065.2025.2546052

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“The Hitchhiker’s Guide to Autonomous Research: A Survey of Scientific Agents”


The advancement of LLM-based agents is redefining AI for Science (AI4S) by enabling autonomous scientific research. Prominent LLMs exhibit expertise across multiple domains, catalysing the construction of domain-specialised scientific agents. Nevertheless, the profound epistemic and methodological gaps between AI and the natural sciences still impede the systematic design, training, and validation of these agents. This survey bridges the existing gap by presenting an exhaustive blueprint for scientific agents, spanning systematic construction methodologies, targeted capability enhancement, and rigorous evaluations. Anchored in the canonical scientific workflow, this paper (i) provides an overview of scientific agents, starting with the development from general-purpose agents to scientific agents driven by articulated goal-orientation, then advancing a comprehensive taxonomy that organises existing agents by construction strategy and capability scope, and (ii) introduces a two-tier progressive framework, from constructing scientific agents from scratch to targeted capability enhancement, for realizing autonomous scientific research. It is our aspiration that this survey will serve as guidance for researchers across various domains, facilitating the systematic design of domain-specific scientific agents and stimulating further innovation in AI-driven scientific research. To support long-term progress, we curate a live repository (AWESOME_SCIENTIFIC_AGENT) that continuously aggregates emerging methods, benchmarks, and best practices.

https://doi.org/10.36227/techrxiv.175459840.02185500/v1

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

AI Across America: Attitudes on AI Usage, Job Impact, and Federal Regulation

  • Artificial intelligence has reached a tipping point in American society: half of U.S. adults (50%) report using at least one major AI tool. State-level adoption is widespread, with every state except West Virginia (33%) reporting usage levels of at least 40%.
  • Expectations of workplace disruption are nearly universal, with substantial majorities across all 50 states anticipating AI will impact their jobs within five years, suggesting that Americans recognize AI as a transformative force that will reshape the economy and society. In every single state, the percentage of people who are concerned about too little regulation outweighs those who worry about too much regulation.
  • Yet, with more than one-third remaining uncertain about appropriate regulatory approaches, Americans have not formed settled views on AI governance. Regulatory attitudes vary geographically, but they do not follow the nation’s usual red-blue divide.
  • Geographic patterns reveal coastal knowledge economy hubs like California, New York, and Massachusetts, along with Sun Belt states such as Texas, Georgia, and Florida, leading in anticipated workplace AI impact, while agricultural Corn Belt and Rust Belt regions from Iowa to West Virginia report lower expectations.
  • The data expose deep demographic fault lines, with younger, more educated, higher-income Americans driving AI adoption while rural, older, and lower-income populations lag substantially behind.
  • ChatGPT dominates the AI landscape with 65% recognition and 37% usage rates, but a consistent pattern emerges across all AI tools: awareness significantly outpaces actual usage of the tools, and everyday frequent usage remains concentrated among a small fraction of users.

https://tinyurl.com/2s4zkuru

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

AI Openness: A Primer for Policymakers


This paper explores the concept of openness in artificial intelligence (AI), including relevant terminology and how different degrees of openness can exist. It explains why the term “open source” – a term rooted in software – does not fully capture the complexities specific to AI. This paper analyses current trends in open-weight foundation models using experimental data, illustrating both their potential benefits and associated risks. It incorporates the concept of marginality to further inform this discussion. By presenting information clearly and concisely, the paper seeks to support policy discussions on how to balance the openness of generative AI foundation models with responsible governance.

https://tinyurl.com/mpva5s47

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Open Science Falling Behind in the Era of Artificial Intelligence”


Generative Artificial Intelligence (AI) refers to a new generation of content generation technologies that emerged after the rise of Transformer architecture in 2017, characterized by its core technical features of “compute-intensive architecture, model-driven paradigm, and data closed-loop system” (Table 1). AI is accelerating scientific discoveries and reshaping the research process, propelling AI for science toward becoming a novel research paradigm. There is a pressing demand for open science due to these advancements, yet the development of open science lags considerably behind the AI era. This disparity arises from the loss of academic leadership and insufficient motivation to pursue openness within the industrial sector, which could hinder AI empowerment and scientific innovation. Effective intervention by the public sector and policymakers becomes crucial when the “invisible hand” fails.

https://doi.org/10.3389/frma.2025.1595824

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“AI in Scholarly Publishing: A Study on LIS Journals’ Guidelines and Policies”


As Artificial Intelligence (AI) technologies, particularly generative AI, become increasingly prevalent, their adoption in academic settings has also grown. Researchers are using these tools for tasks such as grammar correction, statistical analysis, and manuscript preparation. However, this shift raises concerns regarding bias, copyright, reproducibility, and research transparency. This article investigates the current state of transparency around generative AI use in Library and Information Science (LIS) journals. Using a list of LIS journals compiled by Nixon, the authors reviewed publishing guidelines and policies to identify any statements or requirements related to the use of generative AI in manuscript submission, peer review, or editing. Descriptive statistics were used to summarize the findings, including frequency and percentage by publisher. The study also examined whether journals with AI-related policies differ in impact factor from those without. Finally, the article discusses the ethical considerations, benefits, and the need for standardized declarations of generative AI use in LIS publishing.

https://doi.org/10.23974/ijol.2025.vol10.2.419

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Does ChatGPT Ignore Article Retractions and Other Reliability Concerns?”


Large language models (LLMs) like ChatGPT seem to be increasingly used for information seeking and analysis, including to support academic literature reviews. To test whether the results might sometimes include retracted research, we identified 217 retracted or otherwise concerning academic studies with high altmetric scores and asked ChatGPT 4o-mini to evaluate their quality 30 times each. Surprisingly, none of its 6510 reports mentioned that the articles were retracted or had relevant errors, and it gave 190 relatively high scores (world leading, internationally excellent, or close). The 27 articles with the lowest scores were mostly accused of being weak, although the topic (but not the article) was described as controversial in five cases (e.g., about hydroxychloroquine for COVID-19). In a follow-up investigation, 61 claims were extracted from retracted articles from the set, and ChatGPT 4o-mini was asked 10 times whether each was true. It gave a definitive yes or a positive response two-thirds of the time, including for at least one statement that had been shown to be false over a decade ago. The results therefore emphasise, from an academic knowledge perspective, the importance of verifying information from LLMs when using them for information seeking or analysis.

https://doi.org/10.1002/leap.2018

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

“Becoming a Leader in AI Literacy Instruction by Not Reinventing the Wheel”


This article showcases how one research-intensive university library leveraged existing strengths—through AI-focused guides, workshops, grants, and cross-campus partnerships—to embed AI literacy across its academic community. Rather than reinventing the wheel, the library expanded proven methods to support ethical, critical, and informed engagement with AI technologies.

https://doi.org/10.1016/j.acalib.2025.103117

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |