"Hacker Plants False Memories in ChatGPT to Steal User Data in Perpetuity"


Within three months of the rollout [of a long-term conversation memory feature], [Johann] Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an LLM to follow instructions from untrusted content such as emails, blog posts, or documents. The [security] researcher demonstrated how he could trick ChatGPT into believing a targeted user was 102 years old, lived in the Matrix, and insisted Earth was flat and the LLM would incorporate that information to steer all future conversations.. . .

While OpenAI has introduced a fix that prevents memories from being abused as an exfiltration vector, the researcher said, untrusted content can still perform prompt injections that cause the memory tool to store long-term information planted by a malicious attacker.

https://tinyurl.com/bddcxjj4

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Federal Reserve Bank of St. Louis: The Rapid Adoption of Generative AI


Figure 2 presents our main results. The first bar shows that 39.4 percent of all August 2024 RPS respondents say that they used generative AI, either at work or at home. About 32 percent of respondents reported using generative AI at least once in the week prior to the survey, while 10.6 percent reported using it every day last week. About 28 percent of employed respondents used generative AI at work in August 2024, with the vast majority (24.1 percent) using it at least once in the last week and 10.9 percent using it daily. Usage outside of work was more common (32.7 percent), but slightly less intensive, with 25.9 percent using it at least once in the last week and 6.4 percent using it every day. Appendix Figure A.1 presents the share of respondents using specific generative AI products. ChatGPT is used most often (28.5 percent), followed by Google Gemini (16.3 percent).

https://tinyurl.com/mfhr6ujr

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

" It Takes a Village A Distributed Training Model for AI-Based Chatbots "


The introduction of Large Language Models (LLM) to the chatbot landscape has opened intriguing possibilities for academic libraries to offer more responsive and institutionally contextualized support to users, especially outside of regular service hours. While a few academic libraries currently employ AI-based chatbots on their websites, this service has not yet become the norm and there are no best practices in place for how academic libraries should launch, train, and assess the usefulness of a chatbot. In summer 2023, staff from the University of Delaware’s Morris Library information technology (IT) and reference departments came together in a unique partnership to pilot a low-cost AI-powered chatbot called UDStax. The goals of the pilot were to learn more about the campus community’s interest in engaging with this tool and to better understand the labor required on the staff side to maintain the bot. After researching six different options, the team selected Chatbase, a subscription-model product based on ChatGPT 3.5 that provides user-friendly training methods for an AI model using website URLs and uploaded source material. Chatbase removed the need to utilize the OpenAI API directly to code processes for submitting information to the AI engine to train the model, cutting down the amount of work for library information technology and making it possible to leverage the expertise of reference librarians and other public-facing staff, including student workers, to distribute the work of developing, refining, and reviewing training materials. This article will discuss the development of prompts, leveraging of existing data sources for training materials, and workflows involved in the pilot. It will argue that, when implementing AI-based tools in the academic library, involving staff from across the organization is essential to ensure buy-in and success. Although chatbots are designed to hide the effort of the people behind them, that labor is substantial and needs to be recognized.

https://tinyurl.com/3y654j2r

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

" Responsible AI Practice in Libraries and Archives: A Review of the Literature "


Artificial intelligence (AI) has the potential to positively impact library and archives collections and services—enhancing reference, instruction, metadata creation, recommendations, and more. However, AI also has ethical implications. This paper presents an extensive literature and review analysis that examines AI projects implemented in library and archives settings, asking the following research questions: RQ1: How is artificial intelligence being used in libraries and archives practice? RQ2: What ethical concerns are being identified and addressed during AI implementation in libraries and archives? The results of this literature review show that AI implementation is growing in libraries and archives and that practitioners are using AI for increasingly varied purposes. We found that AI implementation was most common in large, academic libraries. Materials used in AI projects usually involved digitized and born digital text and images, though materials also ranged to include web archives, electronic theses and dissertations (ETDs), and maps. AI was most often used for metadata extraction and reference and research services. Just over half of the papers included in the literature review mentioned ethics or values related issues in their discussions of AI implementation in libraries and archives, and only one-third of all resources discussed ethical issues beyond technical issues of accuracy and human-in-the-loop. Case studies relating to AI in libraries and archives are on the rise, and we expect subsequent discussions of relevant ethics and values to follow suit, particularly growing in the areas of cost considerations, transparency, reliability, policy and guidelines, bias, social justice, user communities, privacy, consent, accessibility, and access. As AI comes into more common usage, it will benefit the library and archives professions to not only consider ethics when implementing local projects, but to publicly discuss these ethical considerations in shared documentation and publications.

https://tinyurl.com/2t6ykuyv

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Clarivate Launches Generative AI-Powered Primo Research Assistant"


Key features include:

  • Semantic search and natural language queries: Users can interact with the system using everyday language, making the search process more intuitive.
  • AI-powered answers with references to sources used: The tool provides immediate answers based on the top five abstracts, with links to the full text and the complete result list.
  • Search suggestions: The assistant offers suggestions to help users expand their topics and delve deeper into their research.
  • Non-English query support: Users can ask questions and receive answers in multiple non-English languages.

https://tinyurl.com/bdcnbku3

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Academic Writing in the Age of AI: Comparing the Reliability of ChatGPT and Bard with Scopus and Web of Science"


ChatGPT and Bard (now known as Gemini) are becoming indispensable resources for researchers, academicians and diverse stakeholders within the academic landscape. At the same time, traditional digital tools such as scholarly databases continue to be widely used. Web of Science and Scopus are the most extensive academic databases and are generally regarded as consistently reliable scholarly research resources. With the increasing acceptance of artificial intelligence (AI) in academic writing, this study focuses on understanding the reliability of the new AI models compared to Scopus and Web of Science. The study includes a bibliometric analysis of green, sustainable and ecological buying behaviour, covering the period from 1 January 2011 to 21 May 2023. These results are used to compare the results from the AI and the traditional scholarly databases on several parameters. Overall, the findings suggest that AI models like ChatGPT and Bard are not yet reliable for academic writing tasks. It appears to be too early to depend on AI for such tasks.

https://doi.org/10.1016/j.jik.2024.100563

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Introducing OpenAI o1-preview: A New Series of Reasoning Models for Solving Hard Problems"


In our tests, the next model update performs similarly to PhD students on challenging benchmark tasks in physics, chemistry, and biology. We also found that it excels in math and coding. In a qualifying exam for the International Mathematics Olympiad (IMO), GPT-4o [the last model] correctly solved only 13% of problems, while the [new]reasoning model scored 83%. Their coding abilities were evaluated in contests and reached the 89th percentile in Codeforces competitions. . . .

As an early model, it doesn’t yet have many of the features that make ChatGPT useful, like browsing the web for information and uploading files and images. For many common cases GPT-4o will be more capable in the near term.

https://tinyurl.com/5ap6p996

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

Paywall: "Reshaping Academic Library Information Literacy Programs in the Advent of ChatGPT and Other Generative AI Technologies"


This article reports on three digital information literacy initiatives created by instruction librarians to support students’ use of generative AI technologies, namely ChatGPT, in academic library research. The cumulative and formative data gathered from the initiatives reveals a continuing need for academic libraries to provide information literacy instruction that guides students toward the ethical use of information and awareness of using generative AI tools in library research.

https://doi.org/10.1080/10875301.2024.2400132

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"The AI-Copyright Trap"


As AI tools proliferate, policy makers are increasingly being called upon to protect creators and the cultural industries from the extractive, exploitative, and even existential threats posed by generative AI. In their haste to act, however, they risk running headlong into the Copyright Trap: the mistaken conviction that copyright law is the best tool to support human creators and culture in our new technological reality (when in fact it is likely to do more harm than good). It is a trap in the sense that it may satisfy the wants of a small group of powerful stakeholders, but it will harm the interests of the more vulnerable actors who are, perhaps, most drawn to it. Once entered, it will also prove practically impossible to escape. I identify three routes in to the copyright trap in current AI debates: first is the “if value, then (property) right” fallacy; second is the idea that unauthorized copying is inherently wrongful; and third is the resurrection of the starving artist trope to justify copyright’s expansion.

https://tinyurl.com/bdett6ue

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Datacenters to Emit 3X More Carbon Dioxide Because of Generative AI"


The datacenter industry is set to emit 2.5 billion tonnes of greenhouse gas (GHG) emissions worldwide between now and the end of the decade, three times more than if generative AI had not been developed.

https://tinyurl.com/4vatmm8a

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Clarivate Report Unveils the Transformative Role of Artificial Intelligence on Shaping the Future of the Library"


The report combines feedback from a survey of more than 1,500 librarians from across the world with qualitative interviews, covering academic, national and public libraries. In addition to the downloadable report, the accompanying microsite’s dynamic and interactive data visualizations enable rapid comparative analyses according to regions and library types. . . .

Key findings of the report include:

  • Most libraries have an AI plan in place, or one in progress: Over 60% of respondents are evaluating or planning for AI integration.
  • AI adoption is the top tech priority: AI-powered tools for library users and patrons top the list of technology priorities for the next 12 months, according to 43% of respondents.
  • AI is advancing library missions: Key goals for those evaluating or implementing AI include supporting student learning (52%), research excellence (47%) and content discoverability (45%), aligning closely with the mission of libraries.
  • Librarians see promise and pitfalls in AI adoption: 42% believe AI can automate routine tasks, freeing librarians for strategic and creative activities. Levels of optimism vary regionally.
  • AI skills gaps and shrinking budgets are top concerns. Lack of expertise and budget constraints are seen as greater challenges than privacy and security issues: — Shrinking budgets: Almost half (47%) cite shrinking budgets as their greatest challenge. — Skills gap: 52% of respondents see upskilling as AI’s biggest impact on employment, yet nearly a third (32%) state that no training is available.
  • AI advancement will be led by IT: By combining the expertise of heads of IT with strategic investment and direction from senior leadership, libraries can move from consideration to implementation of AI in the coming years.
  • Regional priorities differ: Librarians’ views on other key topics such as sustainability, diversity, open access and open science show notable regional diversity.

https://tinyurl.com/9azeessa

Pulse of the Library report

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"The AI Copyright Hype: Legal Claims That Didn’t Hold Up"


Over the past year, two dozen AI-related lawsuits and their myriad infringement claims have been winding their way through the court system. None have yet reached a jury trial. While we all anxiously await court rulings that can inform our future interaction with generative AI models, in the past few weeks, we are suddenly flooded by news reports with titles such as “US Artists Score Victory in Landmark AI Copyright Case,” “Artists Land a Win in Class Action Lawsuit Against A.I. Companies,” “Artists Score Major Win in Copyright Case Against AI Art Generators”—and the list goes on. The exuberant mood in these headlines mirror the enthusiasm of people actually involved in this particular case (Andersen v. Stability AI). The plaintiffs’ lawyer calls the court’s decision “a significant step forward for the case.” “We won BIG,” writes the plaintiff on X.

In this blog post, we’ll explore the reality behind these headlines and statements. The “BIG” win in fact describes a portion of the plaintiffs’ claims surviving a pretrial motion to dismiss. If you are already familiar with the motion to dismiss per Federal Rules of Civil Procedure Rule 12(b)(6), please refer to Part II to find out what types of claims have been dismissed early on in the AI lawsuits.

https://tinyurl.com/rhmzkr8y

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"AI Models Collapse When Trained on Recursively Generated Data"


Yet, although current LLMs. . ., including GPT-3, were trained on predominantly human-generated text, this may change. If the training data of most future models are also scraped from the web, then they will inevitably train on data produced by their predecessors. In this paper, we investigate what happens when text produced by, for example, a version of GPT forms most of the training dataset of following models. . . .

Model collapse is a degenerative process affecting generations of learned generative models, in which the data they generate end up polluting the training set of the next generation. Being trained on polluted data, they then mis-perceive reality. . . .

In our work, we demonstrate that training on samples from another generative model can induce a distribution shift, which—over time—causes model collapse. This in turn causes the model to mis-perceive the underlying learning task. To sustain learning over a long period of time, we need to make sure that access to the original data source is preserved and that further data not generated by LLMs remain available over time. The need to distinguish data generated by LLMs from other data raises questions about the provenance of content that is crawled from the Internet: it is unclear how content generated by LLMs can be tracked at scale. One option is community-wide coordination to ensure that different parties involved in LLM creation and deployment share the information needed to resolve questions of provenance. Otherwise, it may become increasingly difficult to train newer versions of LLMs without access to data that were crawled from the Internet before the mass adoption of the technology or direct access to data generated by humans at scale.

https://doi.org/10.1038/s41586-024-07566-y

See also: “When A.I.’s Output Is a Threat to A.I. Itself.”

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Artificial Intelligence Assisted Curation of Population Groups in Biomedical Literature "


Curation of the growing body of published biomedical research is of great importance to both the synthesis of contemporary science and the archiving of historical biomedical literature. Each of these tasks has become increasingly challenging given the expansion of journal titles, preprint repositories and electronic databases. Added to this challenge is the need for curation of biomedical literature across population groups to better capture study populations for improved understanding of the generalizability of findings. To address this, our study aims to explore the use of generative artificial intelligence (AI) in the form of large language models (LLMs) such as GPT-4 as an AI curation assistant for the task of curating biomedical literature for population groups. We conducted a series of experiments which qualitatively and quantitatively evaluate the performance of OpenAI’s GPT-4 in curating population information from biomedical literature. Using OpenAI’s GPT-4 and curation instructions, executed through prompts, we evaluate the ability of GPT-4 to classify study ‘populations’, ‘continents’ and ‘countries’ from a previously curated dataset of public health COVID-19 studies.

Using three different experimental approaches, we examined performance by: A) evaluation of accuracy (concordance with human curation) using both exact and approximate string matches within a single experimental approach; B) evaluation of accuracy across experimental approaches; and C) conducting a qualitative phenomenology analysis to describe and classify the nature of difference between human curation and GPT curation. Our study shows that GPT-4 has the potential to provide assistance in the curation of population groups in biomedical literature. Additionally, phenomenology provided key information for prompt design that further improved the LLM’s performance in these tasks. Future research should aim to improve prompt design, as well as explore other generative AI models to improve curation performance. An increased understanding of the populations included in research studies is critical for the interpretation of findings, and we believe this study provides keen insight on the potential to increase the scalability of population curation in biomedical studies.

https://doi.org/10.2218/ijdc.v18i1.950

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"NVIDIA: Copyrighted Books Are Just Statistical Correlations to Our AI Models"


Earlier this year, several authors sued NVIDIA over alleged copyright infringement. The class action lawsuit alleged that the company’s AI models were trained on copyrighted works and specifically mentioned Books3 data [a database of over 180,000 pirated books]. Since this happened without permission, the rightsholders demand compensation. . . .

The company believes that AI companies should be allowed to use copyrighted books to train their AI models, as these books are made up of “uncopyrightable facts and ideas” that are already in the public domain. . . .

“[AI] Training measures statistical correlations in the aggregate, across a vast body of data, and encodes them into the parameters of a model. Plaintiffs do not try to claim a copyright over those statistical correlations, asserting instead that the training data itself is ‘copied’ for the purposes of infringement,” NVIDIA writes [to the court hearing the case].

According to NVIDIA, the lawsuit boils down to two related questions. First, whether the authors’ direct infringement claim is essentially an attempt to claim copyright on facts and grammar. Second, whether making copies of the books is fair use.

https://tinyurl.com/mpa6e8jj

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Artists Claim ‘Big’ Win in Copyright Suit Fighting AI Image Generators"


In an order on Monday, US district judge William Orrick denied key parts of motions to dismiss from Stability AI, Midjourney, Runway AI, and DeviantArt. The court will now allow artists to proceed with discovery on claims that AI image generators relying on Stable Diffusion violate both the Copyright Act and the Lanham Act, which protects artists from commercial misuse of their names and unique styles. . . .

While Orrick agreed with Midjourney that “plaintiffs have no protection over ‘simple, cartoony drawings’ or ‘gritty fantasy paintings,'” artists were able to advance a “trade dress” claim under the Lanham Act, too.

https://tinyurl.com/yd27cvar

"Trade Dress Infringement"

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery"


One of the grand challenges of artificial general intelligence is developing agents capable of conducting scientific research and discovering new knowledge. While frontier models have already been used as aids to human scientists, e.g. for brainstorming ideas, writing code, or prediction tasks, they still conduct only a small part of the scientific process. This paper presents the first comprehensive framework for fully automatic scientific discovery, enabling frontier large language models to perform research independently and communicate their findings. We introduce The AI Scientist, which generates novel research ideas, writes code, executes experiments, visualizes results, describes its findings by writing a full scientific paper, and then runs a simulated review process for evaluation. In principle, this process can be repeated to iteratively develop ideas in an open-ended fashion, acting like the human scientific community. We demonstrate its versatility by applying it to three distinct subfields of machine learning: diffusion modeling, transformer-based language modeling, and learning dynamics. Each idea is implemented and developed into a full paper at a cost of less than $15 per paper. To evaluate the generated papers, we design and validate an automated reviewer, which we show achieves near-human performance in evaluating paper scores. The AI Scientist can produce papers that exceed the acceptance threshold at a top machine learning conference as judged by our automated reviewer. This approach signifies the beginning of a new era in scientific discovery in machine learning: bringing the transformative benefits of AI agents to the entire research process of AI itself, and taking us closer to a world where endless affordable creativity and innovation can be unleashed on the world’s most challenging problems.

https://arxiv.org/abs/2408.06292

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Wiley and Oxford University Press Confirm AI Partnerships as Cambridge University Press Offers ‘Opt-In’"


Wiley and Oxford University Press (OUP) told The Bookseller they have confirmed AI partnerships, with the availability of opt-ins and remuneration for authors appearing to vary. . . .

Meanwhile, Cambridge University Press has said it is talking to authors about opt ins along with ‘fair remuneration’ before making any deals.

Hachette, HarperCollins, and Pan Macmillan have not made AI deals.

https://tinyurl.com/bdzax5sk

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"What Happens When Your Publisher Licenses Your Work for AI Training?"


In a lot of cases, yes, publishers can license AI training rights without asking authors first. Many publishing contracts include a full and broad grant of rights–sometimes even a full transfer of copyright to the publisher for them to exploit those rights and to license the rights to third parties. . . .

Not all publishing contracts are so broad, however. For example, in the Model Publishing Contract for Digital Scholarship (which we have endorsed), the publisher’s sublicensing rights are limited and specifically defined, and profits resulting from any exploitation of a work must be shared with authors. . . .

There are lots of variations, and specific terms matter. Some publisher agreements are far more limited–transferring only limited publishing and subsidiary rights. . . .

This is further complicated by the fact that authors sometimes are entitled to reclaim their rights, such as by rights reversion clause and copyright termination. . . .

We [the Authors Alliance] think it is certainly reasonable to be skeptical about the validity of blanket licensing schemes between large corporate rights holders and AI companies, at least when they are done at very large scale. Even though in some instances publishers do hold rights to license AI training, it is dubious whether they actually hold, and sufficiently document, all of the purported rights of all works being licensed for AI training.

https://tinyurl.com/53fnj9h7

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"AI’s Future in Grave Danger from Nvidia’s Chokehold on Chips, Groups Warn"


Nvidia is currently “the world’s most valuable public company,” their letter said, worth more than $3 trillion after taking near-total control of the high-performance AI chip market. Particularly “astonishing,” the letter said, was Nvidia’s dominance in the market for GPU accelerator chips, which are at the heart of today’s leading AI.

According to the advocacy groups that strongly oppose Big Tech monopolies, Nvidia “now holds an 80 percent overall global market share in GPU chips and a 98 percent share in the data center market.” This “puts it in a position to crowd out competitors and set global pricing and the terms of trade,” the letter warned. . . .

https://tinyurl.com/y5c769nk

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"European Artificial Intelligence Act Comes into Force"


The AI Act introduces a forward-looking definition of AI, based on a product safety and risk-based approach in the EU:

Minimal risk: Most AI systems, such as AI-enabled recommender systems and spam filters, fall into this category. These systems face no obligations under the AI Act due to their minimal risk to citizens’ rights and safety. Companies can voluntarily adopt additional codes of conduct.

Specific transparency risk: AI systems like chatbots must clearly disclose to users that they are interacting with a machine. Certain AI-generated content, including deep fakes, must be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems in a way that synthetic audio, video, text and images content is marked in a machine-readable format, and detectable as artificially generated or manipulated.

High risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Regulatory sandboxes will facilitate responsible innovation and the development of compliant AI systems. Such high-risk AI systems include for example AI systems used for recruitment, or to assess whether somebody is entitled to get a loan, or to run autonomous robots.

Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will, such as toys using voice assistance encouraging dangerous behaviour of minors, systems that allow ‘social scoring’ by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used at the workplace and some systems for categorising people or real time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).

https://tinyurl.com/32jy9pat

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"AI and the Workforce: Industry Report Calls for Reskilling and Upskilling as 92 Percent of Technology Roles Evolve"


"The Transformational Opportunity of AI on ICT Jobs" report finds that 92 percent of jobs analyzed are expected to undergo either high or moderate transformation due to advancements in AI.

Led by Cisco, created by Consortium members, and analyzed by Accenture, the new report identifies essential trainings in AI literacy, data analytics and prompt engineering for workers seeking to adapt to the AI revolution.

The AI-Enabled ICT Workforce Consortium consists of Cisco, Accenture, Eightfold, Google, IBM, Indeed, Intel, Microsoft and SAP. Advisors include the American Federation of Labor and Congress of Industrial Organizations, CHAIN5, Communications Workers of America, DIGITALEUROPE, the European Vocational Training Association, Khan Academy and SMEUnited.

https://tinyurl.com/3hj8ypx2

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Is the AI Bubble about to Pop? Internal Documents Reveal OpenAI May Go Bankrupt within 12 Months"


Net losses for 2024 alone are expected to hit US$5 billion. . . .

The company spends US$7 billion on training its GPT models, with additional US$1.5 billion in staffing expenses.

It makes back anywhere between US$3.5 to US$4.5 billion in ChatGPT subscriptions and access fees. . .

https://tinyurl.com/y8hen3ep

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |

"Copyright Office Releases Part 1 of Artificial Intelligence Report, Recommends Federal Digital Replica Law"


Today, the U.S. Copyright Office is releasing Part 1 of its Report on the legal and policy issues related to copyright and artificial intelligence (AI), addressing the topic of digital replicas. This Part of the Report responds to the proliferation of videos, images, or audio recordings that have been digitally created or manipulated to realistically but falsely depict an individual. Given the gaps in existing legal protections, the Office recommends that Congress enact a new federal law that protects all individuals from the knowing distribution of unauthorized digital replicas. The Office also offers recommendations on the elements to be included in crafting such a law. . . .

The Report is being released in several Parts, beginning today. Forthcoming Parts will address the copyrightability of materials created in whole or in part by generative AI, the legal implications of training AI models on copyrighted works, licensing considerations, and the allocation of any potential liability.

https://tinyurl.com/yc2fhthm

| Artificial Intelligence |
| Research Data Curation and Management Works |
| Digital Curation and Digital Preservation Works |
| Open Access Works |
| Digital Scholarship |