“Experimental Evidence of the Effects of Large Language Models Versus Web Search on Depth of Learning”

A theory is proposed that when individuals learn about a topic from LLM syntheses, they risk developing shallower knowledge than when they learn through standard web search, even when the core facts in the results are the same. This shallower knowledge accrues from an inherent feature of LLMs—the presentation of results as summaries of vast arrays of information rather than individual search links—which inhibits users from actively discovering and synthesizing information sources themselves, as in traditional web search. . . . Results from seven online and laboratory experiments (n = 10,462) lend support to these predictions and confirm, for example, that participants reported developing shallower knowledge from LLM summaries even when the results were augmented by real-time web links. Implications of the findings for recent research on the benefits and risks of LLMs, as well as limitations of the work, are discussed.

https://doi.org/10.1093/pnasnexus/pgaf316

| Artificial Intelligence | Research Data Curation and Management Works | Digital Curation and Digital Preservation Works | Open Access Works | Digital Scholarship |