"How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations"

Xin Shuai, Alberto Pepe, and Johan Bollen have self-archived "How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations" in arXiv.org.

Here's an excerpt:

We analyze the online response of the scientific community to the preprint publication of scholarly articles. We employ a cohort of 4,606 scientific articles submitted to the preprint database arXiv.org between October 2010 and April 2011. We study three forms of reactions to these preprints: how they are downloaded on the arXiv.org site, how they are mentioned on the social media site Twitter, and how they are cited in the scholarly record. We perform two analyses. First, we analyze the delay and time span of article downloads and Twitter mentions following submission, to understand the temporal configuration of these reactions and whether significant differences exist between them. Second, we run correlation tests to investigate the relationship between Twitter mentions and both article downloads and article citations. We find that Twitter mentions follow rapidly after article submission and that they are correlated with later article downloads and later article citations, indicating that social media may be an important factor in determining the scientific impact of an article.
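For readers curious about the mechanics, a rank-correlation test of the kind the authors describe can be sketched in a few lines of Python. The per-article counts below are invented for illustration and are not drawn from the paper's 4,606-article dataset:

```python
# Illustrative sketch (not the authors' code): Spearman rank correlation
# between early Twitter mentions and later downloads, on toy data.

def ranks(xs):
    """Assign 1-based ranks, averaging the ranks of tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average of tied positions, 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman is simply Pearson computed on ranks."""
    return pearson(ranks(x), ranks(y))

# Hypothetical per-article counts (not from the paper).
tweets    = [0, 1, 3, 0, 8, 2, 5, 1]
downloads = [40, 90, 160, 35, 400, 120, 260, 80]
print(round(spearman(tweets, downloads), 2))
```

Rank correlation is a natural choice here because download and citation counts are heavily skewed, and ranks are robust to that skew.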

"Evaluating Repository Annual Metrics for SCONUL"

Gareth James Johnson has self-archived "Evaluating Repository Annual Metrics for SCONUL" in the Leicester Research Archive.

Here's an excerpt:

This report is a summarisation of the responses to a recent survey of the UKCoRR membership concerning the use of full-text downloads as a repository performance metric within the SCONUL annual statistical survey. It hopes to present a representative snapshot of the current opinions in this area from repository managers.

"Bibliometrics: A New Feature for Institutional Repositories"

Frederic Merceur, Morgane Le Gall, and Annick Salaun have self-archived "Bibliometrics: A New Feature for Institutional Repositories" in Archimer.

Here's an excerpt:

In addition to its promotion and conservation objectives, Archimer, Ifremer’s institutional repository, offers a wide range of bibliometric tools described in this document.

As early as the recording stage, numerous automatic operations homogenize the information (author’s name, research body, department…), thus improving the quality of the bibliometric analyses.

Now, Archimer enables, among other things, the automatic calculation of several indicators defined by Ifremer and the different ministries in charge in the framework of its four-year contract. It also offers various criteria aimed at analysing its document production (e.g., distribution of the value of the journals' impact factors, evolution of the number of citations in other publications, presentation of international collaborations…).

"Citation Advantage of Open Access Legal Scholarship"

James M. Donovan and Carol A. Watson have self-archived "Citation Advantage of Open Access Legal Scholarship" in UKnowledge.

Here's an excerpt:

To date, there have been no studies focusing exclusively on the impact of open access on legal scholarship. We examine open access articles from three journals at the University of Georgia School of Law and confirm that legal scholarship freely available via open access improves an article's research impact. Open access legal scholarship – which today appears to account for almost half of the output of law faculties – can expect to receive 50% more citations than non-open access writings of similar age from the same venue.

"Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research"

Yassine Gargouri et al. have published "Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research" in PLoS ONE.

Here's an excerpt:

Background

Articles whose authors have supplemented subscription-based access to the publisher's version by self-archiving their own final draft to make it accessible free for all on the web (“Open Access”, OA) are cited significantly more than articles in the same journal and year that have not been made OA. Some have suggested that this “OA Advantage” may not be causal but just a self-selection bias, because authors preferentially make higher-quality articles OA. To test this we compared self-selective self-archiving with mandatory self-archiving for a sample of 27,197 articles published 2002-2006 in 1,984 journals.

Methodology/Principal Findings

The OA Advantage proved just as high for both. Logistic regression analysis showed that the advantage is independent of other correlates of citations (article age; journal impact factor; number of co-authors, references or pages; field; article type; or country) and highest for the most highly cited articles. The OA Advantage is real, independent and causal, but skewed. Its size is indeed correlated with quality, just as citations themselves are (the top 20% of articles receive about 80% of all citations).

Conclusions/Significance

The OA advantage is greater for the more citable articles, not because of a quality bias from authors self-selecting what to make OA, but because of a quality advantage, from users self-selecting what to use and cite, freed by OA from the constraints of selective accessibility to subscribers only. It is hoped that these findings will help motivate the adoption of OA self-archiving mandates by universities, research institutions and research funders.
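The logistic-regression approach the abstract describes can be sketched on simulated data. Everything below (sample size, coefficients, variable names) is invented for illustration; it is not the study's actual model, which controlled for many more covariates across 27,197 articles:

```python
import math
import random

random.seed(1)

# Simulated sample (invented, not the study's data): each article has an
# OA flag and an age in years; both raise the "true" odds of being cited.
X, y = [], []
for _ in range(300):
    oa = 1.0 if random.random() < 0.5 else 0.0
    age = random.uniform(1, 8)
    logit = -1.5 + 1.0 * oa + 0.3 * age
    p = 1 / (1 + math.exp(-logit))
    X.append((1.0, oa, age))                 # intercept, OA flag, age
    y.append(1 if random.random() < p else 0)

# Fit by plain batch gradient ascent on the log-likelihood.
w = [0.0, 0.0, 0.0]
for _ in range(1500):
    grad = [0.0, 0.0, 0.0]
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-sum(wj * xj for wj, xj in zip(w, xi))))
        for j in range(3):
            grad[j] += (yi - p) * xi[j]
    w = [wj + 0.1 * gj / len(X) for wj, gj in zip(w, grad)]

# A positive OA coefficient indicates an OA advantage net of article age.
print("OA coefficient:", round(w[1], 2))
```

A positive OA coefficient after controlling for the other covariates is the shape of evidence the authors report: the advantage survives adjustment rather than being explained away by it.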

"OpenAccess Statistics: Alternative Impact Measures for Open Access Documents?"

Ulrich Herb has self-archived "OpenAccess Statistics: Alternative Impact Measures for Open Access Documents? An Examination How to Generate Interoperable Usage Information from Distributed Open Access Services" in E-LIS.

Here's an excerpt:

Publishing and bibliometric indicators are of utmost relevance for scientists and research institutions, as the impact or importance of a publication (or even of a scientist or an institution) is mostly regarded as equivalent to a citation-based indicator, e.g., in the form of the Journal Impact Factor or the Hirsch-Index. On both an individual and an institutional level, performance measurement depends strongly on these impact scores. This contribution shows that the most common methods to assess the impact of scientific publications often discriminate against Open Access publications, and thereby reduce the attractiveness of Open Access for scientists. Assuming that the motivation to use Open Access publishing services (e.g., a journal or a repository) would increase if these services conveyed some sort of reputation or impact to the scientists, alternative models of impact are discussed. Prevailing research results indicate that alternative metrics based on usage information of electronic documents are suitable to complement, or to put into perspective, citation-based indicators. Furthermore, an insight into the project OpenAccess-Statistics (OA-S) is given. OA-S implemented an infrastructure to collect document-related usage information from distributed Open Access Repositories in an aggregator service in order to generate interoperable document access information according to three standards (COUNTER, LogEc and IFABC). The service also guarantees the deduplication of users and of identical documents on different servers. In a second phase, it is planned not only to implement added services like recommender features, but also to evaluate alternative impact metrics based on usage patterns of electronic documents.
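To make the deduplication idea concrete, here is a minimal sketch of double-click filtering for usage logs. This is my own illustration, not OA-S code, and the single 30-second window is an assumption; the actual COUNTER rules distinguish formats and define their own windows:

```python
# Illustrative sketch (an assumption, not the OA-S implementation): collapse
# repeated requests by the same user for the same document within a short
# "double-click" window before counting, in the spirit of COUNTER-style rules.

DOUBLE_CLICK_WINDOW = 30  # seconds; real COUNTER windows vary by format

def count_accesses(events):
    """events: iterable of (timestamp_seconds, user_id, doc_id), any order."""
    last_seen = {}
    counts = {}
    for ts, user, doc in sorted(events):
        key = (user, doc)
        if key in last_seen and ts - last_seen[key] <= DOUBLE_CLICK_WINDOW:
            last_seen[key] = ts          # still inside the window: not counted
            continue
        last_seen[key] = ts
        counts[doc] = counts.get(doc, 0) + 1
    return counts

events = [
    (100, "u1", "docA"), (105, "u1", "docA"),   # double-click: one access
    (500, "u1", "docA"),                        # later: a second access
    (120, "u2", "docA"), (130, "u2", "docB"),
]
print(count_accesses(events))  # {'docA': 3, 'docB': 1}
```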

"Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research"

Yassine Gargouri, Chawki Hajjem, Vincent Lariviere, Yves Gingras, Tim Brody, Les Carr, and Stevan Harnad have self-archived "Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research" in the ECS EPrints Repository.

Here's an excerpt:

Articles whose authors make them Open Access (OA) by self-archiving them online are cited significantly more than articles accessible only to subscribers. Some have suggested that this "OA Advantage" may not be causal but just a self-selection bias, because authors preferentially make higher-quality articles OA. To test this we compared self-selective self-archiving with mandatory self-archiving for a sample of 27,197 articles published 2002-2006 in 1,984 journals. The OA Advantage proved just as high for both. Logistic regression showed that the advantage is independent of other correlates of citations (article age; journal impact factor; number of co-authors, references or pages; field; article type; country or institution) and greatest for the most highly cited articles. The OA Advantage is real, independent and causal, but skewed. Its size is indeed correlated with quality, just as citations themselves are (the top 20% of articles receive about 80% of all citations). The advantage is greater for the more citeable articles, not because of a quality bias from authors self-selecting what to make OA, but because of a quality advantage, from users self-selecting what to use and cite, freed by OA from the constraints of selective accessibility to subscribers only. [See accompanying RTF file for responses to feedback. Four PDF files provide Supplementary Analysis.]

"Worldwide Use and Impact of the NASA Astrophysics Data System Digital Library"

Michael J. Kurtz et al. have self-archived "Worldwide Use and Impact of the NASA Astrophysics Data System Digital Library" in arXiv.org.

Here's the abstract:

By combining data from the text, citation, and reference databases with data from the ADS readership logs we have been able to create Second Order Bibliometric Operators, a customizable class of collaborative filters which permits substantially improved accuracy in literature queries. Using the ADS usage logs along with membership statistics from the International Astronomical Union and data on the population and gross domestic product (GDP) we develop an accurate model for world-wide basic research where the number of scientists in a country is proportional to the GDP of that country, and the amount of basic research done by a country is proportional to the number of scientists in that country times that country's per capita GDP.

We introduce the concept of utility time to measure the impact of the ADS/URANIA and the electronic astronomical library on astronomical research. We find that in 2002 it amounted to the equivalent of 736 FTE researchers, or $250 Million, or the astronomical research done in France.
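The model in the abstract can be restated compactly: if the number of scientists is proportional to GDP, and basic research is proportional to the number of scientists times per-capita GDP, then research output scales as GDP squared divided by population. A toy calculation (with invented round numbers, not the paper's data):

```python
# Sketch of the model described above (illustrative, not the paper's code):
# scientists ∝ GDP, and research ∝ scientists * (GDP / population),
# so research ∝ GDP^2 / population.

def research_output(gdp, population, k=1.0):
    """Relative basic-research output: k * GDP^2 / population."""
    return k * gdp ** 2 / population

# (GDP in $billions, population in millions) -- invented values only.
countries = {"A": (1000, 50), "B": (500, 50), "C": (1000, 200)}
for name, (gdp, pop) in countries.items():
    print(name, research_output(gdp, pop))
```

Note the consequence: at fixed population, doubling GDP quadruples predicted research output, which is why wealthy small countries punch above their weight in this model.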

"Open Access Archiving and Article Citations within Health Services and Policy Research"

Devon Greyson et al. have self-archived "Open Access Archiving and Article Citations within Health Services and Policy Research" in E-LIS.

Here's an excerpt:

Promoting uptake of research findings is an objective common to those who fund, produce and publish health services and policy research. Open access (OA) is one method being employed to maximize impact. OA articles are online, free to access and use. This paper contributes to the growing body of research exploring the “OA advantage” by employing an article-level analysis comparing citation rates for articles drawn from the same, purposively selected journals. We used a two-stage analytic approach designed to test whether OA is associated with (1) the likelihood that an article is cited at all and (2) the total number of citations that an article receives, conditional on being cited at least once. Adjusting for potential confounders (number of authors, time since publication, journal, and article subject), we found that OA archived articles were 60% more likely to be cited at least once and, once cited, were cited 29% more than non-OA articles.
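The two-stage approach is easy to sketch: stage one asks whether an article is cited at all; stage two looks at citation counts conditional on being cited. The counts below are invented, and the sketch omits the confounder adjustment the authors performed:

```python
# Two-stage comparison on invented citation counts (not the study's data).

def two_stage(citation_counts):
    """Stage 1: share of articles cited at all.
       Stage 2: mean citations among articles cited at least once."""
    cited = [c for c in citation_counts if c > 0]
    share_cited = len(cited) / len(citation_counts)
    mean_if_cited = sum(cited) / len(cited) if cited else 0.0
    return share_cited, mean_if_cited

oa     = [0, 3, 5, 8, 0, 2, 12, 4]   # hypothetical OA articles
non_oa = [0, 1, 4, 0, 0, 2, 6, 3]    # hypothetical non-OA articles

for label, counts in [("OA", oa), ("non-OA", non_oa)]:
    share, mean = two_stage(counts)
    print(f"{label}: {share:.0%} cited at all; {mean:.1f} citations if cited")
```

Splitting the analysis this way avoids letting the many never-cited articles distort the comparison of citation counts among cited ones.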

Hindawi’s Open Access Journals’ Impact Factor Up over 27%

Hindawi's open access journals' average impact factor is up over 27% in the last year.

Here's an excerpt from the press release on liblicense-l:

Hindawi Publishing Corporation is pleased to announce that it has seen very strong growth in the Impact Factors of its journals in the recently released 2008 Journal Citation Report published by Thomson Scientific. This most recent Journal Citation Report shows the average Impact Factor of Hindawi's journals increasing by more than 27% over the past year, with two of Hindawi's largest journals, EURASIP Journal on Advances in Signal Processing and Mathematical Problems in Engineering, rising by 70% and 45% respectively. . . .

In addition to the 14 journals that were included in the 2007 Journal Citation Report, three of Hindawi's journals received Impact Factors for the first time this year: Clinical and Developmental Immunology, EURASIP Journal on Wireless Communications and Networking, and Journal of Nanomaterials.
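For context, the impact factor behind these figures is a simple ratio: citations received in year Y to items published in the two preceding years, divided by the number of citable items published in those years. A toy calculation with invented counts (not Hindawi's actual figures):

```python
# Standard two-year impact factor, sketched with invented counts:
#   IF(2008) = citations in 2008 to items published in 2006-07
#              / citable items published in 2006-07

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    return citations_to_prior_two_years / citable_items_prior_two_years

old_if = impact_factor(300, 250)   # hypothetical 2007 figure: 1.2
new_if = impact_factor(460, 300)   # hypothetical 2008 figure: ~1.53
growth = (new_if - old_if) / old_if
print(f"{old_if:.2f} -> {new_if:.2f} ({growth:.0%} increase)")
```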

E-Journals: Their Use, Value and Impact

The Research Information Network has released E-Journals: Their Use, Value and Impact.

Here's an excerpt from the announcement:

The report was undertaken by the Centre for Information Behaviour and the Evaluation of Research (CIBER) at University College London for the RIN to provide a detailed analysis of how academic researchers in the UK have responded to the provision of e-journals, and how this has shaped their information seeking behaviour and their usage of e-journals. The project looked at:

  • investigating researchers' behaviour: looking at levels and patterns of use, the content viewed, and how they navigate to it
  • finding out how researchers' behaviours may vary by subjects and disciplines, and by the type of university they work at
  • gathering and analysing evidence of relationships between researchers' behaviour and institutional spending on e-journals, and
  • gathering and analysing evidence of relationships between researchers' behaviour and research productivity and outputs, including the number of publications produced, citations attracted, and the results of research evaluation.

2007 Impact Factors for PLoS Journals Released

The Public Library of Science has reported the 2007 impact factors for its journals as calculated by Thomson Reuters:

  • PLoS Biology: 13.5
  • PLoS Medicine: 12.6
  • PLoS Computational Biology: 6.2
  • PLoS Genetics: 8.7
  • PLoS Pathogens: 9.3

Here's an excerpt from the press release:

As we and others have frequently pointed out, impact factors should be interpreted with caution and only as one of a number of measures which provide insight into a journal’s, or rather its articles’, impact. Nevertheless, the 2007 figures for PLoS Biology and PLoS Medicine are consistent with the many other indicators (e.g. submission volume, web statistics, reader and community feedback) that these journals are firmly established as top-flight open-access general interest journals in the life and health sciences respectively.

The increases in the impact factors for the discipline-based, community-run PLoS journals also tally with indicators that these journals are going from strength to strength. For example, submissions to PLoS Computational Biology, PLoS Genetics and PLoS Pathogens have almost doubled over the past year—each journal now routinely receives 80-120 submissions per month of which around 20-25 are published. . . .

Although Thomson is yet to index our two youngest journals, other indexing databases are. The subscription-only Scopus citation index (owned by Elsevier and, incidentally, including many more journals than Thomson’s offering) is already covering PLoS ONE (though so far, only as far back as June 2007). But authors don’t need to rely on subscription-only indexes such as those owned by Thomson and Elsevier, and can instead use the freely-available Google Scholar. Using Google Scholar, for example, one can find that the article by Neal Fahlgren and coauthors, about the cataloguing of an important class of RNA in plants and one of the most highly cited PLoS ONE articles so far, has been cited 42 times—strong evidence that good research, even if published in a new journal, will rapidly find its place in the scientific record when it’s made freely available to all.

Citation Statistics Report Released

The International Mathematical Union in cooperation with the International Council of Industrial and Applied Mathematics and the Institute of Mathematical Statistics have released Citation Statistics.

Here's an excerpt from the Executive Summary:

This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.

  • Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
  • While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
  • The sole reliance on citation data provides at best an incomplete and often shallow understanding of research—an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.
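One concrete way the report's caution plays out: citation distributions are highly skewed, so a single average can badly misrepresent the typical article. A toy example with invented counts:

```python
# Skewed citation distributions: the mean and median tell different stories.
# The counts below are invented for illustration.

citations = [0, 0, 1, 1, 2, 2, 3, 4, 5, 82]  # one blockbuster paper

mean = sum(citations) / len(citations)
median = sorted(citations)[len(citations) // 2]  # even n: upper middle value
print(f"mean={mean}, median={median}")  # the mean is pulled up by one outlier
```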

BMC’s Impact Factors: Elsevier’s Take and Reactions to It

A growing body of research suggests that open access may increase the impact of scholarly literature (see Steve Hitchcock’s "Effect of Open Access and Downloads ("Hits") on Citation Impact: A Bibliography of Studies"). Consequently, "impact factors" play an important part in the ongoing dialog about the desirability of the open access model.

On June 23, 2005, BioMed Central issued a press release entitled "Open Access Journals Get Impressive Impact Factors" that discussed the impact factors for their journals. You can consult the press release for the details, but the essence of it was expressed in this quote from Matthew Cockerill, Director of Operations at BioMed Central:

These latest impact factors show that BioMed Central’s Open Access journals have joined the mainstream of science publishing, and can compete with traditional journals on their own terms. The impact factors also demonstrate one of the key benefits that Open Access offers authors: high visibility and, as a result, a high rate of citation.

On July 8, 2005, Tony McSean, Director of Library Relations for Elsevier, sent an e-mail message to SPARC-OAForum@arl.org ("OA and Impressive Impact Factors—Non Propter Hoc") that presented Elsevier’s analysis of the BMC data, putting it "into context with those of the major subscription-based publishers." Again, I would encourage you to read this analysis. The gist of the argument is as follows:

This comparison with four major STM publishers demonstrates that BMC’s overall IF results are unremarkable, and that they certainly do not provide evidence to support the common assertion that the open access publishing model increases impact factor scores.

My reaction was as follows.

These interesting observations do not appear to account for one difference between BMC journals and the journals of other publishers: their age. Well-established, older journals are more likely to have attained the credibility required for high IFs than newer ones (if they ever will attain such credibility).

Moreover, there is another difference: BMC journals are primarily e-journals, not print journals with derivative electronic counterparts. Although true e-journals have gained significant ground, I suspect that they still start out with a steeper hill to climb credibility-wise than traditional print journals.

Third, since it involves paying a fee, the author-pays model requires a higher motivation on the part of the author to publish in such journals, likely leading to a smaller pool of potential authors. To obtain high journal IFs, these had better be good authors. And, for good authors to publish in such journals, they must hold them in high regard because they have other alternatives.

So, if this analysis is correct, for BMC journals to have attained "unremarkable" IFs is a notable accomplishment because they have attained parity with conventional journals that have some significant advantages.

Earlier in the day, Dr. David Goodman, Associate Professor of the Palmer School of Library and Information Science, commented (unbeknownst to me since I read the list in digest form):

1/ I doubt anyone is contending that at this point any of the BMC titles are better than the best titles from other publishers. The point is that they are at least as good as the average, and the best of them well above average. For a new publisher, that is a major accomplishment—and one that initially seemed rather doubtful. . . .

2/ Normally, publishing in a relatively obscure and newly founded journal would come at some disadvantage to the author, regardless of how the journal was financed. . . .

3/ You can’t judge OA advantage from IF alone. IF refers to journals, OA advantage refers to individual articles. The most convincing studies on OA advantage are those with paired comparisons of articles, as Stevan Harnad has explained in detail.

4/ Most of the BMC titles, the ones beginning with the BMC journal of…, are OA completely. For the ones with Toll Access reviews etc., there is obviously much less availability of those portions than the OA primary research, so I doubt the usual review journal effect applies to the same extent as usual.

On July 9, 2005, Matt Cockerill sent a rebuttal to the SPARC-OAForum that said in part:

Firstly, the statistics you give are based on the set of journals that have ISI impact factors (in fact, they cover only journals which had 2003 Impact Factors). . . . Many of BioMed Central’s best journals are not yet tracked by ISI.

Secondly, comparing the percentage of Impact Factors going up or down does not seem a particularly meaningful metric. What is important, surely, is the actual value of the Impact Factor (relative to others in the field). In that regard, BioMed Central titles have done extremely well, and several are close to the top of their disciplines. . . .

Thirdly, you raise the point that review articles can boost a journal’s Impact Factor, and that many journals publish review articles specifically with the intention of improving their Impact Factor. This is certainly true, but of BioMed Central’s 130+ journals, all but six are online research journals, and publish virtually no review articles whatsoever. . . .

No reply yet from Elsevier, but, whether there is or not, I’m sure that we have not heard the last of the "impact factor" argument.

Stevan Harnad has made it clear that what he calls the "journal-affordability problem" is not the focus of open access (this is perhaps best expressed in Harnad et al.’s "The Access/Impact Problem and the Green and Gold Roads to Open Access"). The real issue is the "research article access/impact problem":

Merely to do the research and then put your findings in a desk drawer is no better than not doing the research at all. Researchers must submit their research to peer review and then "publish or perish," so others can use and apply their findings. But getting findings peer-reviewed and published is not enough either. Other researchers must find the findings useful, as proved by their actually using and citing them. And to be able to use and cite them, they must first be able to access them. That is the research article access/impact problem.

To see that the journal-affordability problem and the article access/impact problem are not the same one need only note that even if all 24,000 peer-reviewed research journals were sold to universities at cost (i.e., with not a penny of profit) it would still be true that almost no university has anywhere near enough money to afford all or even most of the 24,000 journals, even at minimal access-tolls (http://fisher.lib.virginia.edu/cgi-local/arlbin/arl.cgi?task=setuprank). Hence, it would remain true even then that not all would-be users could access all of the yearly 2.5 million articles, and hence that that potential research impact would continue to be lost.

So although the two problems are connected (lower journal prices would indeed generate somewhat more access), solving the journal-affordability problem does not solve the research access/impact problem.

Of course, there are different views of open access, but, for the moment, let’s say that this view is the prevailing one and that this is the most compelling argument to win the hearts and minds of scholars for open access. Open access will rise or fall based on its demonstrated ability to significantly boost impact factors, and the battle to prove or disprove this effect will be fierce indeed.