"The Rich Get Richer and the Poor Get Poorer: The Effect of Open Access on Cites to Science Journals Across the Quality Spectrum"

Mark J. McCabe and Christopher M. Snyder have self-archived "The Rich Get Richer and the Poor Get Poorer: The Effect of Open Access on Cites to Science Journals Across the Quality Spectrum" in SSRN.

Here's an excerpt:

An open-access journal allows free online access to its articles, obtaining revenue from fees charged to submitting authors. Using panel data on science journals, we are able to circumvent some problems plaguing previous studies of the impact of open access on citations. We find that moving from paid to open access increases cites by 8% on average in our sample, but the effect varies across the quality of content. Open access increases cites to the best content (top-ranked journals or articles in upper quintiles of citations within a volume) but reduces cites to lower-quality content. We construct a model to explain these findings in which being placed on a broad open-access platform can increase the competition among articles for readers' attention. We can find structural parameters allowing the model to fit the quintile results quite closely.

Article-Level Metrics—A SPARC Primer

SPARC has released Article-Level Metrics—A SPARC Primer.

Here's an excerpt:

Article-Level Metrics (ALMs) are rapidly emerging as important tools to quantify how individual articles are being discussed, shared, and used. ALMs can be employed in conjunction with existing metrics, which have traditionally focused on the long-term impact of a collection of articles (i.e., a journal) based on the number of citations generated. This primer is designed to give campus leaders and other interested parties an overview of what ALMs are, why they matter, how they complement established utilities, and how they can be used in the tenure and promotion process.

"New Opportunities for Repositories in the Age of Altmetrics"

Stacy Konkiel and Dave Scherer have published "New Opportunities for Repositories in the Age of Altmetrics" in the latest issue of the Bulletin of the Association for Information Science and Technology.

Here's an excerpt:

By reporting altmetrics (alternative metrics based on online activity) for their content, institutional repositories can add value to existing metrics—and prove their relevance and importance in an age of growing cutbacks to library services. This article will discuss the metrics that repositories currently deliver and how altmetrics can supplement existing usage statistics to provide a broader interpretation of research-output impact for the benefit of authors, library-based publishers and repository managers, and university administrators alike.

"F1000 Recommendations as a New Data Source for Research Evaluation: A Comparison with Citations"

Ludo Waltman and Rodrigo Costas have self-archived "F1000 Recommendations as a New Data Source for Research Evaluation: A Comparison with Citations" in arXiv.org.

Here's an excerpt:

F1000 is a post-publication peer review service for biological and medical research. F1000 aims to recommend important publications in the biomedical literature, and from this perspective F1000 could be an interesting tool for research evaluation. By linking the complete database of F1000 recommendations to the Web of Science bibliographic database, we are able to make a comprehensive comparison between F1000 recommendations and citations. We find that about 2% of the publications in the biomedical literature receive at least one F1000 recommendation. Recommended publications on average receive 1.30 recommendations, and over 90% of the recommendations are given within half a year after a publication has appeared. There turns out to be a clear correlation between F1000 recommendations and citations. However, the correlation is relatively weak, at least weaker than the correlation between journal impact and citations.

"Manipulating Google Scholar Citations and Google Scholar Metrics: Simple, Easy and Tempting"

Emilio Delgado López-Cózar, Nicolás Robinson-García, and Daniel Torres-Salinas have self-archived "Manipulating Google Scholar Citations and Google Scholar Metrics: Simple, Easy and Tempting" in arXiv.org.

Here's an excerpt:

The launch of Google Scholar Citations and Google Scholar Metrics may provoke a revolution in the research evaluation field, as it places within every researcher's reach tools that allow bibliometric measuring. In order to alert the research community to how easily one can manipulate the data and bibliometric indicators offered by Google's products, we present an experiment in which we manipulate the Google Scholar Citations profiles of a research group through the creation of false documents that cite their documents, consequently modifying the h-index of the researchers and of the journals in which they have published. . . . We analyse the malicious effect this type of practice can have on Google Scholar Citations and Google Scholar Metrics. Finally, we conclude with several reflections on the effects these malpractices may have and on the lack of control mechanisms these tools offer.

"On the Impact of Gold Open Access Journals"

Christian Gumpenberger, María-Antonia Ovalle-Perandones, and Juan Gorraiz have self-archived "On the Impact of Gold Open Access Journals" in u:scholar.

Here's an excerpt:

This study identified the current set of Gold Open Access journals featuring a Journal Impact Factor (JIF) by means of Ulrichsweb, the Directory of Open Access Journals, and Journal Citation Reports (JCR). The results were analyzed regarding disciplines, countries, quartiles of the JIF distribution in JCR, and publishers. Furthermore, the temporal impact evolution was studied for a Top 50 titles list (according to JIF) by means of the Journal Impact Factor, SJR, and SNIP in the time interval 2000-2010. The identified top Gold Open Access journals proved to be well-established, and their impact is generally increasing for all the analyzed indicators. The majority of JCR-indexed OA journals can be assigned to Life Sciences and Medicine. The success rate for JCR inclusion differs from country to country and is often inversely proportional to the number of national OA journal titles.
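
The identification step the authors describe (cross-matching the DOAJ title list against the JIF-bearing titles in JCR, with Ulrichsweb as a further check) is essentially a join on journal identifiers. A minimal sketch in Python, with rows invented for the example rather than taken from the study:

    # Sketch: find Gold OA journals holding a JIF by joining a DOAJ-style
    # title list with a JCR-style list on ISSN. Rows are invented examples.
    import pandas as pd

    doaj = pd.DataFrame({
        "issn":  ["1234-5678", "2345-6789", "3456-7890"],
        "title": ["OA Journal A", "OA Journal B", "OA Journal C"],
    })
    jcr = pd.DataFrame({
        "issn":     ["2345-6789", "3456-7890", "9999-0000"],
        "jif":      [4.2, 1.1, 7.5],
        "quartile": ["Q1", "Q3", "Q1"],
    })

    gold_oa_with_jif = doaj.merge(jcr, on="issn")  # OA journals with a JIF
    ranked = gold_oa_with_jif.sort_values("jif", ascending=False)  # cf. Top 50
    print(ranked[["title", "jif", "quartile"]])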

Thomson Reuters Launches Data Citation Index

Thomson Reuters has launched the Data Citation Index within the Web of Knowledge.

Here's an excerpt from the press release:

This new research resource from Thomson Reuters creates a single source of discovery for scientific, social sciences and arts and humanities information. It provides a single access point to discover foundational research within data repositories around the world in the broader context of peer-reviewed literature in journals, books, and conference proceedings already indexed in the Web of Knowledge. . . .

The Thomson Reuters Data Citation Index makes research within the digital universe discoverable, citable and viewable within the context of the output the data has informed. Thomson Reuters partnered with numerous data repositories worldwide to capture bibliographic records and cited references for digital research, facilitating visibility, author attribution, and ultimately the measurement of impact of this growing body of scholarship.

"Open Access versus Subscription Journals: A Comparison of Scientific Impact"

Bo-Christer Björk and David Solomon have published "Open Access versus Subscription Journals: A Comparison of Scientific Impact" in BMC Medicine.

Here's an excerpt:

Overall, average citation rates, both unweighted and weighted for the number of articles per journal, were about 30% higher for subscription journals. However, after controlling for discipline (medicine and health versus other), age of the journal (three time periods) and the location of the publisher (four largest publishing countries versus other countries) the differences largely disappeared in most subcategories except for journals that had been launched prior to 1996. OA journals that fund publishing with article processing charges (APCs) are on average cited more than other OA journals. In medicine and health, OA journals founded in the last 10 years are receiving about as many citations as subscription journals launched during the same period.

"Open Metrics for Open Repositories"

Brian Kelly, Nick Sheppard, Jenny Delasalle, Mark Dewey, Owen Stephens, Gareth J Johnson, and Stephanie Taylor have self-archived "Open Metrics for Open Repositories" in University of Bath Research.

Here's an excerpt:

Increasingly there is a need for quantitative evidence in order to help demonstrate the value of online services. Such evidence can also help to detect emerging patterns of usage and identify associated operational best practice. This paper seeks to initiate a discussion on approaches to metrics for institutional repositories by providing a high-level overview of the benefits of metrics for a variety of stakeholders. The paper outlines the potential benefits which can be gained from providing richer statistics related to the use of institutional repositories and also reviews related work in this area. The authors describe a JISC-funded project which harvested a large number of repositories in order to identify patterns of use of metadata attributes and summarise the key findings. The paper provides a case study which reviews plans to provide a richer set of statistics within one institutional repository as well as requirements from the researcher community. An example of how third-party aggregation services may provide metrics on behalf of the repository community is given. The authors conclude with a call for repository managers, developers and policy makers to be pro-active in providing open access to metrics for open repositories.
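
Harvesting of the kind the JISC-funded project carried out is typically done over OAI-PMH, and counting which metadata attributes repositories actually populate is then a simple tally. A rough sketch (the inline XML stands in for a real ListRecords response; this is not the project's code):

    # Sketch: tally which Dublin Core attributes are populated in records
    # harvested via OAI-PMH (verb=ListRecords&metadataPrefix=oai_dc). The
    # XML below is a tiny inline stand-in for a real response.
    import xml.etree.ElementTree as ET
    from collections import Counter

    DC = "{http://purl.org/dc/elements/1.1/}"
    sample_record = """
    <record xmlns:dc="http://purl.org/dc/elements/1.1/">
      <dc:title>An Example Deposit</dc:title>
      <dc:creator>Doe, J.</dc:creator>
      <dc:date>2011-05-01</dc:date>
    </record>"""

    counts = Counter()
    for elem in ET.fromstring(sample_record).iter():
        if elem.tag.startswith(DC):
            counts[elem.tag[len(DC):]] += 1  # e.g. title, creator, date

    print(counts.most_common())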

"Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact"

Jason Priem, Heather A. Piwowar, and Bradley M. Hemminger have self-archived "Altmetrics in the Wild: Using Social Media to Explore Scholarly Impact" in arXiv.org.

Here's an excerpt:

In growing numbers, scholars are integrating social media tools like blogs, Twitter, and Mendeley into their professional communications. The online, public nature of these tools exposes and reifies scholarly processes once hidden and ephemeral. Metrics based on these activities could inform broader, faster measures of impact, complementing traditional citation metrics. This study explores the properties of these social media-based metrics or "altmetrics," sampling 24,331 articles published by the Public Library of Science. We find that different indicators vary greatly in activity. Around 5% of sampled articles are cited in Wikipedia, while close to 80% have been included in at least one Mendeley library. There is, however, an encouraging diversity; a quarter of articles have nonzero data from five or more different sources. Correlation and factor analysis suggest citation and altmetrics indicators track related but distinct impacts, with neither able to describe the complete picture of scholarly use alone. There are moderate correlations between Mendeley and Web of Science citation, but many altmetric indicators seem to measure impact mostly orthogonal to citation. Articles cluster in ways that suggest five different impact "flavors," capturing impacts of different types on different audiences; for instance, some articles may be heavily read and saved by scholars but seldom cited. Together, these findings encourage more research into altmetrics as complements to traditional citation measures.
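
The correlation portion of the analysis is easy to illustrate; the indicator columns and values below are stand-ins, not the study's data:

    # Sketch: pairwise Spearman correlations among impact indicators, in the
    # spirit of the analysis described above. Values are invented stand-ins.
    import pandas as pd
    from scipy.stats import spearmanr

    df = pd.DataFrame({
        "wos_citations":  [10, 3, 0, 25, 7],
        "mendeley_saves": [40, 12, 2, 90, 15],
        "tweets":         [5, 0, 1, 30, 2],
    })

    rho, p = spearmanr(df)  # with a 2-D input, returns full matrices
    print(pd.DataFrame(rho, index=df.columns, columns=df.columns))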

"How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations"

Xin Shuai, Alberto Pepe, and Johan Bollen have self-archived "How the Scientific Community Reacts to Newly Submitted Preprints: Article Downloads, Twitter Mentions, and Citations" in arXiv.org.

Here's an excerpt:

We analyze the online response of the scientific community to the preprint publication of scholarly articles. We employ a cohort of 4,606 scientific articles submitted to the preprint database arXiv.org between October 2010 and April 2011. We study three forms of reactions to these preprints: how they are downloaded on the arXiv.org site, how they are mentioned on the social media site Twitter, and how they are cited in the scholarly record. We perform two analyses. First, we analyze the delay and time span of article downloads and Twitter mentions following submission, to understand the temporal configuration of these reactions and whether significant differences exist between them. Second, we run correlation tests to investigate the relationship between Twitter mentions and both article downloads and article citations. We find that Twitter mentions follow rapidly after article submission and that they are correlated with later article downloads and later article citations, indicating that social media may be an important factor in determining the scientific impact of an article.

"Evaluating Repository Annual Metrics for SCONUL"

Gareth James Johnson has self-archived "Evaluating Repository Annual Metrics for SCONUL" in the Leicester Research Archive.

Here's an excerpt:

This report is a summarisation of the responses to a recent survey of the UKCoRR membership concerning the use of full-text downloads as a repository performance metric within the SCONUL annual statistical survey. It hopes to present a representative snapshot of the current opinions in this area from repository managers.

"Bibliometrics: A New Feature for Institutional Repositories"

Frederic Merceur, Morgane Le Gall, and Annick Salaun have self-archived "Bibliometrics: A New Feature for Institutional Repositories" in Archimer.

Here's an excerpt:

In addition to its promotion and conservation objectives, Archimer, Ifremer’s institutional repository, offers a wide range of bibliometric tools described in this document.

As early as the recording stage, numerous automatic operations homogenize the information (author's name, research body, department…), thus improving the quality of the bibliometric analyses.

Archimer now enables, among other things, the automatic calculation of several indicators defined by Ifremer and the ministries in charge in the framework of its four-year contract. It also offers various criteria aimed at analysing its document production (e.g., the distribution of the journals' impact factor values, the evolution of the number of citations in other publications, the presentation of international collaborations…).

"Citation Advantage of Open Access Legal Scholarship"

James M. Donovan and Carol A. Watson have self-archived "Citation Advantage of Open Access Legal Scholarship" in UKnowledge.

Here's an excerpt:

To date, there have been no studies focusing exclusively on the impact of open access on legal scholarship. We examine open access articles from three journals at the University of Georgia School of Law and confirm that legal scholarship freely available via open access improves an article's research impact. Open access legal scholarship – which today appears to account for almost half of the output of law faculties – can expect to receive 50% more citations than non-open access writings of similar age from the same venue.

"Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research"

Yassine Gargouri et al. have published "Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research" in PLoS ONE.

Here's an excerpt:

Background

Articles whose authors have supplemented subscription-based access to the publisher's version by self-archiving their own final draft to make it accessible free for all on the web (“Open Access”, OA) are cited significantly more than articles in the same journal and year that have not been made OA. Some have suggested that this “OA Advantage” may not be causal but just a self-selection bias, because authors preferentially make higher-quality articles OA. To test this we compared self-selective self-archiving with mandatory self-archiving for a sample of 27,197 articles published 2002-2006 in 1,984 journals.

Methodology/Principal Findings

The OA Advantage proved just as high for both. Logistic regression analysis showed that the advantage is independent of other correlates of citations (article age; journal impact factor; number of co-authors, references or pages; field; article type; or country) and highest for the most highly cited articles. The OA Advantage is real, independent and causal, but skewed. Its size is indeed correlated with quality, just as citations themselves are (the top 20% of articles receive about 80% of all citations).

Conclusions/Significance

The OA advantage is greater for the more citable articles, not because of a quality bias from authors self-selecting what to make OA, but because of a quality advantage, from users self-selecting what to use and cite, freed by OA from the constraints of selective accessibility to subscribers only. It is hoped that these findings will help motivate the adoption of OA self-archiving mandates by universities, research institutions and research funders.

"OpenAccess Statistics: Alternative Impact Measures for Open Access Documents?"

Ulrich Herb has self-archived "OpenAccess Statistics: Alternative Impact Measures for Open Access Documents? An Examination How to Generate Interoperable Usage Information from Distributed Open Access Services" in E-LIS.

Here's an excerpt:

Publishing and bibliometric indicators are of utmost relevance for scientists and research institutions, as the impact or importance of a publication (or even of a scientist or an institution) is mostly regarded as equivalent to a citation-based indicator, e.g., the Journal Impact Factor or the Hirsch index. On both an individual and an institutional level, performance measurement depends strongly on these impact scores. This contribution shows that the most common methods of assessing the impact of scientific publications often discriminate against Open Access publications and thereby reduce the attractiveness of Open Access for scientists. Assuming that the motivation to use Open Access publishing services (e.g., a journal or a repository) would increase if these services conveyed some sort of reputation or impact to scientists, alternative models of impact are discussed. Existing research results indicate that alternative metrics based on usage information for electronic documents are suitable for complementing, or putting into perspective, citation-based indicators. Furthermore, an insight into the project OpenAccess-Statistics (OA-S) is given. OA-S implemented an infrastructure that collects document-related usage information from distributed Open Access repositories in an aggregator service in order to generate interoperable document access information according to three standards (COUNTER, LogEc, and IFABC). The service also guarantees the deduplication of users and of identical documents held on different servers. In a second phase, the project plans not only to implement added services such as recommender features but also to evaluate alternative impact metrics based on the usage patterns of electronic documents.
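
One concrete piece of the interoperability problem is COUNTER's double-click filtering, under which repeat requests for the same document by the same user within a short window are counted once (30 seconds is the commonly cited threshold for PDF downloads, 10 for HTML). A minimal sketch of such deduplication, assuming log rows of (user, document, timestamp):

    # Sketch: COUNTER-style double-click filtering over a usage log.
    # Rows are (user_id, doc_id, unix_time); the 30-second window reflects
    # the commonly cited COUNTER threshold for PDF downloads.
    def filter_double_clicks(events, window=30):
        """Count each (user, doc) request at most once per `window` seconds."""
        last_seen = {}  # (user_id, doc_id) -> time of last request
        counted = []
        for user, doc, t in sorted(events, key=lambda e: e[2]):
            key = (user, doc)
            if key not in last_seen or t - last_seen[key] > window:
                counted.append((user, doc, t))
            last_seen[key] = t
        return counted

    log = [("u1", "d9", 100), ("u1", "d9", 105), ("u1", "d9", 200)]
    print(len(filter_double_clicks(log)))  # 2: the request at t=105 is filtered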

"Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research"

Yassine Gargouri, Chawki Hajjem, Vincent Larivière, Yves Gingras, Tim Brody, Les Carr, and Stevan Harnad have self-archived "Self-Selected or Mandated, Open Access Increases Citation Impact for Higher Quality Research" in the ECS EPrints Repository.

Here's an excerpt:

Articles whose authors make them Open Access (OA) by self-archiving them online are cited significantly more than articles accessible only to subscribers. Some have suggested that this "OA Advantage" may not be causal but just a self-selection bias, because authors preferentially make higher-quality articles OA. To test this we compared self-selective self-archiving with mandatory self-archiving for a sample of 27,197 articles published 2002-2006 in 1,984 journals. The OA Advantage proved just as high for both. Logistic regression showed that the advantage is independent of other correlates of citations (article age; journal impact factor; number of co-authors, references or pages; field; article type; country or institution) and greatest for the most highly cited articles. The OA Advantage is real, independent and causal, but skewed. Its size is indeed correlated with quality, just as citations themselves are (the top 20% of articles receive about 80% of all citations). The advantage is greater for the more citeable articles, not because of a quality bias from authors self-selecting what to make OA, but because of a quality advantage, from users self-selecting what to use and cite, freed by OA from the constraints of selective accessibility to subscribers only. [See accompanying RTF file for responses to feedback. Four PDF files provide Supplementary Analysis.]

"Worldwide Use and Impact of the NASA Astrophysics Data System Digital Library"

Michael J. Kurtz et al. have self-archived "Worldwide Use and Impact of the NASA Astrophysics Data System Digital Library" in arXiv.org.

Here's the abstract:

By combining data from the text, citation, and reference databases with data from the ADS readership logs we have been able to create Second Order Bibliometric Operators, a customizable class of collaborative filters which permits substantially improved accuracy in literature queries. Using the ADS usage logs along with membership statistics from the International Astronomical Union and data on the population and gross domestic product (GDP) we develop an accurate model for world-wide basic research where the number of scientists in a country is proportional to the GDP of that country, and the amount of basic research done by a country is proportional to the number of scientists in that country times that country's per capita GDP.
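
In symbols (notation mine), the model described in the abstract reads:

    % S_c = scientists in country c, G_c = GDP, P_c = population,
    % R_c = basic research output
    S_c \propto G_c, \qquad
    R_c \propto S_c \cdot \frac{G_c}{P_c}
    \quad\Longrightarrow\quad
    R_c \propto \frac{G_c^{2}}{P_c}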

We introduce the concept of utility time to measure the impact of the ADS/URANIA and the electronic astronomical library on astronomical research. We find that in 2002 it amounted to the equivalent of 736 FTE researchers, or $250 million, or the astronomical research done in France.

"Open Access Archiving and Article Citations within Health Services and Policy Research"

Devon Greyson et al. have self-archived "Open Access Archiving and Article Citations within Health Services and Policy Research" in E-LIS.

Here's an excerpt:

Promoting uptake of research findings is an objective common to those who fund, produce, and publish health services and policy research. Open access (OA) is one method being employed to maximize impact. OA articles are online, free to access and use. This paper contributes to a growing body of research exploring the "OA advantage" by employing an article-level analysis comparing citation rates for articles drawn from the same, purposively selected journals. We used a two-stage analytic approach designed to test whether OA is associated with (1) the likelihood that an article is cited at all and (2) the total number of citations that an article receives, conditional on being cited at least once. Adjusting for potential confounders (number of authors, time since publication, journal, and article subject), we found that OA-archived articles were 60% more likely to be cited at least once and, once cited, were cited 29% more than non-OA articles.
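
The two-stage approach the authors describe (a model for whether an article is cited at all, then a count model conditional on at least one citation) resembles a hurdle model. A minimal sketch with statsmodels, using invented variable names and simulated data rather than the authors' actual specification:

    # Sketch of a two-stage ("hurdle") citation analysis like the one above:
    # stage 1 models whether an article is cited at all; stage 2 models the
    # citation count among cited articles. Data and covariates are invented.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 500
    X = sm.add_constant(np.column_stack([
        rng.integers(0, 2, n),   # oa_archived (1 = OA copy available)
        rng.integers(1, 10, n),  # n_authors
        rng.uniform(1, 8, n),    # years_since_publication
    ]))
    cites = rng.poisson(2, n)    # stand-in outcome, not real data

    # Stage 1: probability of being cited at least once.
    stage1 = sm.Logit((cites > 0).astype(int), X).fit(disp=0)

    # Stage 2: citation count, conditional on being cited (Poisson here;
    # a negative binomial would allow for overdispersion).
    mask = cites > 0
    stage2 = sm.Poisson(cites[mask], X[mask]).fit(disp=0)

    print(stage1.params, stage2.params)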

Hindawi’s Open Access Journals’ Impact Factor Up over 27%

Hindawi's open access journals' average impact factor is up over 27% in the last year.

Here's an excerpt from the press release on liblicense-l:

Hindawi Publishing Corporation is pleased to announce that it has seen very strong growth in the Impact Factors of its journals in the recently released 2008 Journal Citation Report published by Thomson Scientific. This most recent Journal Citation Report shows the average Impact Factor of Hindawi's journals increasing by more than 27% over the past year, with two of Hindawi's largest journals, EURASIP Journal on Advances in Signal Processing and Mathematical Problems in Engineering, rising by 70% and 45% respectively. . . .

In addition to the 14 journals that were included in the 2007 Journal Citation Report, three of Hindawi's journals received Impact Factors for the first time this year: Clinical and Developmental Immunology, EURASIP Journal on Wireless Communications and Networking, and Journal of Nanomaterials.
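
For reference, the Journal Impact Factor behind these percentages is a simple ratio; a journal's 2008 figure, for instance, is:

    \mathrm{JIF}_{2008} =
    \frac{\text{citations received in 2008 to items published in 2006--2007}}
         {\text{citable items published in 2006--2007}}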

E-Journals: Their Use, Value and Impact

The Research Information Network has released E-Journals: Their Use, Value and Impact.

Here's an excerpt from the announcement:

The report was undertaken by the Centre for Information Behaviour and the Evaluation of Research (CIBER) at University College London for the RIN to provide a detailed analysis of how academic researchers in the UK have responded to the provision of e-journals, and how this has shaped their information seeking behaviour and their usage of e-journals. The project looked at:

  • investigating researchers' behaviour: looking at levels and patterns of use, the content viewed, and how they navigate to it
  • finding out how researchers' behaviours may vary by subject and discipline, and by the type of university they work at
  • gathering and analysing evidence of relationships between researchers' behaviour and institutional spending on e-journals, and
  • gathering and analysing evidence of relationships between researchers' behaviour and research productivity and outputs, including the number of publications produced, citations attracted, and the results of research evaluation.

2007 Impact Factors for PLoS Journals Released

The Public Library of Science has reported the 2007 impact factors for its journals as calculated by Thomson Reuters:

  • PLoS Biology: 13.5
  • PLoS Medicine: 12.6
  • PLoS Computational Biology: 6.2
  • PLoS Genetics: 8.7
  • PLoS Pathogens: 9.3

Here's an excerpt from the press release:

As we and others have frequently pointed out, impact factors should be interpreted with caution and only as one of a number of measures which provide insight into a journal’s, or rather its articles’, impact. Nevertheless, the 2007 figures for PLoS Biology and PLoS Medicine are consistent with the many other indicators (e.g. submission volume, web statistics, reader and community feedback) that these journals are firmly established as top-flight open-access general interest journals in the life and health sciences respectively.

The increases in the impact factors for the discipline-based, community-run PLoS journals also tally with indicators that these journals are going from strength to strength. For example, submissions to PLoS Computational Biology, PLoS Genetics and PLoS Pathogens have almost doubled over the past year—each journal now routinely receives 80-120 submissions per month of which around 20-25 are published. . . .

Although Thomson is yet to index our two youngest journals, other indexing databases are. The subscription-only Scopus citation index (owned by Elsevier and, incidentally, including many more journals than Thomson's offering) is already covering PLoS ONE (though so far, only as far back as June 2007). But authors don't need to rely on subscription-only indexes such as those owned by Thomson and Elsevier, and can instead use the freely-available Google Scholar. Using Google Scholar, for example, one can find that the article by Neal Fahlgren and coauthors, about the cataloguing of an important class of RNA in plants and one of the most highly cited PLoS ONE articles so far, has been cited 42 times—strong evidence that good research, even if published in a new journal, will rapidly find its place in the scientific record when it's made freely available to all.

Citation Statistics Report Released

The International Mathematical Union, in cooperation with the International Council for Industrial and Applied Mathematics and the Institute of Mathematical Statistics, has released Citation Statistics.

Here's an excerpt from the Executive Summary:

This is a report about the use and misuse of citation data in the assessment of scientific research. The idea that research assessment must be done using "simple and objective" methods is increasingly prevalent today. The "simple and objective" methods are broadly interpreted as bibliometrics, that is, citation data and the statistics derived from them. There is a belief that citation statistics are inherently more accurate because they substitute simple numbers for complex judgments, and hence overcome the possible subjectivity of peer review. But this belief is unfounded.

  • Relying on statistics is not more accurate when the statistics are improperly used. Indeed, statistics can mislead when they are misapplied or misunderstood. Much of modern bibliometrics seems to rely on experience and intuition about the interpretation and validity of citation statistics.
  • While numbers appear to be "objective", their objectivity can be illusory. The meaning of a citation can be even more subjective than peer review. Because this subjectivity is less obvious for citations, those who use citation data are less likely to understand their limitations.
  • The sole reliance on citation data provides at best an incomplete and often shallow understanding of research—an understanding that is valid only when reinforced by other judgments. Numbers are not inherently superior to sound judgments.

BMC’s Impact Factors: Elsevier’s Take and Reactions to It

A growing body of research suggests that open access may increase the impact of scholarly literature (see Steve Hitchcock’s "Effect of Open Access and Downloads ("Hits") on Citation Impact: A Bibliography of Studies"). Consequently, "impact factors" play an important part in the ongoing dialog about the desirability of the open access model.

On June 23, 2005, BioMed Central issued a press release entitled "Open Access Journals Get Impressive Impact Factors" that discussed the impact factors for their journals. You can consult the press release for the details, but the essence of it was expressed in this quote from Matthew Cockerill, Director of Operations at BioMed Central:

These latest impact factors show that BioMed Central’s Open Access journals have joined the mainstream of science publishing, and can compete with traditional journals on their own terms. The impact factors also demonstrate one of the key benefits that Open Access offers authors: high visibility and, as a result, a high rate of citation.

On July 8, 2005, Tony McSean, Director of Library Relations for Elsevier, sent an e-mail message to SPARC-OAForum@arl.org ("OA and Impressive Impact Factors—Non Propter Hoc") that presented Elsevier's analysis of the BMC data, putting it "into context with those of the major subscription-based publishers." Again, I would encourage you to read this analysis. The gist of the argument is as follows:

This comparison with four major STM publishers demonstrates that BMC’s overall IF results are unremarkable, and that they certainly do not provide evidence to support the common assertion that the open access publishing model increases impact factor scores.

My reaction was as follows.

These interesting observations do not appear to account for one difference between BMC journals and the journals of other publishers: their age. Well-established, older journals are more likely to have attained the credibility required for high IFs than newer ones (if they ever will attain such credibility).

Moreover, there is another difference: BMC journals are primarily e-journals, not print journals with derivative electronic counterparts. Although true e-journals have gained significant ground, I suspect that they still start out with a steeper hill to climb credibility-wise than traditional print journals.

Third, since it involves paying a fee, the author-pays model requires a higher motivation on the part of the author to publish in such journals, likely leading to a smaller pool of potential authors. To obtain high journal IFs, these had better be good authors. And, for good authors to publish in such journals, they must hold them in high regard because they have other alternatives.

So, if this analysis is correct, for BMC journals to have attained "unremarkable" IFs is a notable accomplishment because they have attained parity with conventional journals that have some significant advantages.

Earlier in the day, Dr. David Goodman, Associate Professor at the Palmer School of Library and Information Science, commented (unbeknownst to me, since I read the list in digest form):

1/ I doubt anyone is contending that at this point any of the BMC titles are better than the best titles from other publishers. The point is that they are at least as good as the average, and the best of them well above average. For a new publisher, that is a major accomplishment—and one that initially seemed rather doubtful. . . .

2/ Normally, publishing in a relative obscure and newly founded journal would come at some disadvantage to the author, regardless of how the journal was financed. . . .

3/ You can’t judge OA advantage from IF alone. IF refers to journals, OA advantage refers to individual articles. The most convincing studies on OA advantage are those with paired comparisons of articles, as Stevan Harnad has explained in detail.

4/ Most of the BMC titles, the ones beginning with the BMC journal of…, are OA completely. For the ones with Toll Access reviews etc., there is obviously much less availability of those portions than the OA primary research, so I doubt the usual review journal effect applies to the same extent as usual.

On July 9, 2005, Matt Cockerill sent a rebuttal to the SPARC-OAForum that said in part:

Firstly, the statistics you give are based on the set of journals that have ISI impact factors (in fact, they cover only journals which had 2003 Impact Factors). . . . Many of BioMed Central’s best journals are not yet tracked by ISI.

Secondly, comparing the percentage of Impact Factors going up or down does not seem a particularly meaningful metric. What is important, surely, is the actual value of the Impact Factor (relative to others in the field). In that regard, BioMed Central titles have done extremely well, and several are close to the top of their disciplines. . . .

Thirdly, you raise the point that review articles can boost a journal’s Impact Factor, and that many journals publish review articles specifically with the intention of improving their Impact Factor. This is certainly true, but of BioMed Central’s 130+ journals, all but six are online research journals, and publish virtually no review articles whatsoever. . . .

No reply yet from Elsevier, but, whether there is or not, I’m sure that we have not heard the last of the "impact factor" argument.

Stevan Harnad has made it clear that what he calls the "journal-affordability problem" is not the focus of open access (this is perhaps best expressed in Harnad et al.’s "The Access/Impact Problem and the Green and Gold Roads to Open Access"). The real issue is the "research article access/impact problem":

Merely to do the research and then put your findings in a desk drawer is no better than not doing the research at all. Researchers must submit their research to peer review and then "publish or perish," so others can use and apply their findings. But getting findings peer-reviewed and published is not enough either. Other researchers must find the findings useful, as proved by their actually using and citing them. And to be able to use and cite them, they must first be able to access them. That is the research article access/impact problem.

To see that the journal-affordability problem and the article access/impact problem are not the same one need only note that even if all 24,000 peer-reviewed research journals were sold to universities at cost (i.e., with not a penny of profit) it would still be true that almost no university has anywhere near enough money to afford all or even most of the 24,000 journals, even at minimal access-tolls (http://fisher.lib.virginia.edu/cgi-local/arlbin/arl.cgi?task=setuprank). Hence, it would remain true even then that not all would-be users could access all of the yearly 2.5 million articles, and hence that that potential research impact would continue to be lost.

So although the two problems are connected (lower journal prices would indeed generate somewhat more access), solving the journal-affordability problem does not solve the research access/impact problem.

Of course, there are different views of open access, but, for the moment, let’s say that this view is the prevailing one and that this is the most compelling argument to win the hearts and minds of scholars for open access. Open access will rise or fall based on its demonstrated ability to significantly boost impact factors, and the battle to prove or disprove this effect will be fierce indeed.