"The presented results pertain to what we believe to be the largest and most thorough survey of usage- and citation based measures of scientific impact."
I agree with the above statement, and I definitely like this thorough comparison of a large number of "impact measures." The article is also informative and introduces in detail many methods for the evaluation of scientific literature with which I was not familiar.
However, on reading the article, I had the following concerns regarding its reliability:
1) Most readers will read only the abstract, especially because the article is full of statistical and technical terms. The conclusion of the abstract is not very informative and poorly represents the insightful discussion at the end of the article.
In particular, I would have preferred a positive rather than a negative conclusion. I would have preferred a recommendation of which measures correlate best with each of the multiple dimensions of scientific attributes (e.g., quality, prestige, impact, immediacy, etc.) rather than the (by now rather obsolete) conclusion that the JIF is not optimal and should be "used with caution." I have read more than 20 articles and editorials (including those in Science, JBC, JCI, and PLoS journals) written in the past two years that state that the JIF should be used with caution.
2) I agree with the authors that the JIF is misused; however, a metric cannot be blamed for failing at something it was never meant to represent. I believe the authors have given too much importance to the JIF (probably because of its "impact" on the scientific community), and I think this has affected the objectivity of the paper.
3) Despite what the authors state in the introduction, I am still not sure how "scientific impact" is defined. Is it "journal impact," "article impact," or "scientist impact"? And which of these matters more? The authors seem to have committed the same unfair comparison that the JIF and SJR do: measuring articles, scientists, and even "science" itself by the journals rather than by the articles. Journal-level metrics simply mean that an article is evaluated mostly prior to its publication. Once a scientist "makes it to Science or Nature," he or she celebrates, even if the article will never be cited again!
4) Once more, I declare my agreement with the authors that the JIF is neither the most accurate nor the fairest way to measure scientists, articles, or even journals. However, this "conclusion" is already clearly stated in the introduction (quoted below). Why, then, the analysis?
"The JIF is now commonly used to measure the impact of journals and by extension the impact of the articles they have published, and by even further extension the authors of these articles, their departments, their universities and even entire countries. However, the JIF has a number of undesirable properties which have been extensively discussed in the literature [2], [3], [4], [5], [6]. This had led to a situation in which most experts agree that the JIF is a far from perfect measure of scientific impact but it is still generally used because of the lack of accepted alternative"
5) One final concern/question.
Citation-based metrics take into consideration journals that are technologically behind (for many "non-science-related" reasons, including funding problems, poor management, being published in a developing country, etc.) and thus do not have well-established websites but are still citable and cited. Do the "usage-based metrics" simply ignore those journals?