"The presented results pertain to what we believe to be the largest and most thorough survey of usage- and citation based measures of scientific impact."
I agree with the above statement, and I definitely like this thorough comparison of a large number of "impact measures." The article is also informative and introduces in detail many methods for evaluating scientific literature with which I was not familiar.
However, on reading the article, I had the following concerns regarding its reliability:
1) Most readers will only read the abstract, especially because the article is full of statistical and technical terms. The conclusion of the abstract is not very informative and poorly represents the insightful discussion at the end of the article.
In particular, I would have preferred a positive rather than a negative conclusion. I would have preferred a recommendation of which measures correlate best with each of the multiple dimensions of scientific attributes (e.g., quality, prestige, impact, immediacy, etc.) instead of the (rather obsolete) conclusion that the JIF is not optimal and should be "used with caution." I have read more than 20 articles and editorials (including those in Science, JBC, JCI, PLoS) written in the past two years stating that the JIF should be used with caution.
2) I agree with the authors that the JIF is misused; however, a metric cannot be blamed for uses it was never meant to stand for. I believe the authors have given the JIF too much weight (probably because of its "impact" on the scientific community), and I think this has affected the objectivity of the paper.
3) As the authors state in the introduction, I am still not sure how "scientific impact" is defined. Is it "journal impact," "article impact," or "scientist impact"? And which of these matters more? However, the authors seem to have committed the same unfair comparison that the JIF and SJR do: measuring articles, scientists, and even "science" itself by the journals rather than by the articles. Journal-level metrics effectively mean that an article is evaluated mostly prior to its publication. Once a scientist "makes it to Science or Nature," he or she celebrates even if the article will never be cited again (the small sketch after these comments illustrates why the journal-level number says so little about any individual article).
4) Once more, I declare my agreement with the authors that the JIF is neither the most accurate nor the fairest way to measure scientists, articles, or even journals. However, this "conclusion" is already clearly stated in the introduction (quoted below). Why, then, the analysis?
"The JIF is now commonly used to measure the impact of journals and by extension the impact of the articles they have published, and by even further extension the authors of these articles, their departments, their universities and even entire countries. However, the JIF has a number of undesirable properties which have been extensively discussed in the literature [2], [3], [4], [5], [6]. This had led to a situation in which most experts agree that the JIF is a far from perfect measure of scientific impact but it is still generally used because of the lack of accepted alternative"
5) One final concern/question.
Citation-based metrics take into consideration journals that are technologically behind (for many "non-science-related" reasons, including funding problems, poor management, being published in a developing country, etc.) and thus do not have well-established websites but are still citable and cited. Do "usage-based metrics" simply ignore those journals?
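To make concern 3) concrete, here is a small, purely hypothetical sketch (the citation counts below are invented for illustration, not taken from the article) of how a JIF-style two-year average can be dominated by a handful of highly cited papers while saying almost nothing about the typical article in the journal:

```python
from statistics import median

# Hypothetical per-article citation counts (invented for illustration):
# citations received in 2008 by the items a fictional journal published in 2006-2007.
citations = [310, 95, 40, 12, 8, 5, 4, 3, 2, 2, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

# A JIF-style measure for 2008: total 2008 citations to 2006-2007 items,
# divided by the number of citable items published in those two years.
jif_like = sum(citations) / len(citations)

print(f"JIF-like average: {jif_like:.1f}")       # roughly 24
print(f"Median article:   {median(citations)}")  # 1.5
```

The gap between the journal-level average and the typical article is exactly why judging an individual paper, or its authors, by the journal in which it appears is so unreliable.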
Saturday, June 27, 2009
Failures of citation-based rating - new analysis
Saturday, June 20, 2009
Need for qualitative assessment of biomedical research
Here comes a new PLoS ONE article describing one of the most authoritative analyses of research impact, by none other than the Wellcome Trust. The research conducted by experts at the Trust concludes that authoritative opinions about a published research finding constitute an important benchmark of the quality of biomedical research. These data vindicate the stand of advocates of post-publication peer review (and I am one humble volunteer among them) that modern qualitative indicators are essential for judging the impact of biomedical research findings. This article supports and strengthens the cause not only of 'Faculty of 1000' but also, indirectly, of PLoS ONE. The latter is no doubt the most successful forerunner of the idea of post-publication peer review and qualitative assessment, harnessing web 2.0-based semantic tools for such purposes. At this critical juncture, it is time for the institutions concerned to reconsider their practice of evaluating the research productivity of scientists on the basis of bibliometric indices (such as the 'impact factor') alone.
Friday, June 19, 2009
New genome article added to 'PLoS ONE prokaryotic genome collection'
Standards for genome data reporting: how should we go about it?
As a next critical step, the GSC is now starting to ask journals to require that new genome/metagenome publications be accompanied by completed 'Minimum Information about a Genome Sequence (MIGS/MIMS)' reports.
This sounds like a wonderful proposition, and I guess PLoS journals could lead the way here, as they already insist on adherence to certain other standards (such as MIAME for reporting microarray data). Up to this point, it is all fine. But some people feel that 'monopolizing' standards could be a kind of 'suffocation'. However, I am sure this will not lead to the kind of 'suffocating monopoly' created by certain 'nomenclature commissions' and their 'mouthpiece journals' in the area of taxonomy and systematics.
I discussed this with one of my friends, a genomics/bioinformatics expert, and he says: ".. my problem with standards is not only the monopoly, but also that it is really hard to set a minimal set of metadata that needs to be entered per genome. I suffer from the lack of organized metadata; but once the entry is enforced, people will just start putting in anything to fill the tables and get their data out, which will lead to the opposite of what standards are supposed to achieve".
Given the above, it is clear that some discussion and brainstorming is nevertheless required before journals start insisting on MIGS/MIMS reports. I cannot find a better place than PLoS ONE (sandbox) to discuss and resolve such issues.
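To give a feel for what filling such a report actually involves, and for why half-hearted entries would defeat its purpose, here is a minimal, hypothetical sketch of a MIGS/MIMS-style record with a simple completeness check. The field names below are illustrative stand-ins, not the official GSC checklist terms:

```python
# Hypothetical, simplified MIGS/MIMS-style metadata record.
# Field names are illustrative only; the real checklist is defined by the GSC.
REQUIRED_FIELDS = [
    "project_name",
    "investigation_type",    # e.g. genome, metagenome
    "geographic_location",
    "collection_date",
    "environment",
    "sequencing_method",
    "assembly_method",
]

record = {
    "project_name": "Example soil metagenome",
    "investigation_type": "metagenome",
    "geographic_location": "unknown",   # a placeholder value that defeats the standard
    "collection_date": "2009-06-01",
    "environment": "soil",
    "sequencing_method": "454 pyrosequencing",
    # "assembly_method" is missing entirely
}

missing = [f for f in REQUIRED_FIELDS if not record.get(f, "").strip()]
vague = [f for f, v in record.items() if v.strip().lower() in {"unknown", "n/a", "na", "none"}]

print("Missing fields:", missing)   # ['assembly_method']
print("Vague fields:  ", vague)     # ['geographic_location']
```

A check like this can catch empty fields, but, as my friend points out, it cannot stop people from filling the tables with placeholder values just to get their data out; that is why agreement on what the fields mean, and on how seriously they will be curated, matters as much as enforcement.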
Saturday, June 13, 2009
New F1000 Evaluation of a PLoS ONE article
Myers BR, Sigal YM, Julius D
PLoS ONE 2009 4(5):e5741 [abstract on PubMed] [citations on Google Scholar] [related articles] [FREE full text]
References: {1} McKemy et al. Nature 2002, 416:52-8 [PMID:11882888]. {2} Peier et al. Cell 2002, 108:705-15 [PMID:11893340].
Wednesday, June 10, 2009
Single Cell Genomics: New PLoS ONE article evaluated at Faculty of 1000
BLoG ONE has moved to WordPress
PLoS ONE Prokaryotic Genome Collection - now launched
I am excited to tell you about the latest collection of high-impact articles, the PLoS ONE Prokaryotic Genome Collection. Liz Allen of PLoS has some more things to say … read her full blog post here.
There is an editorial overview that accompanies the new collection; it’s written by me. Comments related to the collection and the ‘overview’ have started to trickle in, such as this one by Dr Ramy Aziz:
“This article lists very interesting challenges and questions that will be answered in the next decade of this millennium. With the revolution stirred by next-gen sequencing machines, sequencing/resequencing steps have become quick and cheap. Thus, data generation is the least part to worry about. However, as the article appropriately discusses, the problem is what to sequence and then how to make sense out of the piles.
We will very soon have 5,000 fully sequenced prokaryotic genomes, but, as quick annotation tools are being developed, we realize very well that more genomes annotated = more errors propagated.
In addition to high-speed and high-performance …” … Read more here.