During my undergraduate studies at the University of Florence, I had the honour of being mentored by the late Prof. Ivano Bertini. Prof. Bertini was “unusual”: energetic, funny, passionate and sometimes very hard to deal with. All this made him the amazing scientist and person he was. In a country where it was (and still is) very difficult to gather funding for scientific research, he managed to create the CERM, a centre of excellence for NMR-based research. I remember him getting very angry about something that I didn’t grasp at the time, but whose importance I came to understand over the years.
The Professor used to say something like: “Don’t make your students publish in journals with very low impact factors. This will affect their future careers”.
This is a bold statement, and I reckon that in the current “publish or perish” environment it is often not possible to follow it (and, in very specialised scientific fields, perhaps not even advisable, as I will discuss later in the post). But whether or not Prof. Bertini was right, this was the first time I heard somebody speak about the impact factor.
I did not understand the relevance and importance of the Professor’s remark until later in my career, but from the day I started teaching, I made a promise to myself that I would always explain to my students the concept and importance of the impact factor. This has led me to discover that the majority of undergraduate students (sometimes even in their fourth year) have no clue what an impact factor is. Shocking! Why is this not explained sooner? The current scientific community is built on the impact of its scientific publications. Grants are won based on impact factors; academic positions are awarded based on them, too. Whether or not you agree that this is the best system does not matter: scientists are all bound to it one way or another, and it is THE recognised system, by which everyone must abide.
Let’s step back a moment and look into the origin and meaning of the impact factor. The concept was first introduced in 1955 by Dr. Eugene Garfield, who also founded the Institute for Scientific Information (ISI), now part of Thomson Reuters. Thomson is famous for having introduced citation indexes that made it possible to generate computer-compiled statistical reports, such as the Journal Citation Reports (JCR), in which impact factors are published annually.
The Web of Science offers the following definition of the impact factor:
“it [the impact factor] is a measure of the frequency with which the “average article” in a journal has been cited in a particular year or period. The annual JCR impact factor is a ratio between citations and recent citable items published. Thus, the impact factor of a journal is calculated by dividing the number of current year citations to the source items published in that journal during the previous two years.”
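To make that definition concrete, here is a worked example with entirely hypothetical numbers (invented purely for illustration): suppose a journal published 200 citable items in total across 2015 and 2016, and that publications indexed in 2017 cited those items 800 times. Its 2017 impact factor would then be:

    2017 impact factor = (citations in 2017 to items published in 2015 and 2016)
                         / (citable items published in 2015 and 2016)
                       = 800 / 200
                       = 4.0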
Impact factors are comparative tools: they have little meaning in isolation, but become useful when comparing journals that publish on the same scientific topic. Journals belonging to different areas of research should never be compared on the basis of their impact factors; such a comparison is meaningless. For example, Angewandte Chemie, one of the most renowned chemistry journals, has an impact factor of about 11, whereas Cell, which is oriented towards biological research, has an impact factor of about 32. Nevertheless, both journals are leaders in their respective fields.
Something else to consider is that very specialised journals tend to be cited less, simply because fewer researchers work in those narrow fields.
The concept of the impact factor has generated controversy over the years. A very interesting paper by Professor P. O. Seglen (which I thoroughly recommend, see here) discusses the abuse and misuse of the impact factor to judge not only the journal in which a paper is published, but also the author’s scientific achievements. The impact factor has lately been used to establish the suitability of a researcher for a given academic appointment, or to decide how resources are allocated. This comes at the expense of the specialist journals mentioned above: researchers tend to target journals with higher impact factors, even when a specialised journal would be more effective at reaching a genuinely interested audience.
The inventor of the impact factor himself says:
“In 1955, it did not occur to me that “impact” would one day become so controversial. Like nuclear energy, the impact factor is a mixed blessing.”
However we might feel about it, for now the impact factor remains the best instrument we have to evaluate research. It is a tool widely used and accepted by the entire scientific community, and it should therefore be introduced and taught more thoroughly to young scientists at the very beginning of their careers. Educators in all fields, please reflect on this and take action.
References:
http://wokinfo.com/essays/impact-factor/
P. O. Seglen, Why the impact factor of journals should not be used for evaluating research, BMJ, 314, 498-502, 1997
E. Garfield, The Agony and the Ecstasy: The History and Meaning of the Journal Impact Factor, International Congress on Peer Review and Biomedical Publication, Chicago, September 16, 2005
E. Garfield, The meaning of the Impact Factor, International Journal of Clinical and Health Psychology, 3(2), 363-369, 2003
K. Satyanarayana and A. Sharma, Impact factor: Time to move on, Indian Journal of Medical Research, 127, 4-6, 2008