Subject: Measuring the Impact of Scholarship .. 12/15/2016 05:32:52 PM 
Marcie Granahan
Posts: 11
Despite scholarly communication’s “love-hate relationship” with rankings, impact metrics aren’t going away anytime soon. Last week Elsevier announced CiteScore, a free citation reporting service for the academic community whose journal metric is calculated much like the Impact Factor reported in Clarivate’s (formerly Thomson Reuters) annual Journal Citation Reports [see full article here].

Based on roughly 22,000 Scopus sources, the CiteScore metric covers about twice as many journals as the Journal Citation Reports and primarily differs from the Impact Factor in that it (a short worked sketch follows this list):
  • counts non-research materials, equally weighted, in its citation calculation;
  • is based on citations made in a given year to documents published in the past three years (as opposed to the past two years for the Impact Factor);
  • provides free access to citation and document counts; and
  • is calculated monthly.
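
To make the difference concrete, here is a minimal sketch of the two calculations in Python. The document and citation counts below are invented for illustration; they are not real data for any journal.

    # A minimal sketch contrasting the two calculations.
    # All counts are hypothetical, for illustration only.

    def cite_score(citations_3y, docs_3y):
        """CiteScore: citations in year Y to documents published in
        Y-3..Y-1, divided by ALL documents (research articles plus
        editorials, letters, news, etc.) published in Y-3..Y-1."""
        return citations_3y / docs_3y

    def impact_factor(citations_2y, citable_items_2y):
        """Impact Factor: citations in year Y to items published in
        Y-2..Y-1, divided by 'citable items' (mainly articles and
        reviews) published in Y-2..Y-1."""
        return citations_2y / citable_items_2y

    # Hypothetical journal with heavy front matter:
    research_items = 300   # articles and reviews in the window
    front_matter = 200     # editorials, letters, news items
    citations_3y = 1500    # citations received in year Y (3-year window)
    citations_2y = 1100    # citations received in year Y (2-year window)

    print(cite_score(citations_3y, research_items + front_matter))  # 3.0
    print(impact_factor(citations_2y, research_items))              # ~3.67

Because the CiteScore denominator counts every document type, a journal that publishes a great deal of front matter sees its score diluted relative to its Impact Factor, which is exactly the bias critics raise below.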
Since its release, CiteScore has drawn criticism, most notably for a potential conflict of interest (can a journal publisher fairly control a metric by which journals, including its own, are assessed?) and for its citation-calculation model. Many esteemed, high Impact Factor journals perform very poorly under CiteScore, which tends to be biased against journals that publish large amounts of lesser-cited ‘non-research’ material, such as editorials, news, letters and other front-matter content [see full article here].

While CiteScore and the Impact Factor are helpful, albeit imperfect, forms of measurement for editors and publishers, are such measures achieving the desired effect of producing excellent science and scientific breakthroughs? Too often the focus falls on whether research is published in prestigious journals, rather than on the more costly and time-consuming work of evaluating the actual content of the publications themselves.

With the United Kingdom’s Research Excellence Framework (REF) linking public funding to assessed research excellence, many European countries have implemented similar measures to quantify excellence in their national science systems.

Slovenia is one of the EU countries that has adopted quantitative metrics to score and evaluate the research outputs of nearly all Slovenian scientists. Using an automated algorithm, SICRIS (the Slovenian Current Research Information System) produces a quantitative score for individual researchers based on research effectiveness, citation success, and success in securing projects and funding [see full article here]. But even this system is not without flaws and criticism.
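
The post does not spell out the SICRIS algorithm, so the component names and weights below are purely hypothetical; the sketch only illustrates the general shape of a composite researcher score built from the three dimensions just mentioned.

    # Purely hypothetical composite score in the spirit of SICRIS;
    # the real algorithm and weights are not given in this post.

    def composite_score(research_effectiveness, citation_success,
                        funding_success, w_r=0.5, w_c=0.3, w_f=0.2):
        """Weighted sum of three sub-scores, each normalized to 0-100."""
        return (w_r * research_effectiveness
                + w_c * citation_success
                + w_f * funding_success)

    print(composite_score(80, 65, 70))  # 73.5

Any such weighting scheme bakes value judgments into the score, which is one reason even automated national systems like SICRIS attract criticism.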

Ultimately, while CiteScore and the Impact Factor focus on journal prestige, article-level and alternative metrics, such as Altmetric, EBSCO’s Plum Analytics, and Impactstory, look at the fuller impact of an individual article, including downloads, views, inclusion in collaboration tools, and social media sharing. Article-level and alternative metrics complement traditional citation analytics and support broader approaches to measuring the impact of research [see full article here].

Even Elsevier concedes that CiteScore is just one measure in what it calls its “basket of metrics”, and it plans to add alternative and usage metrics, as well as to explore whether the quality of a journal’s peer-review process can be quantified [see full article here].

Clearly, there is much work left to be done in measuring the impact of scholarship. While peer review remains a key element in the evaluation and selection of excellent research, metrics can provide comparability and transparency; they do not necessarily measure, much less promote, excellence in science or the achievement of genuinely new discoveries and original results.


Last Edited On: 12/15/2016 05:36:24 PM By Marcie Granahan
 