There is a
pressing need to improve the ways in which the output of scientific research is
evaluated by funding agencies, academic institutions, and other parties.
To address
this issue, a group of editors and publishers of scholarly journals met during
the Annual Meeting of The American Society for Cell Biology (ASCB) in San Francisco, CA,
on December 16, 2012. The group developed a set of recommendations, referred to
as the San Francisco Declaration on Research Assessment. We invite
interested parties across all scientific disciplines to indicate their support
by adding their names to this Declaration.
The outputs
from scientific research are many and varied, including: research articles
reporting new knowledge, data, reagents, and software; intellectual property;
and highly trained young scientists. Funding agencies, institutions that employ
scientists, and scientists themselves all have a desire and a need to assess
the quality and impact of scientific outputs. It is thus imperative that
scientific output is measured accurately and evaluated wisely.
The Journal
Impact Factor is frequently used as the primary parameter with which to compare
the scientific output of individuals and institutions. The Journal Impact
Factor, as calculated by Thomson Reuters, was originally created as a tool to
help librarians identify journals to purchase, not as a measure of the
scientific quality of research in an article. With that in mind, it is critical
to understand that the Journal Impact Factor has a number of well-documented
deficiencies as a tool for research assessment. These limitations include the following:
- citation distributions within journals are highly skewed;
- the properties of the Journal Impact Factor are field-specific: it is a
composite of multiple, highly diverse article types, including primary
research papers and reviews;
- Journal Impact Factors can be manipulated (or "gamed") by editorial
policy; and
- data used to calculate the Journal Impact Factors are neither
transparent nor openly available to the public.
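For context on the first of these limitations, the two-year Journal Impact
Factor, as commonly described, is a simple arithmetic mean. The notation below
is ours, introduced only for illustration: C_y(t) denotes the citations
received in year y by items the journal published in year t, and N_t denotes
the number of citable items (chiefly research articles and reviews) the
journal published in year t.

\[
\mathrm{JIF}(y) = \frac{C_{y}(y-1) + C_{y}(y-2)}{N_{y-1} + N_{y-2}}
\]

Because this is a mean taken over a highly skewed citation distribution, a
handful of heavily cited papers can dominate the numerator, so the resulting
figure may be several times larger than the citation count of a typical
(median) article in the same journal.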
Below we
make a number of recommendations for improving the way in which the quality of
research output is evaluated. Outputs other than research articles will grow in
importance in assessing research effectiveness in the future, but the
peer-reviewed research paper will remain a central research output that informs
research assessment. Our recommendations therefore focus primarily on practices
relating to research articles published in peer-reviewed journals but can and
should be extended by recognizing additional products, such as datasets, as
important research outputs. These recommendations are aimed at funding
agencies, academic institutions, journals, organizations that supply metrics,
and individual researchers.
A number of
themes run through these recommendations:
- the need to eliminate the use
of journal-based metrics, such as Journal Impact Factors, in funding,
appointment, and promotion considerations;
- the need to assess research on
its own merits rather than on the basis of the journal in which the
research is published; and
- the need to capitalize on the
opportunities provided by online publication (such as relaxing unnecessary
limits on the number of words, figures, and references in articles, and
exploring new indicators of significance and impact).
We
recognize that many funding agencies, institutions, publishers, and researchers
are already encouraging improved practices in research assessment. Such steps
are beginning to increase the momentum toward more sophisticated and meaningful
approaches to research evaluation that can now be built upon and adopted by all
of the key constituencies involved.
The
signatories of the San Francisco Declaration on Research Assessment support
the adoption of the following practices in research assessment.
General
Recommendation
1. Do not
use journal-based metrics, such as Journal Impact Factors, as a surrogate
measure of the quality of individual research articles, to assess an individual
scientist's contributions, or in hiring, promotion, or funding decisions.
For
funding agencies
2. Be
explicit about the criteria used in evaluating the scientific productivity of
grant applicants and clearly highlight, especially for early-stage
investigators, that the scientific content of a paper is much more important
than publication metrics or the identity of the journal in which it was published.
3. For the
purposes of research assessment, consider the value and impact of all research
outputs (including datasets and software) in addition to research publications,
and consider a broad range of impact measures including qualitative indicators
of research impact, such as influence on policy and practice.
For
institutions
4. Be
explicit about the criteria used to reach hiring, tenure, and promotion
decisions, clearly highlighting, especially for early-stage investigators, that
the scientific content of a paper is much more important than publication
metrics or the identity of the journal in which it was published.
5. For the
purposes of research assessment, consider the value and impact of all research
outputs (including datasets and software) in addition to research publications,
and consider a broad range of impact measures including qualitative indicators
of research impact, such as influence on policy and practice.
For
publishers
6. Greatly
reduce emphasis on the Journal Impact Factor as a promotional tool, ideally by
ceasing to promote the impact factor or by presenting the metric in the context
of a variety of journal-based metrics (e.g., 5-year impact factor, Eigenfactor,
SCImago, h-index, editorial and publication times) that provide a richer
view of journal performance.
7. Make
available a range of article-level metrics to encourage a shift toward
assessment based on the scientific content of an article rather than
publication metrics of the journal in which it was published.
8.
Encourage responsible authorship practices and the provision of information
about the specific contributions of each author.
9. Whether
a journal is open-access or subscription-based, remove all reuse limitations on
reference lists in research articles and make them available under the Creative
Commons Public Domain Dedication.
10. Remove
or reduce the constraints on the number of references in research articles,
and, where appropriate, mandate the citation of primary literature in favor of
reviews in order to give credit to the group(s) who first reported a finding.
For
organizations that supply metrics
11. Be open
and transparent by providing data and methods used to calculate all metrics.
12. Provide
the data under a license that allows unrestricted reuse, and provide
computational access to data, where possible.
13. Be
clear that inappropriate manipulation of metrics will not be tolerated; be
explicit about what constitutes inappropriate manipulation and what measures
will be taken to combat this.
14. Account
for the variation in article types (e.g., reviews versus research articles),
and in different subject areas, when metrics are used, aggregated, or compared.
For
researchers
15. When
involved in committees making decisions about funding, hiring, tenure, or
promotion, make assessments based on scientific content rather than publication
metrics.
16.
Wherever appropriate, cite primary literature in which observations are first
reported rather than reviews in order to give credit where credit is due.
17. Use a
range of article metrics and indicators in personal/supporting statements, as
evidence of the impact of individual published articles and other research
outputs.
18.
Challenge research assessment practices that rely inappropriately on Journal
Impact Factors and promote and teach best practice that focuses on the value
and influence of specific research outputs.