Navigating Responsible Research Assessment Guidelines

Responsible Research Assessment (RRA) is discussed and used in many contexts. However, RRA does not have a unifying definition, and its guidelines likewise indicate that its implementation can take many different scopes.

Research assessment has a long history of continuously introducing new methods, tools, and agendas: peer review of publications dates back to the 17th century, and catalogues from the 19th century facilitated publication counting. This blog post introduces Responsible Research Assessment (RRA), an agenda gaining attention today, and discusses how to navigate RRA guidelines, which can be a complex task.

What is Responsible Research Assessment (RRA)?

A search for definitions of RRA resulted in three definitions.

Two of the definitions focus on principles for working with metrics, and the third on supporting a diverse and inclusive research culture through research assessment. All are valid, but a unifying definition is still lacking. The terminology also varies: assessment and evaluation, or metrics and indicators, are used interchangeably.

It can be difficult to pinpoint exactly what RRA is, and getting an overview of RRA is equally complex. One approach, however, is to consult RRA guidelines that explain RRA and guide its implementation. Some internationally well-known RRA guidelines are the San Francisco Declaration on Research Assessment (DORA), the Leiden Manifesto, the Hong Kong Principles, and SCOPE.

A common starting point for RRA and its guidelines is that traditional quantitative research assessment, with its emphasis on bibliometric indicators, may be easy to apply but carries many biases. Criticism of traditional indicators is seen, for example, in DORA, which recommends not using Journal Impact Factors in research funding, appointment, and promotion considerations.

Traditional quantitative research assessment is indicator- or data-driven, meaning that popular indicators, for example the Journal Impact Factor, or easily available data are the starting points of assessments. RRA, in contrast, focuses on the entity to be assessed and starts with what seems to be lacking in traditional quantitative research assessments, for example, the values (cf. SCOPE) or the research integrity (cf. the Hong Kong Principles) of the entity to be assessed.

Who uses RRA guidelines?

Universities’ adoption of RRA guidelines is relatively new, and many universities use DORA or the Leiden Manifesto, sometimes to develop local RRA policies. DORA and the Hong Kong Principles can be endorsed on their websites, and the long lists of signatories show that not only universities but also other institutions from the research sector support RRA, for example, funders, publishers, learned societies, and governmental agencies. Individuals are also among the signatories.

RRA guidelines are not only relevant at the individual, local, and national level. The European Commission has published an agreement on how to reform research assessment. The RRA guidelines contribute to the basis for the reform, and the guidelines are among the tools for the practical implementation of the reform.

What are the scopes of RRA guidelines?

For institutions or individuals new to RRA, it can be difficult to navigate the guidelines. Which guidelines are relevant? What are the scopes of the guidelines? And how are the guidelines applied?

To answer these questions, the Evaluation Checklists Project Charter and its Criteria for Evaluation Checklists are useful. The criteria were developed by experts in evaluation research with the mission to “advance excellence in evaluation by providing high-quality checklists to guide practice” and the vision “for all evaluators to have the information they need to provide exceptional evaluation service and advance the public good”.

Using the criteria “the RRA guideline addresses one or more specific evaluation tasks” and “the RRA guideline clarifies or simplifies complex content to guide performance of evaluation tasks”, it becomes apparent that the four guidelines mentioned earlier differ in their scopes: SCOPE aims to improve the assessment process, the Hong Kong Principles aim to strengthen research integrity, the Leiden Manifesto stresses accountability in metrics-based research assessment, and DORA focuses on the assessment of research publications as well as other types of output. (See also this poster from the Nordic Workshop on Bibliometrics and Research Policy.)

How easy are RRA guidelines to use?

Above, it is shown that the first criteria can help clarify the scope of an RRA guideline. Whether a guideline is easy to use may be addressed by the next sections of the Criteria for Evaluation Checklists: Clarity of Purpose, Completeness and Relevance, Organization, Clarity of Writing, and References and Sources.

The criteria on Clarity of Purpose especially address how to use a checklist, i.e., the process of applying a guideline rather than simply the result of using it. Not all four RRA guidelines discussed here are clear on all of these criteria. Here are some examples of how to meet these criteria and, thus, help the user apply a guideline:

SCOPE discusses the criterion “the circumstances in which it [the guideline] should be used” and concludes that research assessment, and thus the use of SCOPE, is not always the right solution. Assessment is not recommended as a way to incentivize specific behaviours. For example, open access publishing would benefit more from making it easy for a researcher to comply than from measuring the researcher’s share of open access publications.

DORA addresses the criterion “intended users”: the sections of the guideline identify its intended users, namely funding agencies, research institutions, publishers, organizations that supply metrics, and researchers.

The Leiden Manifesto and the Hong Kong Principles have relatively clear purposes because of their delimited scopes: accountability in metrics-based research assessment and strengthening research integrity, respectively.

The criteria sections Completeness and Relevance, Organization, Clarity of Writing, and References and Sources further review how well a guideline supports the RRA process. For example, the four guidelines provide illustrative examples and cases, but not all aspects of an assessment task are necessarily covered. And while the guidelines are organized in sections, it is not always clear how this organization supports the RRA process.


RRA does not have a clear definition, and RRA guidelines can be difficult to apply. The Criteria for Evaluation Checklists provide a tool, developed by evaluation researchers, that can help users choose RRA guidelines relevant to their work. Applying the understanding of RRA guidelines offered by the Evaluation Checklists Project Charter may also facilitate a systematic analysis of RRA guidelines that could lead to a clearer definition of RRA.


This work was supported by a travel grant from the Danish Association for Research Managers and Administrators. I wish to thank the participants at the 27th Nordic Workshop on Bibliometrics and Research Policy. Their comments on my poster have served as inspiration for this blog post. Furthermore, discussions with Jon Holm, Special Adviser from Research Council of Norway, have helped define the scope.
