Research evaluation in context 1: Introducing research evaluation in the Netherlands
How is research being evaluated in the Netherlands? Why in that way? Why would the Dutch want to evaluate research anyway when it is done like that? What is an evaluation really about? No, but really? And how do you compare between…? You don’t? And consequences? Not??
Any conversation about research evaluation in the Netherlands risks developing along these lines. The Dutch way of evaluating academic research may not be unique, but it is certainly uncommon, and not fully understood.
As a member of the working group for the monitoring and further development of the evaluation protocol – and as an employee of CWTS – let me provide some insight and context. In a series of blog posts I will discuss the evaluation procedure and the evaluation goals as described in the current protocol for the evaluation of research units. I will also look at the bigger picture and pay attention to the context in which the evaluation protocols have been developed and function.
A brief summary and an outlook on upcoming blog posts
One way to summarize the core of the Dutch approach is “evaluation in context.” The Strategy Evaluation Protocol 2021-2027 (SEP 2021-2027) describes the process, methods and aims for the evaluation of academic research units. It stresses that research units are evaluated in light of their own aims and strategy. It also mentions that institutional policies and disciplinary practices are relevant and need to be taken into account. Research units are thus being evaluated in context.
The larger context in which the protocol has been developed and is used should also be taken into account when trying to understand research evaluation in the Netherlands. An evaluation of a research unit is not a stand-alone exercise; the protocol positions the evaluation in the context of ongoing research quality assurance in the research organisation. Changes across the four SEP protocols so far can be understood in the context of trends and developments in academia and research policy. Insight into the landscape and governance of public research organisations in the Netherlands provides context and helps to explain why there are several protocols for the evaluation of public research organisations. Apart from the SEP, there is a protocol (in Dutch) for the universities of applied sciences; another one for some non-academic public research organisations; and ad-hoc protocols for other public research organisations. Finally, why we evaluate research units at all should be understood in the context of the Higher Education and Research Act: it is the law.
In previous blog posts, colleagues introduced Evaluative Inquiry (I, II, III, and IV). The Evaluative Inquiry method has been put into practice in a number of projects, supporting research units preparing for a SEP evaluation. These blog posts have already provided some insight into this Dutch approach of contextual evaluation. One of these blog posts was titled “Evaluating research in context” – very similar to the title of this series.
Evaluating Research in Context
The title of this series is “research evaluation in context.” This is not coincidental. It is a reference to Evaluating Research in Context (ERiC), a project that ran a decade ago in the Netherlands. It was a collaboration between a number of organisations, including the Association of Universities in the Netherlands (VSNU), the Association of Universities of Applied Sciences (currently: Vereniging Hogescholen), the Netherlands Organisation for Scientific Research (NWO), the Royal Netherlands Academy of Arts and Sciences (KNAW) and my previous employer, the Rathenau Instituut.
ERiC was specifically dedicated to the evaluation of societal relevance. The project was positioned in the context of the evaluation protocols for academic research (the Standard Evaluation Protocol 2009-2015) and for research at the universities of applied sciences (the Brancheprotocol Kwaliteitszorg 2009-2015). In line with these protocols, ERiC stressed that research should be assessed in context. The context in which research units operate differs from one area of research, discipline, or organisation to another. Another context is provided by the mission of the unit, and this will differ as well between units, even if they operate in the same area of research, discipline, or organisation. As mentioned in a study (in Dutch) that informed the development of the first Standard Evaluation Protocol 2003-2009, comparing seemingly similar research units is like comparing “coal with eggs.” Apples and oranges apparently didn’t cover the difference well enough. The consequence of evaluating research in context is that a standard set of indicators would not do justice to every unit. ERiC, again in line with the protocols, advises units to choose indicators that provide evidence and do justice to the research unit and its context.
Next up: 1 protocol, 3 criteria, 4 aspects
This concludes a very brief introduction to research evaluation in context. The next blog post will introduce the current protocol for the evaluation of academic research units, the Strategy Evaluation Protocol 2021-2027. It will present the evaluation criteria and describe four additional aspects that need to be addressed during an evaluation.