Evaluative Inquiry II: Evaluating research in context
We know that academic knowledge production happens in context, yet when assessing research we undervalue the influence of stakeholders and organizational contexts on academic output and impact. This second of four blog posts is about evaluating research in context.
In a series of blog posts, we want to introduce the four principles of a new CWTS approach to research evaluation. Since 2017, we have been developing this approach in several projects, mainly assisting research organizations in putting together the self-evaluation documents required by the Standard Evaluation Protocol. On the basis of these experiences, most notably the projects with the Protestant Theological University and the University for Humanistics, we want to describe the four underpinnings of this Evaluative Inquiry: an open-ended concept of research value; contextualization; a mixed-method approach; and a focus on both accounting and learning. This second post focuses on contextualizing value.
The previous post described how the Evaluative Inquiry does not follow a predetermined understanding of value as performance metrics, but instead investigates value as a quality that comes into being in trajectories running from research ambitions and organization to reception and use in the wider world. This post focuses on the two contexts of these trajectories: the research organization and the user and stakeholder context.
To start with the latter: when societal relevance was added to the evaluation protocols in the Netherlands and the United Kingdom, a rather uniform concept of stakeholders developed in the science policy community, casting them as societal recipients of the benefits of fundamental research and secondary audiences for scientific output. Even though this group was very diverse, ranging from teachers and transportation companies to the elderly and provincial governments, these stakeholders were positioned as relatively passive recipients of academic work. Our experiences with the theologians and humanists present a different picture. Theologians and humanistic scholars working on, for example, spiritual care collaborate closely with practitioners, policy makers and caregivers in that field. Their work addresses questions that stem from professional, policy or academic environments, such as moral injury or distress, and it is taken up both in professional circles (professional handbooks or policy guidelines, for example) and by academic audiences (papers on the spiritual in care). The distinction between scientific production and societal use becomes blurry.
The fundamental point here is that at every stage of the knowledge trajectory, from question to answer, from production to communication and reception, from input to output, the scholarly and the societal are closely intertwined. What do these collaborations look like, how do they result in particular kinds of output, and how can they be made visible in evaluative projects? The Evaluative Inquiry aspires to make visible these “productive interactions” within a shared problem space, along with their particular versions of relevance, excellence or expertise. In this it builds on other attempts, such as the Quality and Relevance in the Humanities system, developed for research evaluation in the humanities, which seeks to broaden the range of outputs, use and recognition.
Thinking of value pathways that integrate the societal and the scholarly, and knowledge production with its communication and reception, brings into view the academic organization as a crucial context as well. Research evaluation is often concerned with the use, reception and impact of academic knowledge, treating the organization of knowledge production, “its viability,” as an afterthought to research value. We contend that the organization of the knowledge production process is crucial for understanding what knowledge is being produced and to what effect.
There are many variables in this organizational context that matter to specific outputs, relevance and reception. Organizational histories, publication cultures, teaching versus research obligations, funding sources and epistemic cultures all influence the processes and practices of academic research. Research organizations often bring together (sub)disciplines with differing epistemic commitments. These groups (development studies versus anthropology, humanists versus social scientists, or denominational loyalties in theology) do not share the same ideas about what good knowledge or impact looks like. They serve very different audiences and tend to debate the quality and relevance of the books and articles produced on the other side of these commitments.
Moreover, understanding the logics of organizations and how they have distributed collective tasks makes us realize that value and relevance require work. Since time is scarce, choices have to be made and priorities set. Our anthropology case suggests that many anthropologists, working on contracts of 80% teaching and 20% research, need to be conservative with their time investments. This means that finishing an article trumps the desire to participate in experimental collaborations, experimental forms of science communication or experimental research directions.
Finally, as disciplines, theology and anthropology are critical and reflective practices. What is value, what is impact, what is transformation, where is society? It is these critical interrogations of terms, and of the systems in which they make sense, that define anthropology, theology and many other SSH domains. This critical interrogation of the political and epistemological underpinnings of social forms is a crucial contribution to public discussion, as social science and humanities scholars keenly emphasize. Because these contributions are difficult to quantify and make visible within the parameters of the evaluation protocol, they require an approach different from the traditional view of research results ubiquitous in research evaluation.
A contextualized approach to research evaluation requires a smart combination of evaluation tools and methods. In our next post, we will explain how Evaluative Inquiry tailors its use of different qualitative and quantitative methods to the research group under evaluation.