Research(er) assessment that considers open science

A key challenge in the reform of research assessment is to recognise and reward the adoption of open scientific practices. In this post, Anestis Amanatidis reflects on this challenge and invites you to join the conversation in bimonthly online meetings.

Research assessment practices that rely largely on publication-driven evaluations of research(ers) are slowly running out of steam. A remnant of a science system that is largely inward-focused and output-oriented, these assessments paint a rather monochrome picture of science, one that is not fit for today’s developments reconfiguring the relationship between science and society.

One such reconfiguration that enjoys a lot of attention at the moment comes under the banner of open science. It has become a powerful device for reconfiguring scientific research and is currently the centre of immense policy attention on all levels:

In its Recommendation on Open Science, UNESCO describes it as a policy framework for addressing inequalities produced through science, technology and innovation vis-à-vis environmental, social and economic challenges. The text carries considerable hope for fostering science practices that are open, transparent, collaborative and inclusive, grounded in the core values of quality and integrity, collective benefit, equity and fairness, and diversity and inclusiveness.

The European Commission has also embraced open science as a policy priority, with considerable investment in the development of the ‘European Open Science Cloud’ as a central platform for publishing, finding and reusing research assets. At the same time, European funding prioritises open science practices in the form of sharing publications and data ‘as open as possible and as closed as necessary’.

Indeed, these policy investments seem to be bearing fruit, as can be seen in the formation of national bodies, such as Open Science Netherlands, heralded as supporting the advancement of open science at the national level, and in the evident inclusion of open science in university strategies.

Nonetheless, open science is also a terribly ambiguous ‘umbrella term’ that is enacted in multiple ways across different research communities. Conceptions range from purely output-oriented notions centred on open access to process-oriented priorities, such as engaging non-academics in knowledge production. As my colleagues Ismael Rafols and Ingeborg Meijer summarise in their blog post on monitoring open science:

"A quick look at policy documents reveals striking diversity and ambiguity in the focus and scope of OS initiatives. Different stakeholders emphasise different goals: increasing accessibility or efficiency, fostering flows across academic silos, engaging non-academics, democratising science within and across countries…"

When relating this ambiguity to research(er) assessment, it becomes evident that the proliferation of open science practices challenges the monochromatic character of publication-driven research(er) assessments, and urges that research assessment be reconsidered in the light of open science. Luckily, there has been much work on making research(er) assessment more polychromatic. This work was spearheaded by initiatives such as DORA, stabilised with the formulation of the Hong Kong Principles and has recently been institutionalised in the Coalition for Advancing Research Assessment (CoARA), a massive undertaking that supports the adoption of responsible research assessment practices across knowledge-producing organisations. Indeed, CoARA’s description on its webpage summarises briefly what responsible research assessment means for the coalition:

"Our vision is that the assessment of research, researchers and research organisations recognises the diverse outputs, practices and activities that maximise the quality and impact of research. This requires basing assessment primarily on qualitative judgement, for which peer review is central, supported by responsible use of quantitative indicators."

Like open science, responsible research assessment describes a broad range of aspirations in research assessment, largely focused on the responsible use of research metrics in assessment contexts and on fostering an inclusive and diverse research culture in science through assessment. These developments recognise and promote practices that paint a more polychromatic picture of science, in which research(ers) are recognised and rewarded for more than the outputs they produce.

Both developments – open science and responsible research assessment – have a unique opportunity to strengthen and reinforce one another. Research assessment reform can provide institutional incentives for the adoption of open scientific practices in research, and the diversity of practices promoted under open science can serve the reform of research(er) assessment. However, despite this promising interrelation, the two movements risk developing in parallel as long as questions about the aspirations of open science are not discussed with the affordances of responsible research assessment in mind.

Indeed, there is ongoing work to illuminate these complexities in the form of various European Commission-funded projects. How research(er) assessment that considers open science can play out in practice is being researched by GraspOS, a project concerned with how infrastructures afford the uptake of research assessment that values open science. The project OPUS focuses on indicators for the assessment of researchers in the context of open science. PathOS tries to better understand the impacts of open science. Similarly, the SUPER MoRRI project developed an evaluation and monitoring framework for responsible research and innovation in Europe that, in many ways, also relates to the reconfigurations in knowledge production posited under the banner of open science.

Early reflections from these projects and related communities can be framed with the help of Sabina Leonelli’s recent book (2023). She describes open science as backed by a rationale that relies on a strong orientation towards the sharing of objects of science. These objects do not necessarily describe how research is done, but are themselves (by)products of research, including data, publications, models, code et cetera. In this notion of science, access to existing scholarship and related objects becomes a condition for successful open science.

Indeed, this notion of open science is widely shared and is often reinforced in research(er) assessments that consider open science through the enrolment of object-oriented knowledge in the form of indicators describing, for example, open access outputs. However, such assessments make visible only a very narrow interpretation of openness in science and obscure descriptions of how research is done – a key consideration for recognising and rewarding researchers for responsible conduct of research, transdisciplinary work and other ‘broader’ interpretations of openness found in the UNESCO conceptualisation, which often remain invisible.

This gives rise to interesting questions about capturing immaterial properties of research, in particular when it comes to shared knowledge production processes and how they reconfigure values and relations or bring up matters of collective concern. These dimensions of research are still widely underrepresented and unaddressed in research(er) assessments in the context of (open) science.

In conclusion, research(er) assessment that considers open science is still at a nascent stage. It is therefore critical to collate and exchange experiences of practical attempts to conduct such assessment. At GraspOS, we will take up these questions in bimonthly online meetings starting 18 October 2023 and present ongoing stories from universities, national funders and thematic research clusters to discuss the multiple ways in which (responsible) research assessment considers open science.

We hope to hear stories about the issues, frustrations and successes of research assessment in relation to open science. Ultimately, the goal of this community is to create a bouquet of stories from which we can learn and draw inspiration for our own research assessments. If this is of interest to you, please feel free to register here and share this post.
