GAIDeT: a practical taxonomy for declaring AI use in research and publishing
Transparency of AI use in academia matters for authors, editors, reviewers, readers and repository moderators. This blog post introduces GAIDeT, a taxonomy for the structured disclosure of Generative AI (GAI) use in research, and explains how it builds trust without placing an extra burden on stakeholders.
Generative Artificial Intelligence (GAI) tools like ChatGPT are increasingly finding their way into research and scholarly publishing. This trend brings a pressing challenge: how do academics clearly disclose the use of AI in their research workflows? Right now, many disclosures are either too vague (e.g. "We used ChatGPT to improve clarity") or missing entirely. Such a lack of precision can undermine transparency and reproducibility, making it harder for readers, reviewers, and editors to assess how AI contributed to the work. In response to this challenge, we developed a new approach to standardise how researchers communicate their use of AI.
What is GAIDeT?
GAIDeT stands for Generative Artificial Intelligence Delegation Taxonomy. This framework was created to help researchers formally describe any assistance they received from AI in the course of their research or publishing processes. Unlike other approaches, such as the CRediT taxonomy (focused on human author roles) or the NIST AI Use Taxonomy (covering AI functions in general domains), GAIDeT is designed specifically for documenting the delegation of tasks to AI within research workflows. It combines the stage of research with the precise role AI played, while ensuring that responsibility always stays with the human researcher. GAIDeT provides a structured checklist for disclosing what was delegated to AI, at which stage, how the AI's output was used, and which AI tool and version were involved.
How does GAIDeT work?
GAIDeT divides the research and publication processes into distinct stages and roles where AI might be involved. Authors are asked to identify the stage in which the AI tool was used – for instance, during idea generation, literature search, manuscript writing, or even data analysis. Stating the stage helps to clarify at what point in the workflow AI contributed. They are also encouraged to describe AI’s role, such as preliminary hypothesis testing, research design, summarising text, translation, bias analysis, etc. By combining both the stage and the role, GAIDeT disclosures make the extent of AI assistance transparent. This approach moves beyond a simple “I/we used AI” or “I/we did not use AI” statement and supports more meaningful discussions about ethics, authorship, and accountability.
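The stage-plus-role pairing described above can be pictured as a simple structured record. This is only an illustrative sketch: the stage and role names below are examples drawn from this post, not the official GAIDeT category list, and `GaidetEntry` is a hypothetical name.

```python
# Illustrative sketch only: GaidetEntry and the category strings are
# examples, not the official GAIDeT vocabulary.
from dataclasses import dataclass

@dataclass
class GaidetEntry:
    stage: str  # research/publication stage, e.g. "Manuscript writing"
    role: str   # task delegated to AI, e.g. "Translation"
    tool: str   # AI tool and version, e.g. "ChatGPT-4"

# A disclosure is a list of such entries, one per delegated task.
entries = [
    GaidetEntry(stage="Manuscript writing", role="Translation", tool="ChatGPT-4"),
    GaidetEntry(stage="Manuscript writing", role="Proofreading", tool="ChatGPT-4"),
]

for e in entries:
    print(f"{e.stage}: {e.role} ({e.tool})")
```

Representing each delegation as a (stage, role, tool) triple is what makes the disclosure specific rather than a blanket "we used AI" statement.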
The GAIDeT Declaration Generator
To make adopting this taxonomy easier, we developed a GAIDeT Declaration Generator. This is an online tool that guides authors through the disclosure process step by step. It asks the researcher simple questions about how they used AI, covering all the GAIDeT categories, and then automatically produces a standardised declaration text ready to be included in a manuscript. In practice, authors no longer need to write the disclosure from scratch or worry about formatting it correctly, because the tool completes this task for them. The statement can also be modified if necessary to exclude unnecessary information or to accommodate other details.
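To give a feel for what the generator does, here is a hedged sketch of how structured answers could be turned into a declaration. The real GAIDeT Declaration Generator is an online tool; `build_declaration` and its exact output wording are hypothetical, not the tool's actual format.

```python
# Hypothetical sketch: the real generator's wording and format may differ.
def build_declaration(entries, responsible_party):
    """Turn (stage, role, tool) answers into a standardised statement."""
    lines = ["During this work, the following tasks were delegated to generative AI tools:"]
    for stage, role, tool in entries:
        lines.append(f"- {stage}: {role} (tool: {tool})")
    # GAIDeT keeps responsibility with the human researcher.
    lines.append(f"Full responsibility for the content remains with {responsible_party}.")
    return "\n".join(lines)

declaration = build_declaration(
    [("Manuscript writing", "translation and proofreading", "ChatGPT-4")],
    "the human authors",
)
print(declaration)
```

The point of such a tool is that authors answer a fixed set of questions once and receive consistent, correctly formatted text, rather than each author improvising a disclosure.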
Why structured AI disclosure matters
Some journals have started requiring AI use to be declared and even suggesting templates (see Elsevier). But in most cases, researchers still write such declarations voluntarily, often in vague or inconsistent ways. GAIDeT offers a structured, transparent disclosure that captures the relevant details, making AI use easier to understand and more meaningful rather than generic. A structured AI disclosure is not merely bureaucratic box-checking – it has real benefits for the research community:
- For authors: It provides a standard way to demonstrate responsible and transparent AI use. Much like how researchers include conflict-of-interest statements or funding acknowledgements, an AI disclosure shows that they are being open about their AI-based tools and assistants. This can strengthen the credibility of their work.
- For editors: By using GAIDeT, editors can more clearly distinguish between legitimate AI assistance and inappropriate outsourcing of academic work. In other words, it is easier to tell whether an author used AI as a helpful tool or relied on AI to do work that the authors should have done themselves.
- For reviewers: It offers insight into methodological choices made during the study. For example, if an AI tool was used to analyse data or draft a section of the paper, the reviewer will be aware of this and can evaluate the work with that context in mind. This transparency can improve the peer review process.
- For repository moderators: By using GAIDeT, repositories can better decide how to handle AI-assisted outputs. Clear statements make it easier to assess compliance with self-archiving policies and to maintain trust in shared collections.
- For readers: Clear disclosure also matters for the wider audience of a paper. Readers can better understand how much of the work was supported by AI and in what way, which helps them to interpret findings with the right context. Unlike editors, repository moderators or reviewers, they do not use this information to make publication decisions, but it can strengthen their trust in the research and reassure them that AI has been used responsibly.
In short, GAIDeT turns AI disclosure from a generic statement into a clear signal of responsible use, helping to build trust across the research community. At the same time, structured disclosure is not a magic solution: GAIDeT makes AI use more transparent, but it cannot verify that a disclosure is complete or accurate.
A tool for clarity, not enforcement
One key point is that GAIDeT is not about policing researchers – it's about empowering them. The goal of this framework is to encourage openness regarding AI, without adding undue burden on authors or editorial teams. Using GAIDeT should feel similar to other standard disclosures, not an extra hurdle. It is meant to help honest researchers clearly communicate their use of AI, not to catch anyone out. Its value lies in encouraging transparency, not in enforcing compliance.
Conclusion: Embracing transparency and trust with GAIDeT
As GAI's capabilities (and the surrounding controversies) continue to evolve rapidly, the research community needs a common language to talk about how these tools have been used. GAIDeT offers exactly that. Whether you are a journal editor, a reviewer, an author, or someone depositing work under self-archiving policies, adopting structured AI disclosures is a small but meaningful step toward restoring clarity and trust in research practices.
In an era of fast-paced innovation, embracing GAIDeT is a way to be and remain responsible. It encourages a culture where using AI is acceptable (though still debated) as long as the user is transparent about it. We invite researchers to try applying GAIDeT in their own work. By doing so, they can confidently demonstrate their commitment to responsible AI use and uphold the integrity of their research.
AI Contribution Disclosure: The authors used ChatGPT-4 for translation, proofreading, and editing.
Header image: Anne Nygård on Unsplash