The use of AI technologies in scholarly publishing has brought about profound changes in areas such as editorial workflows, data analysis and plagiarism detection. Tasks that previously required human effort can now be fulfilled more efficiently by AI tools such as machine learning algorithms and Large Language Models (LLMs). Several academic publishers, including Taylor & Francis, have signed large deals with AI companies to license their material for the training of AI tools like LLMs. The overall purpose is to make AI tools more accurate and efficient for the future of scholarly publishing, anticipating that AI will become a significant part of the publishing world and stressing that safeguarding material is integral to using AI responsibly. Indeed, there are significant risks in using AI, and, as these publishers have recognised, tools require full development and testing before being integrated into the publishing process. These are crucial steps before global access to AI tools is expanded. AI policy and ethical considerations will be integral to retaining integrity and transparency.
However, scepticism about using AI in academic publishing remains. For example, AI tools have been created to assist peer review, the process responsible for ensuring the integrity and accuracy of articles. Peer review is often criticised for how long it takes to complete: initial review, revision requests and author resubmission all contribute to a lengthy process, yet one essential to maintaining the high-quality standard of a journal. Furthermore, reviewers are anonymous and often unpaid, so reviews can take time to arrive. For these reasons, publishers are seeking ways of making the peer review process more efficient through AI.
So how can AI be implemented in the peer review process?
Stat Reviewer is an AI tool that analyses the integrity of statistics and methodologies in scholarly manuscripts, automating the detection of reporting errors and inconsistencies. It can also detect potential indicators of fraudulent behaviour and check compliance with submission guidelines, administrative tasks previously fulfilled by humans. AI can also inform reviewer selection by flagging potential review bias, conflicts of interest and a lack of diversity. Manuscript assessment systems can estimate the potential impact of a paper through data-driven insights and judge the quality of an article against historical data, because their underlying algorithms are trained to recognise quality indicators and inconsistencies. These mechanisms help publishers ease editor burnout, heavy workloads and time constraints, creating a smoother, more efficient publishing process. Not only would editors be supported, but authors would receive faster response times and reviewers would be free to focus on the more substantial areas of manuscripts that demand expert knowledge.
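To make the idea of automated reporting-error detection concrete, below is a minimal, hypothetical Python sketch (not Stat Reviewer's actual code, whose internals are not public). It scans manuscript text for t-test reports in the common APA style, recomputes each two-tailed p-value from the reported statistic and degrees of freedom, and flags any that disagree beyond a small tolerance.

```python
import re
from scipy import stats

# Hypothetical sketch of a statistical consistency check, in the spirit of
# tools like Stat Reviewer (this is NOT its actual code). It finds reported
# t-tests of the form "t(df) = value, p = value" and recomputes the p-value.
T_TEST = re.compile(
    r"t\((?P<df>\d+)\)\s*=\s*(?P<t>-?\d+\.\d+),\s*p\s*=\s*(?P<p>0?\.\d+)"
)

def check_t_tests(text, tolerance=0.01):
    """Flag reported two-tailed t-test p-values that disagree with the
    p-value recomputed from the reported statistic and degrees of freedom."""
    flags = []
    for match in T_TEST.finditer(text):
        df = int(match.group("df"))
        t = float(match.group("t"))
        p_reported = float(match.group("p"))
        p_recomputed = 2 * stats.t.sf(abs(t), df)  # two-tailed p from |t| and df
        if abs(p_recomputed - p_reported) > tolerance:
            flags.append((match.group(0), round(p_recomputed, 4)))
    return flags

manuscript = "The effect was significant, t(28) = 2.05, p = .001."
for reported, recomputed in check_t_tests(manuscript):
    print(f"Inconsistent: '{reported}' (recomputed p = {recomputed})")
```

Real systems go much further, covering many test types, rounding conventions and one-tailed reporting, but the underlying principle of cross-checking redundantly reported statistics is the same.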
What are the risks?
An over-dependence on AI tools can lead to the overlooking of judgements that humans would otherwise make. While AI is adept in more objective areas like data analysis, there are, inevitably, more nuanced areas that require human understanding and intuition. This creates a risk of missing important corrections or quality checks that humans would otherwise flag, leading to potential problems in judging the integrity and accuracy of a manuscript. It is therefore essential that AI is not over-used, and that it is deployed only after careful analysis of potential risks and oversights.
The expertise of academics in their fields is simply not replicable by machines. AI must not replace human judgement but enhance it. There is a fine line here that publishers must be sensitive to in order to move forward with AI effectively and ethically.
There are also major issues with some LLMs failing to represent diverse demographics. Recent reports show that the LLM GPT-4 perpetuated racial and gender bias in healthcare, causing gross oversights of the health conditions specific to certain groups.[1] The Lancet Global Health recognises that human intuition is essential to the peer review process: ‘A crucial element of peer review that cannot be replicated by AI is human perspective’.[2] The journal goes on to state that it will continue to value its human-led peer review process as an ‘irreplaceable cornerstone of our editorial approach’.[3]
With its growing use in the scholarly publishing world, AI is bringing about more efficient, time-effective publishing processes, which could be hugely beneficial to the research community. But given the potential oversights AI can make, AI policy will be integral to preventing losses in quality and integrity. Transparency and ethical considerations will also be important to protect intellectual property and allow authors to make trustworthy decisions.
[1] “Artificial-intelligence-based peer reviewing: opportunity or threat?”, The Lancet Global Health, 2025, Vol. 13, Issue 3, e372.
[2] Ibid.
[3] Ibid.