Scientifically Speaking: How AI is invading scientific journals
These days, much online prose feels generic: flattened voice, repeated language, and similar punctuation. AI may now be infiltrating science journals.
These days, so much prose online is generic. There are telltale signs in a flattening of voice, sameness of language, and oddly similar punctuation. It’s in identical LinkedIn posts across profiles and in similar-sounding restaurant and hotel reviews online. It’s quite easy to suspect that Artificial Intelligence (AI) has been used in writing these platitudes. And now we have hard evidence that AI is seeping into science journals at an alarming rate.
A Spanish malaria researcher, Carlos Chaccour, recently published a clinical trial on using ivermectin to control mosquito-borne infections. Two days later, The New England Journal of Medicine received a sharply worded letter accusing him of ignoring key research. There was only one problem: the “ignored” papers were both written by him, and they said nothing of the sort. The letter looked polished and confident but was remarkably wrong.
In science, letters to the editor are a peculiar currency. They are indexed and citable, which means they carry weight in academia. They require no new data or lengthy experiments, only argument and formatting. In the “publish or perish” economy, they are the smallest coin in circulation, but letters to journals like The Lancet and The New England Journal of Medicine carry enormous prestige.
Curious about what was going on, Chaccour looked up the letter's author in a database that indexes medical and scientific content. Until 2024, this person had not published a single letter to the editor. Then, in 2025, he published 84. Chaccour and colleagues then carried out a large study, which they posted as a preprint on Research Square.
Chaccour and his colleagues downloaded every letter to the editor indexed between 2005 and September 2025, around 730,000 letters in all, and examined who was writing them. They found three distinct spikes. The first, in 2013, could be explained by how letters were indexed in databases. The second came in 2020, during the pandemic, when researchers confined by lockdowns had far more time to write letters. The third came in 2023, just as generative AI chatbots became widely available.
The numbers themselves tell a story. Some 7,945 authors who had published almost no letters between 2005 and 2022 suddenly vaulted into the top 5% of letter writers after 2023. Together, they produced 22,826 letters, about 22% of all letters published since 2023. Their reach was astonishing: the same small group placed letters in nearly 1,930 journals, including 175 in The Lancet and 122 in The New England Journal of Medicine. Authors who had never written a letter before 2023 were suddenly publishing five or ten in their debut year, a jump of 376% over pre-AI years.
Chaccour’s team also found that one author had written letters spanning dozens of specialties, indicating a kind of expertise that no single human could possess. AI content of this nature requires no effort, expertise, or money to generate.
In contrast, real science takes time. “It took me six years and $25 million [in grant funding] to put out that [NEJM] paper,” Chaccour told Science in an interview.
Our competitive world has limited space, and true experts have limited attention. When a handful of unscrupulous writers flood journals with garbage, actual scholarship is crowded out. The researchers used economic metrics to measure this. The Gini coefficient for letter authorship, a standard measure of how unequally output is distributed, rose nearly 80% in twenty years, from 0.13 to 0.23. The Herfindahl-Hirschman Index, which tracks market concentration, doubled after 2023. In plain terms, a few possibly synthetic voices are drowning out genuine ones.
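For the technically minded, both measures are easy to compute. Here is a minimal Python sketch, my own illustration rather than the study's code, run on a made-up list of letters-per-author counts:

    import numpy as np

    def gini(counts):
        # Gini coefficient: 0 means letters are spread evenly across authors;
        # values near 1 mean a few authors write almost everything.
        x = np.sort(np.asarray(counts, dtype=float))
        n = len(x)
        return 2 * np.sum(np.arange(1, n + 1) * x) / (n * x.sum()) - (n + 1) / n

    def hhi(counts):
        # Herfindahl-Hirschman Index: the sum of squared output shares;
        # it rises as output concentrates in fewer hands.
        shares = np.asarray(counts, dtype=float)
        shares = shares / shares.sum()
        return np.sum(shares ** 2)

    # Hypothetical data: most authors write one or two letters; one writes 84.
    letters_per_author = [1, 1, 1, 2, 2, 3, 40, 84]
    print(round(gini(letters_per_author), 2))  # about 0.73
    print(round(hhi(letters_per_author), 2))   # about 0.48

Even on this toy data, a single hyper-prolific author drives both numbers sharply upward, which is the same pattern the study detected across hundreds of thousands of real letters.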
Chaccour and his team have some suggestions for addressing the problem. They recommend that authors disclose how many letters they have published recently, and that journals turn letters sections into moderated forums that do not count toward citation metrics.
But AI is not only generating letters to the editor; it is also being used to write reviews and research articles. Some uses of AI, such as polishing language, may be acceptable to scientific journals, but others, like fabricating data and figures, are clearly fraudulent.
Editors and reviewers at these journals were overworked even before AI. Now, they are besieged by this additional burden. Requiring authors to disclose whether they have used AI to generate letters, and penalising those who fail to do so, would be another necessary step. But given the sheer scale of the problem, I fear these measures may not be adequate. Human gatekeepers cannot match the output of machines. This is a problem that will plague science for the foreseeable future.
(Anirban Mahapatra is a scientist and author, most recently of the popular science book When The Drugs Don’t Work: The Hidden Pandemic That Could End Medicine. The views expressed are personal.)