Scientific Paper Slop: How AI Is Forcing Publishing to Rethink Research Integrity
The machinery that protects scientific quality was built for a world with friction. It took time to write a paper. It took effort to submit one. It took resources to launch a journal.
That friction is collapsing — and the implications are only beginning to surface.
In our latest episode of Practical AI in Healthcare, we spoke with Tiffany Leung, MD, MPH, the Scientific Editorial Director at JMIR Publications, about how generative AI is reshaping the landscape of scientific publishing. Her perspective is shaped by years at the intersection of clinical medicine and editorial practice, and her team is on the front lines of a rapidly evolving challenge.
The Perfect Storm
Dr. Leung described a troubling convergence: academic incentive structures that reward volume over impact, combined with tools that make content generation nearly effortless.
“Combine that mindset and the incentive structures with technology that is available,” she said. “And you kind of have this perfect storm of lots of… AI slop. There’s the potential for a lot of scientific paper slop.”
The term is evocative — and intentionally so. It names a phenomenon that has been creeping into editorial inboxes: submissions that are correctly formatted on the surface but lack substance, rigor, or genuine contribution. When the cost of producing a paper approaches zero, the burden of filtering shifts entirely to reviewers and editors.
Three Guardrails That Scale
Rather than attempt to police specific tools, JMIR adopted a principle-based framework: Accountability (humans remain responsible for all content), Confidentiality (don’t expose unpublished work to third-party models), and Transparency (disclose when in doubt).
What’s notable about this approach is its technology-agnosticism. It doesn’t require defining what counts as “AI” — a moving target if ever there was one. Instead, it focuses on behavior and responsibility.
Editorial Decision Support
One of the more striking parallels Dr. Leung drew was between clinical decision support and emerging research integrity tools. Platforms like Signals and the STM Integrity Hub now flag potential concerns — duplicate submissions, citation anomalies, authors with retraction histories — for human review.
“These tools provide information for decision support,” she explained. “The idea is to identify more information that might not otherwise be revealed to you.”
It’s the same logic that governs well-designed clinical AI: surface signals, preserve human judgment, avoid automation bias. Publishing is now running this experiment at scale.
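To make the "decision support, not decision making" pattern concrete, here is a minimal sketch of what signal-surfacing might look like in code. It is purely illustrative: the Submission, IntegritySignal, and screen names are invented for this example, and real platforms such as Signals or the STM Integrity Hub work very differently under the hood. The point is the shape of the output: flags with context, handed to a human editor, never an automated verdict.

```python
# Hypothetical sketch of signal-surfacing decision support.
# Not the actual Signals or STM Integrity Hub implementation;
# every name and check here is assumed for illustration.
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    authors: list[str]
    references: list[str]

@dataclass
class IntegritySignal:
    name: str          # what the check looked for
    detail: str        # human-readable context for the editor
    severity: str      # "info" or "warn", never "reject"

def screen(submission: Submission,
           known_titles: set[str],
           retracted_authors: set[str]) -> list[IntegritySignal]:
    """Collect signals for an editor to review; the function never decides."""
    signals: list[IntegritySignal] = []

    # Possible duplicate submission (assumes known_titles are lowercased).
    if submission.title.lower() in known_titles:
        signals.append(IntegritySignal(
            "possible duplicate submission",
            f"Title matches an earlier submission: {submission.title!r}",
            "warn"))

    # Authors with a retraction history are surfaced, not penalized automatically.
    flagged = [a for a in submission.authors if a in retracted_authors]
    if flagged:
        signals.append(IntegritySignal(
            "author retraction history",
            f"Authors with prior retractions: {', '.join(flagged)}",
            "info"))

    # A crude citation-anomaly check, purely as a placeholder.
    if len(submission.references) < 5:
        signals.append(IntegritySignal(
            "citation anomaly",
            f"Only {len(submission.references)} references cited",
            "info"))

    return signals  # surfaced to a human; no automated accept or reject
```

The design choice worth noticing is that the function returns information rather than a decision, which is exactly the automation-bias safeguard Dr. Leung describes.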
What’s Next: From Papers to Knowledge Atoms
Looking ahead, Dr. Leung pointed to structural changes that could reshape how we think about scientific knowledge itself. One emerging concept: breaking papers into citable component parts — methods, datasets, findings — that can be linked and credited independently.
It’s a shift from monolithic articles to what might be called atomic knowledge structures: modular, traceable, and better suited to an era of machine-readable science.
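As a thought experiment, such a structure could be modeled roughly as follows. Everything in the sketch, from the class names to the identifier format, is an assumption made for illustration, not a schema any publisher has adopted.

```python
# Illustrative sketch of one way "atomic knowledge structures" might be modeled.
# Field names and identifiers are assumptions, not an existing standard.
from dataclasses import dataclass, field

@dataclass
class KnowledgeAtom:
    atom_id: str                 # persistent identifier, e.g. a DOI-like string
    kind: str                    # "method", "dataset", or "finding"
    content: str                 # the citable unit itself
    credited_to: list[str]       # contributors credited for this specific atom
    links: list[str] = field(default_factory=list)  # atom_ids this atom builds on

@dataclass
class Article:
    article_id: str
    atoms: list[KnowledgeAtom]   # the paper as a graph of citable parts

# A finding can point at the exact method and dataset it depends on,
# so credit and provenance travel with the component, not just the paper.
dataset = KnowledgeAtom("10.x/example.data.1", "dataset", "Dataset description", ["Lab A"])
method = KnowledgeAtom("10.x/example.meth.1", "method", "Analysis method description", ["Author B"])
finding = KnowledgeAtom("10.x/example.find.1", "finding", "Primary finding text",
                        ["Author B", "Lab A"],
                        links=[dataset.atom_id, method.atom_id])
paper = Article("10.x/example", [dataset, method, finding])
```

The appeal of this shape is that a dataset or method can be cited, credited, and machine-read on its own, independent of the article that first contained it.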
Whether these models gain traction remains to be seen. But the conversation is happening — and it’s one that affects anyone who produces, consumes, or depends on scientific evidence.