"What we're essentially seeing right now is the equivalent of a denial-of-service attack. Real researchers drowning in noise," one expert said.
Academics and cybersecurity professionals warn that a wave of fake scientific research created with artificial intelligence (AI) is quietly slipping past plagiarism checks and into the scholarly record. This phenomenon puts the future credibility of scientific research at risk by amplifying the long-running industry of "paper-mill" fraud, experts say.
Academic paper mills, fake organizations that profit from falsified studies and authorship, have plagued scholars for years, and AI is now acting as a force multiplier.
Some experts believe structural changes are needed, not just better plagiarism checkers, to solve the problem.
The scope of the problem is staggering, with more than 10,000 research papers retracted globally in 2023, according to Nature Portfolio.
Manuscripts fabricated using large language models (LLMs) are proliferating across multiple academic disciplines and platforms, including Google Scholar, the University of Borås found. A recent analysis published in Nature Portfolio observed that LLM tools including ChatGPT, Gemini, and Claude can generate plausible research that passes standard plagiarism checks.
In May, Diomidis Spinellis, a computer science academic and professor at the Athens University of Economics and Business, published an independent study of AI-generated content found in the Global International Journal of Innovative Research after discovering his name had been used in a false attribution.
Spinellis noted that only five of the 53 articles examined, those with the fewest in-text citations, showed signs of human involvement. AI detection scores confirmed "high probabilities" of AI-created content in the remaining 48.
In an analysis of AI-generated "junk" science published on Google Scholar, Swedish university researchers identified more than 100 suspected AI-generated articles.
Google did not respond to The Epoch Timesโ request for comment.
The Swedish study authors said a key concern with AI-created research, human-assisted or otherwise, is that misinformation could be used for "strategic manipulation."
"The risk of what we call 'evidence hacking' increases significantly when AI-generated research is spread in search engines. This can have tangible consequences as incorrect results can seep further into society and possibly also into more and more domains," study author Björn Ekström said.
Moreover, the Swedish university team believes that even when such articles are withdrawn, AI-generated papers create a burden for the already hard-pressed peer review system.