Scientific research generated by AI threatens not only society's general knowledge but also public trust in science: this is the conclusion of researchers at the University of Borås, Sweden, who recently identified more than a hundred articles likely generated by artificial intelligence in the Google Scholar search engine.
According to the study's authors, fabricated science produced by chatbots and other AI systems is readily available and can spread easily and at low cost, which plays into the hands of people with bad intentions.
The research is published in Harvard Kennedy School Misinformation Review.
For Jutta Haider and Björn Ekström, two of the authors of the work, this proliferation of poor-quality scientific articles, produced by systems known to “invent” information, represents a danger both to society and to the scientific community.
Manipulation using false evidence
According to the researchers, one of the main fears regarding AI-generated research is the increased risk of evidence being misused – that is, that false studies could be exploited for strategic manipulation.
“This risk increases significantly when AI-generated studies are spread via search engines. This can have tangible consequences, as erroneous results can spread through society, and possibly into other areas,” argues Dr. Ekström.
Moreover, the authors note that these problematic articles have already spread to other parts of the online research infrastructure – to various databases and archives, as well as to social media. And even if the articles in question are removed, there is a risk that they have already had time to circulate, and will continue to do so.
Worse still, these AI-generated scientific articles add pressure to the peer review system, the process for verifying whether research is legitimate and sound, a system already strained by the sheer volume of studies being published.
A need for greater literacy
In the eyes of the study's authors, the fact that AI-generated research is spreading within search engines and databases means that users need to be all the more discerning when it comes to scientific research found online.
“If we can’t trust the research we read on the web, then we risk making decisions that are based on incorrect information. But as much as this question represents an issue in terms of scientific misconduct, it is also an issue in terms of media and scientific literacy,” maintains Dr. Haider.
The researchers also point out that Google Scholar is not a curated academic database. The search engine is easy to use, but it lacks safeguards to ensure the quality of the content it indexes. This is already a problem with Google’s regular search results, but it becomes even more of an issue when it comes to making science accessible.
“The ability to determine which scientific journals publish quality peer-reviewed studies is essential, and determining what constitutes reliable research is of great importance for decision-making and for forming opinions on various topics,” concludes Dr. Haider.