This article has been removed at the request of the Editors-in-Chief and the authors because informed patient consent was not obtained by the authors in accordance with journal policy prior to publication. The authors sincerely apologize for this oversight.
In addition, the authors have used a generative AI source in the writing process of the paper without disclosure, which, although not being the reason for the article removal, is a breach of journal policy. The journal regrets that this issue was not detected during the manuscript screening and evaluation process and apologies are offered to readers of the journal.
“The journal regrets” – sure, the journal. Nobody actually taking responsibility …
Daaaaamn they didn’t even get consent from the patient😱😱😱 that’s even worse
I mean holy shit you’re right, the lack of patient consent is a much bigger issue than getting lazy with writing the discussion.
It’s removed from Elsevier’s site, but still available on PubMed Central: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11026926/#
The worst part is, if I recall correctly, articles are stored in PubMed Central if they received public funding (to ensure public access), which means that this rubbish was paid for with public funds.
What, nobody read it before it was published? Whenever I’ve tried to publish anything it gets picked over with a fine-toothed comb. But somehow they missed an entire paragraph of the AI equivalent of that joke from Parks and Rec: “I googled your symptoms and it looks like you have ‘network connectivity issues’”
I think part of the issue is sheer volume. You submit a few papers a year; an AI can in theory churn out a few per minute. Even if you filter out 98% of them, mistakes will happen.
That said, this particular error in the meme is egregious.
Nobody would read it even after it was published. No scientist has time to read others’ papers. They’re too busy writing their own. This mistake probably got it more readers than 99% of all other scientific papers.
I am still baffled by the rat dick illustration that got past review
dck
RAT DICK,
RAT DICK,
WHATCHA GONNA DO,
WHATCHAGONNADO WHEN THEY COME FOR YOU.
Raneem Bader, Ashraf Imam, Mohammad Alnees, Neta Adler, Jonathan Ilia, Diaa Zugayar, Arbell Dan, Abed Khalaileh. You are all accused of using ChatGPT or whatever else to write your paper. How do you plead?
My money is on non-existent. I bet one of those dudes is real, at best.
How do you feel about using chatgpt as a translation tool?
Depends on what kind of translation we’re talking here. Translating some chatter? Translating a web page (most of these suck)? Translating a book for it to be published? Translating a book so you can read it yourself? Translating a scientific paper so you can publish it, without proofreading the translation?
Is it the personal vs. private vs. public use that’s bothersome, or just the fact that these fuckers didn’t proofread? That’s what I’m trying to figure out.
They didn’t proofread, plus there’s a real chance that some other parts of the paper might be AI nonsense. If something so glaringly problematic got past, what smaller mistakes are also there? They effectively poisoned their own paper
How do you plead?
“I apologize, but I do not feel comfortable performing any pleas or participating in negative experiences. As an AI language model, I aim to help with document production. Perhaps you would like me to generate another article?”
They mistakenly sent the “final final paper.docx” file instead of the “final final final paper v3.docx”. It could’ve happened to any of us.
It is astounding to me that this happened. A complete failure of peer review, of the editors, and OF COURSE of the authors. Just absolutely bonkers that this made it to publication. Completely clown shoes.
It keeps happening across all fields. I think we are about to witness a complete overhaul of the publishing model.
Using AI to detect AI use in research papers: the research paper.
I’ve been saying it to everyone who’ll listen …
the journals should be run by universities as non-profits with close ties to the local research community (i.e., editors from local faculty and as much of the staff as possible drawn from the student/PhD/postdoc body). It’s really an obvious idea. In legal research, there’s a long tradition of having students run journals (Barack Obama, if you recall, was an editor of the Harvard Law Review … that was as a student). I personally did it too … it’s a great experience for a student to see how the sausage is made.
My field’s too small to have separate journals for each university, but we do have one in the Free Journal Network that’s run by the community
You don’t need one in each university; that wouldn’t scale. There’d be natural specialisations. And journals could even move from university to university as academic personnel change over time.
The main point is that they’re non-profit and run by researchers for researchers.
Wouldn’t you want a pediatric hepatobiliary surgeon? A four-month-old is going to be a tricky case, I’d think.
the chatbot couldn’t even recommend the right specialist 😑
Probs recommend a ‘Paedophile Hobgoblin’.
What if this was actually just a huge troll, and it wasn’t AI?
Now that would be fucking hilarious.
I started a business with a friend to automatically identify things like this, fraud like what happened with Alzheimer’s research, and mistakes like missing citations. If anyone is interested, has contacts or expertise in relevant domains or just wants to talk about it, hit me up.
What’s the business model? (How does that generate revenue?)
We’re providing review assistance and some types of automated replication to publishers for a yearly rate, and planning to sell subscriptions to individual researchers for $50/mo.
Google Retraction Watch. Academia has good people already doing this.
https://www.crossref.org/blog/news-crossref-and-retraction-watch/
Legend right here.
Ah… welp, tis the AI era, I guess…
It’s OK, nobody will be able to read it anyway because it’s on Elsevier.
In Elsevier’s defense, reading is hard and they have so much money to count.
Radiology Case Reports seems to be a low-quality journal. https://www.scimagojr.com/journalrank.php?category=2741&page=5&total_size=335
To me, this is a major ethical issue. If any actual humans submitted this “paper”, they should be severely disciplined by their ethics board.
But the publisher who published it should be liable too. Wtf is their job then? Parasitizing off of publicly funded research?
Research journals are often rated for the quality of the content they publish. My guess is that this “journal” is just shit. If you’re a student or researcher, you will come across shit like this and you should be smart enough to tell when something is poor quality.
Yes
Bitfucker knew that was rhetorical.
How was that obvious?
I can’t tell if this question is rhetorical.
A spanking!
Guys, it’s simple: they just need to automate an AI to read these papers for them to catch if AI language was used. They can automate the entire peer review process /s
Dude. Couldn’t even proofread the easy way out they took
This almost makes me think they’re trying to fully automate their publishing process. So, no editor in that case.
Editors are expensive.
If they really want to do it, they can just run a local language model trained to proofread stuff like this. Would be way better.
This is exactly the line of thinking that led to papers like this being generated.
I don’t think so. They are using AI from a 3rd party. If they train their own specialized version, things will be better.
That’s not necessarily true. General-purpose 3rd-party models (ChatGPT, llama3-70b, etc.) perform surprisingly well on very specific tasks. While training or fine-tuning your own specialized model should indeed give better results, the crazy amount of computational resources and specialized manpower needed makes it infeasible and impractical in many applications. If you can get away with an occasional “as an AI model…”, you are better off using existing models.
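Funnily enough, for slips this egregious you don’t even need a model to catch them: a dumb phrase screen over the manuscript would have flagged this paper before submission. A minimal sketch, assuming a plain-text manuscript; the phrase list and script are illustrative, not any real tool:

```python
import re
import sys

# Illustrative list of common chatbot boilerplate; a real screen
# would use a much longer, curated list.
AI_BOILERPLATE = [
    r"as an ai (language )?model",
    r"i don'?t have access to real[- ]time",
    r"i'?m sorry, but i",
    r"certainly! here (is|are)",
]

def flag_ai_phrases(text: str) -> list[tuple[int, str]]:
    """Return (line number, matched pattern) pairs for suspicious boilerplate."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for pattern in AI_BOILERPLATE:
            if re.search(pattern, line, flags=re.IGNORECASE):
                hits.append((lineno, pattern))
    return hits

if __name__ == "__main__":
    manuscript = open(sys.argv[1], encoding="utf-8").read()
    for lineno, pattern in flag_ai_phrases(manuscript):
        print(f"line {lineno}: matches {pattern!r}")
```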
Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.
There is nothing wrong with using AI to proofread a paper. It’s just a grammar checker but better.
Proofreading involves more than just checking grammar, and AIs aren’t perfect. I would never put my name on something to get published publicly like this without reading it through at least once myself.
You can literally use tools to check grammar without using AI at all. What an LLM does is predict which word comes next in a sequence, and when it’s wrong, as it often is, you’ve just attempted to publish a paper full of hallucinations, wasting the time and effort of so many people because you’re greedy and lazy.
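To make the “it just predicts the next word” point concrete, here’s a toy sketch, assuming the Hugging Face transformers library and the small public GPT-2 checkpoint (purely illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The patient was referred to a"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, seq_len, vocab_size)

# Distribution over the single next token: the model ranks likely
# continuations, with no concept of whether any of them is true.
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)
for p, tok in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(tok))!r}  p={float(p):.3f}")
```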
This is what baffles me about these papers. Assuming the authors are actually real people, these AI-generated mistakes in publications should be pretty easy to catch and edit.
It does make you wonder how many people are successfully putting AI-generated garbage out there if they’re careful enough to remove obviously AI-generated sentences.
I’ve heard the word “delve” has suddenly become a lot more popular in some fields
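That’s the kind of claim you can check yourself if you have a dump of abstracts. A rough sketch, where abstracts.csv and its year/abstract columns are hypothetical (e.g. exported from a PubMed search):

```python
import csv
from collections import Counter

totals, hits = Counter(), Counter()
with open("abstracts.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        year = row["year"]
        totals[year] += 1
        if "delve" in row["abstract"].lower():
            hits[year] += 1

# Fraction of abstracts per year containing "delve"; a sudden jump
# after 2022 would be the tell.
for year in sorted(totals):
    rate = hits[year] / totals[year]
    print(f"{year}: {rate:.2%} of abstracts contain 'delve'")
```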
I definitely utilize AI to assist me in writing papers/essays, but never to just write the whole thing.
Mainly use it for structuring or rewording sections to flow better or sound more professional, and always go back to proofread and ensure that any information stays correct.
Basically, I provide any data/research and get a rough layout down, and then use AI to speed up the refining process.
EDIT: I should note that I am not writing scientific papers using this method, and doing so is probably a bad idea.
There are perfectly ethical ways to use it, even for papers, as your example shows. It’s been a great help for my ADHD ass to get some structure in my writing.
https://www.oneusefulthing.org/p/my-class-required-ai-heres-what-ive
Yeah, same. I’m good at getting my info together and putting my main points down, but structuring everything in a way that flows well just isn’t my strong suit, and I struggle to sit there for long periods of time writing something I could just explain in a few short points, especially if there’s an expectation for a certain length.
AI tools help me to get all that done whilst still keeping any core information my own.