AI authors of cancer papers unmasked (sort of)

It’s happening; it’s just very hard to tell how much.


The Back Page can’t be the only one to think that while it was all a bit of fun in 2023, the whole AI thing has gone far enough and is starting to get a bit scary, in a rather real and humanity-threatening way.  

While we’ve all enjoyed having programs write ballads about our cats and draw mangled human hands, it’s becoming a common lament in the creative industries that robots and algorithms were supposed to take the boring, menial jobs, not spend their time painting and writing poetry.  

The news that News Corp and other media outlets have happily signed over their content as training material to OpenAI, maker of the large language model ChatGPT, feels ominous (even if the company was hitherto vacuuming it up for free).

As for news generation, research from the University of Canberra has found people are largely uncomfortable with bot-produced copy (though they mind it less for sport, or even arts and culture, than for political coverage).

In medicine, it’s no longer surprising to anyone that an AI can beat human radiologists at detecting prostate cancer, but how about written content generation?

Synthetic scribes are already making life easier in the consult room, but you have to watch for AI’s tendency to hallucinate, i.e. just make sh*t up when it doesn’t know the answer.     

Another place you don’t want a machine making sh*t up is in academic papers.  

To find out how much AI is being used in oncology papers, and what the quality of AI detection is like, researchers from the American Society of Clinical Oncology (ASCO) and the University of Chicago took the more than 15,500 abstracts presented at ASCO clinical meetings from 2021 to 2023 and fed them to three general-purpose AI language detectors: GPTZero (v2), Originality.ai (AI Scan) and Sapling.

They used 100 abstracts from 2018-19 (ah, the halcyon pre-ChatGPT days) as true-negative controls, and created 200 mixed abstracts by sewing AI-written background sections onto methods, results, and conclusions sections penned by humans (a kind of Piltdown Paper).

The three detectors produced very heterogeneous results, but all showed a big proportional increase in fully or partly AI-generated abstracts over the three years, ranging from a doubling to a tripling.

One of the detectors, Originality.ai, found a majority of abstracts were of mixed authorship (4015 in 2023), which the authors don’t entirely credit.  

Abstracts for online-only presentation had higher odds of being AI-authored, while abstracts with a clinical trial ID had lower odds.  

Using AI in research writing throws up ethical concerns around plagiarism and author attribution, the authors note, and may “generate fictitious/hallucinated content – particularly citations to nonexistent articles”. 

But while saying it should be regulated, they don’t come down too hard on it, noting that AIs have the potential to help with editing and improve writing quality.

And on the use of detectors, they warn that “care must be taken to ensure AI detection does not perpetuate disparities in scientific literature publication, recognizing that AI detection does not imply poor science or fraudulent findings, and false positives are not uncommon”. 

Ultimately the big limitation, as they say, “is the lack of a ground truth for AI content detection”. For all the careful futzing with testing and detection thresholds to strengthen their results, there is no gold standard with which to verify that positives and negatives are true.

If only the hand of AI were as easy to spot in medical writing as it is in illustrations – we reckon even a radiologist could see the problems here (hi Mum, if you’re reading this).

Send AI-generated story tips and groove metal songs about your schnauzer to penny@medicalrepublic.com.au
