Google’s AI cancer-spotting claims under microscope

Published on 21/10/2020 | Written by Heather Wright


Transparency and reproducibility of AI research questioned…

Google’s cancer-spotting AI claims are under the microscope, sparking a call from a number of doctors and scientists from around the globe for more transparency and reproducibility in AI-based research.

In January, an international team including Google Health and Imperial College London claimed Google’s algorithm was more accurate than human radiologists at spotting breast cancer from mammograms. The report garnered international headlines.

“On paper and in theory, the study is beautiful. But if we can’t learn from it then it has little to no scientific value.”

Ten months down the track, however, it has sparked a call from scientists at a number of institutes – including Stanford University, Johns Hopkins University, Harvard University School of Public Health, Princess Margaret Cancer Centre and the University of Toronto – for code, models and computational environments to be shared when reports are published. The group say the claims aren’t being backed up by usable evidence and that journals are vulnerable to the hype of AI.

In an article published in science journal Nature this month, the team argues that the lack of transparency and detail about the methods and algorithmic code in the Google research ‘undermines its scientific value’.

(Google Health provided a rebuttal in the same publication, arguing the need to protect patient information and protect the AI from attacks.)

Transparency has long been a sticking point for artificial intelligence, with increasing calls to open up the AI ‘black box’ – the inner workings of AI models – usually to guard against discrimination and to increase trust.

In the scientific community, however, transparency into how models work is a requirement for ensuring results can be scrutinised and replicated in further testing.

And that, says a group of more than 20 doctors and researchers, is where Google’s study – and others like it – have fallen down.

“Scientific progress depends on the ability of researchers to scrutinise the results of a study and reproduce the main finding to learn from,” Benjamin Haibe-Kains, senior scientist at Princess Margaret Cancer Centre and first author of the article, says.

“But in computational research, it’s not yet a widespread criterion for the details of an AI study to be fully accessible,” Dr Haibe-Kains says.

“The work by McKinney et al demonstrates the potential of AI in medical imaging, while highlighting the challenges of making such work reproducible.”

The group say the Google Health study lacked sufficient description of its methods, including the code and models used, making it impossible for other researchers to replicate – a critical shortcoming, since scientific progress depends on independent researchers being able to scrutinise a study, reproduce its main results and build on them in future work.

While the Google Health breast cancer study might have been the trigger point for the debate, the researchers say the problem goes well beyond that single report. Haibe-Kains calls out the push to publish findings as a ‘problematic pattern’ and says journals are vulnerable to the hype of AI and may lower their standards for accepting papers that don’t include all the materials required to make a study reproducible.

The lack of properly described models can slow down the transition of potentially lifesaving AI algorithms into clinical settings, they say.

“The lack of access to code and data in prominent scientific publications may lead to unwarranted and even potentially harmful clinical trials,” the group say in their Nature report, Transparency and reproducibility in artificial intelligence.

It’s not an insurmountable problem, however, with a number of frameworks and platforms available to make AI research more transparent and reproducible. The group notes that source code can easily be published on sites like GitHub or Bitbucket.
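In practice, sharing the ‘computational environment’ can be as simple as publishing a manifest of software versions and settings alongside the code in that repository. The sketch below is one illustrative way of doing so in Python – the file name, manifest format and seed value are assumptions for the example, not anything prescribed by the group.

```python
# Minimal sketch: snapshot the computational environment so a published
# analysis can be re-run. File name and manifest layout are illustrative.
import json
import platform
import random
import sys
from importlib import metadata


def capture_environment(seed: int = 42) -> dict:
    """Fix the random seed and record interpreter, OS and package versions."""
    random.seed(seed)  # make any stochastic steps repeatable
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "random_seed": seed,
        "packages": {
            dist.metadata["Name"]: dist.version
            for dist in metadata.distributions()
        },
    }


if __name__ == "__main__":
    # Publish this manifest next to the code, e.g. in the GitHub repository.
    with open("environment.json", "w") as fh:
        json.dump(capture_environment(), fh, indent=2)
```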

Sharing data, particularly patient information, is also a key issue. The group says sharing of raw data is becoming more common in biomedical literature, and that where the data can’t be shared, the model’s predictions and labels should be released instead.
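In its simplest form, releasing predictions and labels means publishing a table of de-identified case identifiers, model scores and ground-truth labels that others can use to verify the reported performance. A minimal sketch follows; the field names, file layout and example values are assumptions for illustration only.

```python
# Minimal sketch: export de-identified per-case model outputs and labels
# when the underlying patient data itself cannot be shared.
import csv


def export_predictions(results, path="predictions.csv"):
    """results: iterable of (case_id, predicted_score, true_label) tuples."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["case_id", "model_score", "true_label"])
        for case_id, score, label in results:
            writer.writerow([case_id, f"{score:.4f}", label])


# Example: anonymised identifiers, model score, ground-truth label (made up).
export_predictions([("case-0001", 0.8731, 1), ("case-0002", 0.0412, 0)])
```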

“Above all, concerns about data privacy should not be used as a way to distract from the requirement to release code,” they say.

 
