A computer programme can identify breast cancer from routine scans with greater accuracy than human experts, researchers said in what they hoped could prove a breakthrough in the fight against the global killer.
Breast cancer is one of the most common cancers in women, with more than two million new diagnoses last year alone. Regular screening is vital in detecting the earliest signs of the disease in patients who show no obvious symptoms. In Britain, women over 50 are advised to get a mammogram every three years, the results of which are analysed by two independent experts.
But interpreting the scans leaves room for error, and a small percentage of all mammograms either return a false positive, misdiagnosing a healthy patient as having cancer, or a false negative, missing the disease as it spreads. Now researchers at Google Health have trained an artificial intelligence model to detect cancer in breast scans from thousands of women in Britain and the United States. The images had already been reviewed by doctors in real life, but, unlike in a clinical setting, the machine had no patient history to inform its diagnoses.
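Measured on a set of scans with known outcomes, those two error types reduce to simple counts. The sketch below is a minimal illustration of how false-positive and false-negative rates might be tallied for a yes/no screening model; it is not code from the study, and the variable names and example data are assumptions made for illustration.

```python
# Minimal sketch (not from the study): counting false positives and
# false negatives for a binary screening classifier.
# 1 = cancer present, 0 = no cancer; the example data are made up.

truth       = [0, 0, 1, 1, 0, 1, 0, 0]   # confirmed outcomes
predictions = [0, 1, 1, 0, 0, 1, 0, 0]   # the model's reading of each scan

false_positives = sum(1 for t, p in zip(truth, predictions) if t == 0 and p == 1)
false_negatives = sum(1 for t, p in zip(truth, predictions) if t == 1 and p == 0)

healthy = truth.count(0)
cancers = truth.count(1)

print(f"False-positive rate: {false_positives / healthy:.1%}")  # healthy patients flagged as having cancer
print(f"False-negative rate: {false_negatives / cancers:.1%}")  # cancers the reading missed
```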
The team found that their AI model could predict breast cancer from the scans with an accuracy similar to that of expert radiographers. The AI also cut the proportion of cases in which cancer was incorrectly identified, by 5.7 percent in the US and 1.2 percent in Britain, and reduced the percentage of missed diagnoses by 9.4 percent among US patients and by 2.7 percent in Britain.
“The earlier you identify a breast cancer the better it is for the patient,” Dominic King, UK lead at Google Health, told AFP. “We think about this technology in a way that supports and enables an expert, or a patient ultimately, to get the best outcome from whatever diagnostics they’ve had.”
– Computer ‘second opinion’ –
In Britain, all mammograms are reviewed by two radiologists, a necessary but labour-intensive process. The team at Google Health also ran experiments comparing the computer’s decision with that of the first human scan reader. If the two diagnoses agreed, the case was marked as resolved. Only when the outcomes were discordant was the machine then compared with the second reader’s decision.
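That double-reading experiment can be pictured as a simple decision rule: the AI's verdict is checked against the first human reader, and a second human read is only consulted when the two disagree. The sketch below is a hypothetical illustration of the workflow described above, not code from the study; the function name, arguments and example cases are assumptions.

```python
# Hypothetical sketch of the double-reading experiment described above:
# the AI's verdict is compared with the first human reader, and the
# second human reader is only needed when the two disagree.

def resolve_case(first_reader_recall: bool, ai_recall: bool,
                 second_reader_recall: bool) -> tuple[bool, bool]:
    """Return (final decision, whether the second reader was needed)."""
    if first_reader_recall == ai_recall:
        # Concordant: the case is marked as resolved without a second read.
        return first_reader_recall, False
    # Discordant: fall back to the second human reader's decision.
    return second_reader_recall, True

# Made-up example cases: (first reader, AI, second reader)
cases = [(False, False, False), (True, True, False),
         (False, True, True), (True, False, False)]

second_reads = sum(needed for _, needed in (resolve_case(*c) for c in cases))
print(f"Second reader consulted in {second_reads}/{len(cases)} cases")
```

In the study's data, the AI agreed with the first reader often enough that most second reads could be skipped, which is the workload saving reported below.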
The study by King and his team, published in Nature, showed that using AI to verify the first human expert reviewer’s diagnosis could save up to 88 percent of the workload for the second clinician. “Find me a country where you can find a nurse or doctor that isn’t busy,” said King. “There’s the opportunity for this technology to support the existing excellent service of the (human) reviewers.” Ken Young, a doctor who manages mammogram collection for Cancer Research UK, contributed to the study. He said it was unique for its use of real-life diagnosis scenarios from nearly 30,000 scans.
“We have a sample that is representative of all the women that might come through breast screening,” he said. “It includes easy cases, difficult cases and everything in between.” The team said further research was needed but they hoped that the technology could one day act as a “second opinion” for cancer diagnoses.