
Google’s DeepMind may have made the first significant application of artificial intelligence (AI) in healthcare, after developing an AI system that diagnoses disease by analysing medical images.

According to the Financial Times, the company has crunched data from thousands of retinal scans to train an AI algorithm to detect signs of eye disease more quickly and accurately than human specialists.

Findings were submitted to a medical journal after “promising signs” from a two-year partnership with the National Health Service and London’s Moorfields Eye Hospital. The technology could enter clinical trials in a few years if the results pass academic peer review.

Dominic King, clinical lead for DeepMind Health, explained, “In specific areas like medical imaging, you can see we’re going to make really tremendous progress in the next couple of years with artificial intelligence. Machine learning could have a very important role picking up things more sensitively and specifically than currently happens.”

The organisation’s algorithm has been trained using anonymised 3D retinal scans provided by Moorfields and labelled for signs of disease by doctors. At present, the company is discussing clinical trials with hospitals including Moorfields.

Because the images provide rich data, with millions of pixels of information, the algorithm can learn to analyse them for signs of three of the most serious eye diseases: glaucoma, diabetic retinopathy and age-related macular degeneration.

Director of research and development at Moorfields, Peng Tee Khaw, said, “I am optimistic that what we learn from this research will benefit people around the world and help put an end to avoidable sight loss. We hope to publish our findings in a peer-reviewed journal within the next year.”

Dr King went on to explain that the AI is “generalised,” meaning it can be applied to other kinds of images. DeepMind has revealed that the next stage will involve training the algorithm to analyse radiotherapy scans, in partnership with University College London Hospitals, and mammograms, in partnership with Imperial College London.

Labelling images for head and neck cancer “is a five or six-hour job, usually doctors sit and do it after work,” explained one consultant labelling scans at Google’s offices. “The artificial intelligence has happened at the same time as the [NHS] is under so much pressure.”

Hospitals around the world are discussing whether AI can reduce the burden of repetitive work. However, the relationship between big technology companies and hospitals remains sensitive.

Last year, the UK’s data protection watchdog ruled that an NHS trust broke the law by giving DeepMind access to the medical records of 1.6m patients. This ruling related to a trial of DeepMind’s Streams medical diagnosis app, which does not use AI but analyses data and alerts nurses and doctors when a patient’s readings look abnormal. Since then, the company has set up a research unit focused on the ethical and social implications of the AI it is creating.

Dr King concluded, “Artificial intelligence needs to be implemented and evaluated, I would say, as rigorously as a new pharmaceutical or medical device, so you have evidence that then allows you to scale up across a health system.”
