Software developed using machine learning can be used to predict someone’s risk of heart disease in less than a minute by analyzing the veins and arteries in their eye.
The new research, published in the British Journal of Ophthalmology, paves the way for the development of quick and cheap cardiovascular screenings, if the findings are validated in future clinical trials. These screenings would let individuals know their risk of stroke and heart attack without the need for blood tests or even blood pressure measurements.
“This AI tool could let someone know in 60 seconds or less their level of risk,” the lead author of the study, Alicja Rudnicka, told The Guardian. The study found that the predictions were as accurate as those produced by current tests.
The software works by analyzing the web of blood vessels contained within the retina of the eye. It measures the total area covered by these arteries and veins, as well as their width and “tortuosity” (how bendy they are). All these factors are affected by an individual’s heart health, allowing the software to make predictions about a subject’s risk of heart disease just by looking at a non-invasive snapshot of their eye.
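To give a sense of what a feature like tortuosity actually measures, here is a toy sketch, not the QUARTZ algorithm itself: one common definition is the ratio of a vessel’s arc length to the straight-line distance between its endpoints, so a perfectly straight vessel scores 1.0 and bendier ones score higher. The vessel coordinates below are made up for illustration.

```python
import math

def tortuosity(points):
    """Ratio of a vessel's arc length to the straight-line (chord)
    distance between its endpoints; 1.0 means perfectly straight."""
    arc = sum(math.dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord

# Hypothetical retinal-vessel centreline, as (x, y) pixel coordinates.
vessel = [(0, 0), (1, 0.5), (2, 0.2), (3, 0.8), (4, 0.3)]
print(round(tortuosity(vessel), 3))  # → 1.108
```

In practice, summary features like this (plus vessel area and width) would be extracted automatically from the retinal photograph and fed into a statistical risk model.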
“The study adds to a growing body of knowledge that the eye can be used as a window to the rest of the body,” Pearse Keane, a researcher in ophthalmology and AI analysis not connected to the study, told The Verge. “Doctors have known for more than a hundred years that you could look in the eye and see signs of diabetes and high blood pressure. But the problem was manual assessment: the manual delineation of the vessels by human experts.” The use of machine learning, says Keane, can overcome this challenge.
Using AI to diagnose disease from eye scans has proven to be one of the fastest-developing fields of machine learning medicine. The first ever AI diagnostic device approved by the FDA was used to screen for eye disease, and research suggests AI can detect a range of ailments in this way, from diabetic retinopathy to Alzheimer’s (Keane’s own area of research). Tools applying these findings are in various stages of development, but questions do remain about the reliability and universality of their diagnoses.
This recent study, carried out by a team from St George’s, University of London, was tested only on eye scans from white patients, for example. The team sourced its test data from the UK Biobank, a database that happens to be 94.6 percent white (reflecting the UK’s demographics for the age range of patients included in the Biobank). Such biases would have to be addressed in the future to ensure any diagnostic tool is equally accurate across ethnicities.
The researchers compared the results from their software, named QUARTZ (an inventive acronym derived from the phrase “QUantitative Analysis of Retinal vessels Topology and siZe”), with 10-year risk predictions produced by the standard Framingham Risk Score (FRS). They found the two methods had “comparable performance.”
The big challenge, says Keane, is taking this sort of work from “code to clinic.” Who can turn this sort of research into a diagnostic tool, he asks: would it be the UK’s National Health Service (NHS) or a company spun out of the university? And what level of performance will regulators require before they approve the software’s use? “At what point do we say ‘let’s put a fork in it, we’re done,’ and make it into a commercial product?”