The human driving AI in ophthalmology

February 9, 2025 Drew Jones

Ophthalmologist Professor Pearse Keane, professor of artificial medical intelligence at University College London (UCL), sat down with Drew Jones to discuss how he ended up sharing a million optical coherence tomography (OCT) scans with Google’s DeepMind, how he sees AI’s future in ophthalmology and why he gave away years of his work for free.

 

Born and raised in Dublin, Professor Pearse Keane graduated from medical school at University College Dublin in 2002 before completing his initial ophthalmology training in Cork and Waterford in Ireland’s south. “By then I knew I wanted to become a clinical academic and I was most interested in ophthalmic imaging. OCT was just starting to become the new dominant theme and it captured my imagination,” he says.

 

Prof Keane resolved to go to the States to do research, landing a place at Los Angeles’ Doheny Eye Institute, now affiliated with the University of California, Los Angeles (UCLA). There he worked with some of the world leaders in OCT, including some of the inventors of the technology. However, pursuing an academic career in the US would mean getting into ophthalmology residency training, which Prof Keane says is very difficult for a foreign graduate. So a chance encounter with fellow countryman Dr Tom Flynn at the 2008 Association for Research in Vision and Ophthalmology conference in Florida proved pivotal. “Tom was working in Moorfields and doing a PhD at UCL. He told me about these new clinical academic training roles in the UK, whereby I could come and finish my training in Moorfields but have 50% of my time for research and 50% clinical. It was like a ‘sliding doors’ moment for me.”

 

Today, having been a consultant retina specialist at Moorfields for over a decade, Prof Keane’s research is funded by the UK government, so that balance is closer to 80% research and 20% clinical, he says. “When I came back from the US with a lot of training in interpreting OCTs, it was a very nice clinical niche for me. Even people more senior than me would ask me to interpret their scans.” But he started to read about the advances being made in artificial intelligence (AI), particularly for image recognition and speech translation, so he set out to look for collaborators to develop an AI system in the university setting. He was met with shoulder shrugs.

 

Professor Pearse Keane

 

“At that time, all of the AI expertise was being snapped up by Facebook, Google and Microsoft. If I sent a message to Bill Gates, it wouldn't have a good chance of getting answered, so who could I contact? Then I read a profile of [British-American AI research laboratory] Google DeepMind in Wired magazine in July 2015 and so many things resonated with me.”

 

It turned out the mother of Mustafa Suleyman, one of DeepMind’s co-founders, was a nurse in the UK’s National Health Service (NHS) and two of the three DeepMind co-founders were UCL alumni, so Prof Keane bought a one-month premium LinkedIn subscription to send Suleyman a message. “I told him we’re doing 1,000 OCT scans per day at Moorfields and asked if we could work together to develop an AI system to identify people with sight-threatening macular disease. To my joy, he answered and a few days later I was in his office talking to him.”

 

At that point Google DeepMind hadn't done anything in health, says Prof Keane, although there had been other people looking at applying AI to retinal photographs. “The collaboration was driven by the fact that OCT happens to be the area I was interested in and which became the dominant modality. So it was just good timing because when the collaboration with Google DeepMind became public, they started getting 200 messages a week from doctors all around the world saying things like, ‘I'm a cardiologist, can we work on AI together?’.” Had he made his pitch any later, he says, he would have had no chance of being chosen.

 

Proof of concept

In 2016, Prof Keane led the collaboration, which involved sharing one million retinal images from Moorfields Eye Hospital with Google DeepMind. Two years later they published a paper in Nature Medicine showing their AI system could assess OCTs for macular diseases including age-related macular degeneration (AMD) and diabetic retinopathy (DR) with performance on par with the world’s leading ophthalmologists. “But now, six years later, the algorithm we developed is still not in routine use – we're still trying to translate that work so you can go into a hospital or an optometrist’s and have it assess your OCT.”

 

Building on this work, Prof Keane developed his own independent AI research group, eventually producing the RETFound (RETinal Foundation) model, one of the first foundation models in any medical specialty. Published in Nature in 2023, it now underpins a whole programme of oculomics research, he says. “It's essentially a model we feel is the cornerstone for other people around the world to build their models on. We’re proud we made it available open source and we're already seeing people fine-tuning it for use in China and Brazil.”
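
To make the fine-tuning he describes concrete: a foundation model like RETFound supplies a pretrained image encoder, and groups elsewhere attach and train a small task-specific head on their own labelled scans. Below is a minimal, illustrative PyTorch sketch of that recipe – the checkpoint path, class count and data loader are placeholders, not RETFound’s actual packaging.

```python
# Illustrative sketch only: fine-tuning a pretrained retinal foundation
# model (a RETFound-style ViT encoder) for a downstream classification task.
# The checkpoint path, class count and data loader are hypothetical.
import torch
import torch.nn as nn
import timm

NUM_CLASSES = 2  # e.g. referable vs non-referable macular disease

# RETFound's published weights are a ViT-Large/16 encoder trained with
# masked autoencoding; here we assume a compatible checkpoint saved locally.
model = timm.create_model("vit_large_patch16_224", num_classes=NUM_CLASSES)
state = torch.load("retfound_encoder.pth", map_location="cpu")  # placeholder path
model.load_state_dict(state, strict=False)  # the new head stays randomly initialised

# Freeze the encoder and train only the classification head first,
# a common recipe when labelled medical data is scarce.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("head")

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3
)
criterion = nn.CrossEntropyLoss()

def train_epoch(loader):
    """One pass over a loader yielding (batch, 3, 224, 224) image tensors."""
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```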

 

Closer to home, an algorithm from Prof Keane’s UCL group is about to make a practical impact. In October 2024, the Lions Eye Institute in Australia announced Lions Outback Vision had been awarded AU$5 million in funding from the Western Australian Government’s Pilbara healthcare initiative, The Challenge. This will support the deployment of Australia’s first mobile retinal camera with fully integrated AI. The camera will screen for and provide early detection of eye diseases such as DR in the remote Pilbara region, primarily populated by Indigenous Australians.

 

Obstacles to adoption

 

When you publish a proof of concept in a good journal, what you've got is essentially experimental code and a dataset which you've shown works in some way – an ‘in silico’ validation, explains Prof Keane. “But to use that in direct clinical care, it has to become a medical device, which means the experimental code has to be rewritten in a more robust way, where you know every line of code is reliable and documented.” That has to be supported by clinical evidence, which requires studies in different regions, all of which feeds into regulatory clearance, whether FDA clearance in the US or the CE mark in Europe, he says.

 

There are currently about 500 medical AI systems making their way through the FDA approval process, says Prof Keane. “That takes a lot of technical and regulatory expertise, years and thousands of pages of paperwork and a lot of money. Then you're going to need a company to distribute and market the device, all of which is going to be hard if you're an NHS hospital where your bread and butter is caring for patients, not providing customer support for a piece of software being used in Sheffield!” There are also questions of legal liability: “What happens if someone goes blind in Brazil and they want to sue somebody? You can't have that in the hospital.”

 

This leads to the question of whether to license the technology to an established ophthalmic imaging company or an electronic health record company, or to set up your own company and commercialise it yourself, he says. “And who's going to pay for it; what's your business model – do you charge a click fee every time it's used or a subscription? Do you give it away for free? The NHS is not going to pay for it unless you have pretty strong evidence that it leads to better health outcomes or economic benefits. So everyone is trying to figure out the business model for these technologies.”

 

It also raises the issue of making the AI agnostic to the hardware or software it runs on. “At a minimum, any big company taking on proprietary software is going to want to focus on their own machine first and potentially do the others further down the road. Even if they do, there are regulatory nuances: if you claim your software will run on any OCT machine, the regulators will want evidence for every existing OCT machine. And the truth of the matter is these AI systems are quite brittle and won't necessarily work on different systems.”

 

So, as it stands, there are a limited number of AI systems in eyecare actually available for clinical use, most of which are for DR screening of retinal photographs, says Prof Keane. “Some systems can look at OCT scans and make fluid measurements to help guide ophthalmologists giving intravitreal injections. But imagine you develop blurred vision in your left eye and when your optometrist does an OCT scan, the AI tells them you've got something like wet AMD. That’s what our 2018 paper was aimed at – trying to highlight those people with the most sight-threatening macular disease.”

 

The prospect of AI screening huge numbers of patients to free up eyecare practitioners’ time is what tends to dominate the headlines. But there are vital, albeit less sexy, potential roles for AI in eyecare, he says. “Glaucoma specialists are interested in the idea that AI can make a better prediction of which patients will progress badly and which will be okay. And if you're an optometrist you might want to know which patients, based on their retinal photograph, should be getting a visual field test done.”

 

For both patients and practitioners, large language models (LLMs) like ChatGPT are also very interesting, he says. “I would strongly advise against anybody doing this, but you could type into ChatGPT: ‘My patient is a 46-year-old white Irishman from Dublin. He has sudden loss of vision in his left eye for three days. It's not painful. There are retinal haemorrhages. What is the differential diagnosis?’ But I bet you, if you did that, it would give you a very good differential diagnosis!”
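
For readers curious how such a query looks outside the chat interface, here is a minimal sketch using the OpenAI Python SDK – offered purely as illustration, given Prof Keane explicitly advises against using a general chatbot this way. The model identifier is an assumption, and no real patient data should ever be sent to such a service.

```python
# Illustration only: Prof Keane advises against using a general-purpose
# chatbot for diagnosis. This simply shows the quoted prompt sent via the
# OpenAI Python SDK; the model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "My patient is a 46-year-old white Irishman from Dublin. He has sudden "
    "loss of vision in his left eye for three days. It's not painful. "
    "There are retinal haemorrhages. What is the differential diagnosis?"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```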

 

Another possible application is for patients who receive a letter full of technical jargon from their ophthalmologist; they could upload the text and ask the large language model to summarise it in, say, 50 words for someone with a reading age of 14, explains Prof Keane. “Or imagine a consultation with your optometrist or ophthalmologist, or even your bank manager. If there's a recording of the conversation, the AI can summarise it and you can ask it to explain something that has been said. A future model might explain, say, a cataract surgery procedure, using an interactive element. The patient could ask about the risks or how soon they can go swimming post-surgery.”
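
The letter-summarising use case he describes maps onto the same API. A hypothetical sketch follows, with a placeholder file name and the same assumed model; any real deployment would need to handle patient confidentiality properly.

```python
# Hypothetical sketch of the letter-rewriting use case: summarise a
# jargon-heavy clinic letter in about 50 words for a reading age of 14.
# The file name and model identifier are placeholders.
from openai import OpenAI

client = OpenAI()

with open("clinic_letter.txt") as f:  # placeholder file
    letter = f.read()

instruction = (
    "Summarise this ophthalmology clinic letter in about 50 words "
    "for a reader with a reading age of 14:\n\n"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model identifier
    messages=[{"role": "user", "content": instruction + letter}],
)
print(response.choices[0].message.content)
```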

 

Limitations and misconceptions

 

People need to understand AI is not magic, Prof Keane says; it is just advanced statistical modelling that is strong at tasks like image classification or speech recognition. “Neither does it have initiative; for the foreseeable future it won’t be able to spot patterns other than the ones it’s been taught to recognise. People are used to using Amazon or Spotify, where listening to certain music will generate suggestions of similar artists, which can change depending on what you listen to next. But once medical AI models are trained, they're locked – they don't learn ‘on the job’.”

 

Although the ideal would be a single global foundation model, it might turn out that an AI tool developed for, say, White Irish people might not work well for Indigenous Australians, he says. “It might turn out that it’s better just to have an Irish model or a New Zealand model – or even a Dublin and an Auckland model – but we don't know the answer to that yet.”

 

However, growing the already immense dataset of OCT images might provide AI’s most exciting and unpredictable possibilities, Prof Keane says. “RETFound was trained on 1.6 million images and the evidence suggests that if we can get to 10 million images, we’ll start to see emergent capabilities – things you didn't quite expect, that make it more powerful than you imagined. That's what we're interested in exploring at the moment.”