Analytics
Artificial intelligence makes strides as an intelligent assistant
October 30, 2019
Canadian researchers are digging deep to help solve an age-old problem: diagnostic error. By applying recent advances in artificial intelligence (AI) – including machine learning, deep networks and cutting-edge computer vision algorithms – they are working to automate the process of arriving at medical consensus, training machines to recognize thousands upon thousands of digital images so they can quickly and accurately serve as intelligent assistants.
“I would never have dreamed of saying this three or four years ago, but we have the historic opportunity to eliminate diagnostic error,” said Hamid Tizhoosh, director of the University of Waterloo Knowledge Inference in Medical Image Analysis Lab (KIMIA Lab) in Waterloo, Ontario.
Established in 2013, the lab is at the forefront of medical image search. The goal is to extract information that not only supports speedy and accurate diagnosis of disease, but also establishes “new quality assurance based on the mining of collective knowledge and wisdom.”
Whereas other AI solutions in healthcare are pursuing prediction, segmentation, visualization or classification, the KIMIA Lab team is focused solely on using AI for image search. As Tizhoosh explained, the process involves taking current diagnostic images and searching for similar past cases in large databases of digital images.
“What doctors do is they try to get a second opinion, which is not easy, which is expensive, which is time-consuming, and not every hospital can do it for every case,” said Tizhoosh. “What we want to do with image search is to provide second opinions computationally, virtually; the pathologist shows us an image and we go in and find the top five, 10 or 20 similar cases and bring them back with corresponding metadata, such as radiology reports.”
The main rationale behind the work is that in medicine, nothing can be better than the collective wisdom, he added. If the majority of cases matched by the computer indicate that a biopsy shows adenocarcinoma, for example, the treating physician can be confident in that diagnosis.
They can also access historical reports for each case that is matched, using the information to validate their own knowledge and arrive at a confident decision about diagnosis and treatment. “And the beauty here is that AI is just an assistant,” stated Tizhoosh.
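In software terms, this retrieve-and-vote workflow resembles content-based image retrieval followed by a majority vote over the retrieved diagnoses. The Python sketch below illustrates the idea, assuming each image has already been reduced to a numeric feature vector (an “embedding”) by some upstream model; the function names and toy archive are illustrative, not KIMIA Lab’s actual code.

```python
from collections import Counter
import numpy as np

def find_similar_cases(query_vec, archive_vecs, archive_labels, k=10):
    """Return the indices and diagnoses of the k archived cases whose
    feature vectors are most similar (by cosine) to the query image."""
    sims = archive_vecs @ query_vec / (
        np.linalg.norm(archive_vecs, axis=1) * np.linalg.norm(query_vec)
    )
    top_k = np.argsort(sims)[::-1][:k]
    return top_k, [archive_labels[i] for i in top_k]

def consensus_diagnosis(retrieved_labels):
    """Majority vote over the retrieved cases' diagnoses."""
    label, count = Counter(retrieved_labels).most_common(1)[0]
    return label, count / len(retrieved_labels)

# Toy example: one query embedding against a small synthetic archive.
rng = np.random.default_rng(0)
archive = rng.normal(size=(1000, 128))              # 1,000 archived cases
labels = list(rng.choice(["adenocarcinoma", "benign"], size=1000))
query = rng.normal(size=128)

_, votes = find_similar_cases(query, archive, labels, k=10)
print(consensus_diagnosis(votes))                   # e.g. ('benign', 0.6)
```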
Funded in part by an Ontario Research Fund: Research Excellence grant, a federal innovation fund grant and several public and private sponsors, the 20-member lab is intent on achieving computational consensus for histopathology. The overarching vision is to create a machine for biopsy samples that works in a similar fashion to current blood analysis machines: a sample goes in and after a minute, a full diagnostic report comes out.
For now, research goals are more modest, said Tizhoosh. After purchasing the equipment needed to download and store 30,000 publicly available biopsy samples from the U.S. National Cancer Institute (equivalent to more than 20 million ordinary images), KIMIA researchers and their ORF partner, Huron Digital Pathology, recently completed a major validation study. It established that if enough images per disease category are available – roughly 5,000 patients – they can achieve close to 100 percent accuracy in image search.
Access to publicly available data sets or biorepositories is key to advancing their work. In addition to cancer, the lab’s findings can be applied to more than 6,000 diseases and multiple diagnostic imaging modalities, including X-ray, MRI, CT scan and ultrasound.
Earlier this year, the KIMIA Lab partnered with University Health Network (UHN) and Toronto-based Vector Institute, an independent, not-for-profit research institute focused on leading-edge machine learning, to use AI to enhance radiology diagnoses.
The UHN project is the second to be announced as a Vector Institute Pathfinder Project – one intended to demonstrate the positive impact of a specific health AI application within 12 to 18 months. It is aimed at adding an AI-enabled detection feature to Coral Review, a quality and educational tool created by UHN for performing peer review and peer learning in imaging-based medical departments such as radiology, cardiology and pathology.
Coral Review is currently in use at 20 hospitals in Ontario and is also the chosen peer review software for the Hospital Diagnostic Imaging Repository Services (HDIRS), an independent, not-for-profit corporation responsible for operating two of the province’s four diagnostic imaging repositories.
Hospitals and imaging centres that connect to HDIRS can use the peer review service. In its current form, the software facilitates the retrospective, random, anonymous and structured review of cases.
Since the current process is manual, sites typically configure the scope of their reviews to between 2 and 4 percent of all cases. “By applying AI technologies, our goal is to make it possible to apply a quality assurance process for all cases,” said Leon Goonaratne, Senior Director at UHN Digital.
As a starting point, three KIMIA Lab researchers are working with Goonaratne’s team, as well as radiologists from JDMI (the Joint Department of Medical Imaging, spanning UHN, Sinai Health System and Women’s College Hospital), to detect the occurrence of a pneumothorax (collapsed lung) in chest X-ray images. The condition represents a technical challenge because some cases are difficult to see, meaning doctors can miss small collapses.
“We felt that the detection of pneumothorax was a great place to start for our AI project with the Vector Institute. It allows us to focus on a very specific image type, on an indication or diagnosis that we see frequently, but can also be missed,” said Goonaratne, noting that the intent is to enhance the interpretation of the human eye, since unlike humans, “the computer can detect very subtle differences at the pixel level, very easily.”
UHN will be supplying anonymized medical images and reports required by the researchers. A subset of these images will also be annotated by JDMI radiologists. Once the detection tool is integrated into Coral Review, the JDMI radiologists will play an integral role in helping to further train the computer algorithm by providing feedback.
Working with publicly available data, researchers are already achieving 85 percent accuracy and intend to push it closer to 100 percent once they start to work with the well-curated data sets provided by UHN.
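For readers curious how such a detector is typically built, the sketch below shows one common recipe: fine-tuning an ImageNet-pretrained convolutional network on labeled chest X-rays with PyTorch. The folder layout, class split and hyperparameters are assumptions for illustration; the article does not describe the JDMI/KIMIA model itself.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Fine-tune a pretrained CNN as a binary pneumothorax detector.
# Assumes X-rays sorted into "xrays/train/pneumothorax" and
# "xrays/train/normal" (hypothetical paths).
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.Grayscale(num_output_channels=3),  # X-rays are single-channel
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("xrays/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)     # two output classes
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, targets in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
```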
Laurent Moreno, Vector Institute Director of Health AI Applications, called the project ground-breaking. “What is amazing about this project, is that AI enables radiologists and clinicians to receive a second opinion based on thousands of additional diagnoses,” he said. “It will be a game-changer in the field because the technology is at the service of the clinician and could speed up the time a radiologist spends, enabling people to be treated faster. It will be particularly useful in unusual cases, with the potential of confirming diagnostics and showing which treatments are likely to produce the best outcomes.”
The Pathfinder Projects are part of Vector’s overall health strategy. The aim is to support and enable health AI research with a focus on three streams: world-class research, widespread application and analysis-ready data. Moreno said organizations at both the provincial and national level are getting better at providing health data assets, and the institute is working to facilitate appropriate access to them.
A second Pathfinder Project is under way at St. Michael’s Hospital in Toronto in conjunction with the Li Ka Shing Centre for Healthcare Analytics Research and Training (LKS-CHART). LKS-CHART researchers are working to develop an early warning system for general internal medicine.
The system will use AI to process regular feeds of health data and predict when a patient needs to be transferred to the Intensive Care Unit. Accurately predicting when patients need to be transferred 12 to 24 hours earlier may allow more time for potentially life-saving early-intervention care, decreasing rates of cardiac arrest and mortality.
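Such a system is usually framed as risk scoring: recent vitals and lab values are summarized into features, and a model estimates the probability of an ICU transfer within a given window. A minimal scikit-learn sketch follows; the feature names, synthetic data and alert threshold are assumptions for illustration, not LKS-CHART’s actual system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy features summarizing a patient's recent record, e.g.
# [heart_rate, resp_rate, systolic_bp, spo2, lactate] (standardized).
# Label: 1 if the patient was transferred to the ICU within 24 hours.
rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 5))
y_train = rng.integers(0, 2, size=500)

model = LogisticRegression().fit(X_train, y_train)

def icu_alert(recent_vitals, threshold=0.7):
    """Flag a patient for early review when predicted risk is high."""
    risk = model.predict_proba(np.asarray(recent_vitals).reshape(1, -1))[0, 1]
    return risk, risk >= threshold

risk, alert = icu_alert([1.8, 2.1, -1.5, -2.0, 2.4])
print(f"24h ICU transfer risk: {risk:.2f}, alert: {alert}")
```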
A third project, in partnership with Public Health Ontario, is using computer vision to automatically identify blacklegged ticks, which may carry the bacteria that cause Lyme disease. The project’s longer-term goal is an app anyone can use to photograph a tick with their smartphone and obtain a rapid medical assessment of high-risk tick bites.
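On the inference side, such an app boils down to loading a trained classifier and scoring each uploaded photo. The sketch below assumes a hypothetical saved PyTorch model and class list; Public Health Ontario’s actual pipeline is not described in this article.

```python
import torch
from torchvision import transforms
from PIL import Image

# Load a (hypothetical) trained tick classifier and score one photo.
model = torch.load("tick_classifier.pt")
model.eval()

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
classes = ["blacklegged_tick", "other_tick", "not_a_tick"]  # assumed labels

img = tfm(Image.open("user_photo.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]
print({c: round(float(p), 3) for c, p in zip(classes, probs)})
```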
“I think we are reaching the stage where people are getting a better understanding of the potential of machine learning. When they understand the potential can be very positive for the field of healthcare and for patients, in general it’s well received,” said Moreno. “AI should not be viewed as a way to replace humans. Instead, it provides cutting edge tools to the clinicians, to make them beneficiaries of the technology.”
Maryam Sadeghi, CEO and co-founder of Vancouver-based start-up MetaOptima Technology Inc., said implementing AI in real-life clinical settings comes with challenges, but agreed there is room today for machines to serve as intelligent assistants. She cautioned early adopters to be “fair to the machine.”
“Part of being fair to the machine is we all make difficult decisions every day and we make mistakes; actually, in medicine we learn from mistakes,” said Sadeghi, pointing out that human accuracy is not 100 percent. “So many cancers are missed every day in a clinical setting, but if the machine misses one, it’s going to end up the top story on the national news,” she said.
The company she founded with her husband, Majid Razmara, is working to develop an intelligent dermatology solution called DermEngine and a mobile dermatoscope called MoleScope. DermEngine is intelligent software for the imaging, documentation and diagnosis of skin conditions, including cancer. MoleScope is a skin magnifier that easily attaches to a smartphone to capture high-quality skin images.
According to MetaOptima, DermEngine should be thought of as a “smart assistant – a silicon-made colleague capable of providing educated insight on a given case based upon the collective knowledge obtained from a large number of diverse clinical cases.” As an imaging and analytics system for capturing and analyzing images of the skin, hair and nails, it helps to manage a busy dermatology office. The AI component is intended as an evidence-based assistant to support clinical decisions.
Recently, DermEngine was shown to classify skin lesion images with higher accuracy than experienced human professionals. However, initial product implementations aren’t focused on diagnoses. Instead, the company is focusing on what Sadeghi calls intelligent e-triage, using the machine as a smart assistant to filter urgent and non-urgent cases so that urgent cases such as melanoma rise to the top.
“You’ll have a human expert who is going to review everything, but this is expediting appointments for patients who need to be seen first,” she explained.
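Mechanically, e-triage can be as simple as attaching a model’s urgency score to each referral and sorting the queue so high-risk cases surface first, while every case still reaches a human reviewer. A short illustrative Python sketch (the scores would come from a model such as DermEngine’s; here they are hard-coded):

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Referral:
    urgency: float                     # model's risk score, 0..1
    case_id: str = field(compare=False)

def triage(referrals):
    """Order the queue most-urgent first; nothing is auto-diagnosed,
    and every case still goes to a human expert."""
    return sorted(referrals, reverse=True)

queue = [
    Referral(0.12, "case-001"),
    Referral(0.91, "case-002"),        # suspected melanoma rises to the top
    Referral(0.45, "case-003"),
]
for r in triage(queue):
    print(r.case_id, f"{r.urgency:.2f}")
```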
The company has an office in Australia and is about to open in the United States, as well. Here in Canada, MetaOptima is one of several partners working with Change Healthcare to develop a cloud-based Dermatology Point-of-Care Intelligent Network, one of the first cohort of projects to be announced by Canada’s Digital Technology SuperCluster.
The $9.7-million project is aimed not only at expediting urgent cases through e-triage, but also at creating algorithms for clinical decision support, using real-life clinical data.
According to the project overview, one in six Canadians will develop skin cancer during their lifetimes, at a cost to the healthcare system of more than $500 million. Due to a severe shortage of dermatologists, Canadian wait times can be six months or more, but melanoma can progress in as little as six weeks, with survival rates declining from 98 percent to 15 percent if treatment is delayed.
Under the SuperCluster project, B.C. primary care doctors will send dermatology e-referrals to Providence Health Care for e-triage. “So, this is not about diagnostics, it’s not going to read these images and say: ‘This is melanoma, this is not melanoma,’” explained Sadeghi. “It’s going to say, ‘Dr. X, you may want to look at these 200 cases before these 10,000 cases.’ When the system can save billions of dollars in the workflow, why should we jump to diagnostics?” she added.
In Waterloo, Tizhoosh and his team remain optimistic about the potential for intelligent image search to improve diagnoses. After working on the problem since 1993 with limited results in medical imaging, Tizhoosh is buoyed by recent breakthroughs in AI, including deep networks, and the rapid progress researchers are now making.
“We have been able to close the so-called semantic gap – the difference between what humans find similar and what computers find similar,” he said. “That’s why now, with audacity, I say we can eliminate diagnostic error. Not today, but one day, after we have access to a large enough archive of data to process it.”
His journey to solve the complex problem is a personal one that began when his beloved grandfather died of lung cancer. At the time, Tizhoosh left his engineering career to pursue PhD research on improving medical imaging for diagnosis, and he hasn’t looked back.
“This most likely will be the last big project of my career,” he said of his work with KIMIA Lab to provide computational consensus for the diagnoses of cancer and other diseases. “It’s quite personal when you work on this type of thing. It’s additional motivation to do your best.”