Evaluation of Deep Learning to Augment Image Guided Radiotherapy for Head and Neck and Prostate Cancers
- Ozan Oktay,
- Jay Nanavati,
- Anton Schwaighofer,
- David Carter,
- Melissa Bristow,
- Ryutaro Tanno,
- Gill Barnett,
- David Noble,
- Yvonne Rimmer,
- Rajesh Jena,
- Ben Glocker,
- Kenton O'Hara,
- Christopher Bishop,
- Javier Alvarez-Valle,
- Aditya Nori
JAMA
IMPORTANCE: Personalized radiotherapy planning depends on high-quality delineation of target tumors and surrounding organs at risk (OARs). This process puts additional time burdens on oncologists and introduces variability among both experts and institutions.
OBJECTIVE: To explore clinically acceptable auto-contouring solutions that can be integrated into existing workflows and used in different domains of radiotherapy.
DESIGN, SETTING, AND PARTICIPANTS: This quality improvement study used a multi-center imaging data set comprising 519 pelvic and 242 head and neck computed tomography (CT) scans from 8 distinct clinical sites and patients diagnosed with either prostate or head and neck cancer. The scans were acquired as part of treatment dose planning from patients who received intensity modulated radiation therapy between October 2013 and February 2020. Fifteen different OARs were manually annotated by expert readers and radiation oncologists. The models were trained on a subset of the data set to automatically delineate OARs and evaluated on both internal and external data sets. Data analysis was conducted October 2019 to September 2020.
MAIN OUTCOMES AND MEASURES: The auto-contouring solution was evaluated on external data sets, and its accuracy was quantified with volumetric agreement and surface distance measures. Models were benchmarked against expert annotations in an interobserver variability (IOV) study. Clinical utility was evaluated by measuring time spent on manual corrections and annotations from scratch.
RESULTS: A total of 519 participants’ (519 [100%] men; 390 [75%] aged 62-75 years) pelvic CT images and 242 participants’ (184 [76%] men; 194 [80%] aged 50-73 years) head and neck CT images were included. The models achieved levels of clinical accuracy within the bounds of expert IOV for 13 of 15 structures (eg, left femur, κ = 0.982; brainstem, κ = 0.806) and performed consistently well across both external and internal data sets (eg, mean [SD] Dice score for left femur, internal vs external data sets: 98.52% [0.50] vs 98.04% [1.02]; P = .04). The correction time of auto-generated contours on 10 head and neck and 10 prostate scans was measured as a mean of 4.98 (95% CI, 4.44-5.52) min/scan and 3.40 (95% CI, 1.60-5.20) min/scan, respectively, to ensure clinically accepted accuracy, whereas contouring from scratch on the same head and neck scans was observed to be 73.25 (95% CI, 68.68-77.82) min/scan for a radiation oncologist and 86.75 (95% CI, 75.21-92.29) min/scan for an expert reader, accounting for a 93% reduction in time.
CONCLUSIONS AND RELEVANCE: In this study, the models achieved levels of clinical accuracy within expert IOV while reducing manual contouring time and performing consistently well across previously unseen heterogeneous data sets. With the availability of open-source libraries and reliable performance, this creates significant opportunities for the transformation of radiation treatment planning.
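The abstract quantifies accuracy with volumetric agreement (Dice score) and surface distance measures. As a minimal sketch of how these two standard metrics are computed from binary segmentation masks (the function names and toy masks here are illustrative, not taken from the study's code):

```python
import numpy as np
from scipy import ndimage

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def mean_surface_distance(a: np.ndarray, b: np.ndarray,
                          spacing=(1.0, 1.0)) -> float:
    """Symmetric mean distance between the boundary voxels of two masks."""
    surface = lambda m: m & ~ndimage.binary_erosion(m)
    sa, sb = surface(a.astype(bool)), surface(b.astype(bool))
    # Distance from each surface voxel of one mask to the nearest
    # surface voxel of the other, in physical units given by `spacing`.
    da = ndimage.distance_transform_edt(~sb, sampling=spacing)[sa]
    db = ndimage.distance_transform_edt(~sa, sampling=spacing)[sb]
    return (da.sum() + db.sum()) / (len(da) + len(db))

# Toy 2D example: two 4x4 squares overlapping in a 3x3 region.
a = np.zeros((10, 10), dtype=bool); a[2:6, 2:6] = True
b = np.zeros((10, 10), dtype=bool); b[3:7, 3:7] = True
print(round(dice_score(a, b), 4))  # 2*9 / (16+16) = 0.5625
```

In practice both metrics are computed per organ in 3D, with `spacing` set to the CT voxel dimensions in millimeters so surface distances are physically meaningful.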
Download publication
InnerEye – Deep Learning
September 22, 2020
This is a deep learning toolbox to train models on medical images (or more generally, 3D images). It integrates seamlessly with cloud computing in Azure.
Project InnerEye: Augmenting cancer radiotherapy workflows with deep learning and open source
Medical images offer vast opportunities to improve clinical workflows and outcomes. Specifically, in the context of cancer radiotherapy, clinicians need to go through computed tomography (CT) scans and manually segment (contour) anatomical structures. This is an extremely time-consuming task that puts a large burden on care providers. Deep learning (DL) models can help with these segmentation tasks. However, more understanding is needed regarding these models’ clinical utility, generalizability, and safety in existing workflows. Building these models also requires techniques that are not easily accessible to researchers and care providers.
In this webinar, Dr. Ozan Oktay and Dr. Anton Schwaighofer will analyze these challenges within the context of image-guided radiotherapy procedures and will present the latest research outputs of Project InnerEye in tackling these challenges. The first part of the webinar will focus on a research study that evaluates the potential clinical impact of DL models within the context of radiotherapy planning procedures. The discussion will also include the performance analysis of state-of-the-art DL models on datasets from different hospitals and cancer types, and we’ll explore how they compare with manual contours annotated by three clinical experts.
The second part of the talk will introduce the open-source InnerEye Deep Learning Toolkit and how it can provide tools to help enable users to build state-of-the-art medical image segmentation models in Microsoft Azure. There will be examples illustrating step-by-step how the toolkit can be used in different segmentation applications within Azure Machine Learning (Azure ML) infrastructure. This includes model specification, training run analysis, performance reporting, and model comparison.
In this webinar, we will explore:
- The performance of DL segmentation models across images from multiple clinical sites and different radiotherapy domains, and how it compares with levels of inter-expert variability in radiotherapy contouring tasks.
- The potential clinical impact of such models in terms of time savings, by augmenting existing radiotherapy dose planning procedures.
- The features of the InnerEye Deep Learning Toolkit and how it can be used by developers to aid in building their own in-house medical image segmentation models from scratch.
- The potential benefits of Azure ML cloud integration and the end-to-end model development process in Azure ML.
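The toolkit workflow described above (model specification, training, and comparison) is built around declarative segmentation-model configurations. The sketch below illustrates that style of configuration in plain Python; all class, field, and dataset names are hypothetical and do not reflect the InnerEye Deep Learning Toolkit's actual API — consult its documentation on GitHub for real model configuration classes.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SegmentationConfig:
    """Illustrative (hypothetical) segmentation-model specification."""
    model_name: str
    ground_truth_ids: List[str]               # OAR structure names to delineate
    crop_size: Tuple[int, int, int] = (64, 192, 192)  # z, y, x patch size
    num_epochs: int = 120
    azure_dataset_id: str = ""                # dataset registered in Azure ML

    def validate(self) -> None:
        if not self.ground_truth_ids:
            raise ValueError("At least one structure must be specified.")
        if any(d <= 0 for d in self.crop_size):
            raise ValueError("Crop dimensions must be positive.")

# A head-and-neck configuration listing a few of the OARs from the study.
head_and_neck = SegmentationConfig(
    model_name="HeadAndNeck",
    ground_truth_ids=["brainstem", "spinal_cord", "parotid_l", "parotid_r"],
    azure_dataset_id="hn_ct_dataset",
)
head_and_neck.validate()
print(head_and_neck.model_name, len(head_and_neck.ground_truth_ids))
```

Keeping the model specification declarative like this is what lets the same definition drive local debugging runs and full Azure ML training jobs without code changes.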
Resource list:
- InnerEye (Project page)
- InnerEye on Github
- InnerEye JAMA (publication)
- Open-source announcement (MSR Blog)
- Research outcomes (MSR Blog)
- Tech Minutes – Project InnerEye
- Ozan Oktay (Researcher profile)
- Anton Schwaighofer (Researcher profile)
*This on-demand webinar features a previously recorded Q&A session and open captioning.
Explore more Microsoft Research webinars: https://aka.ms/msrwebinars