The Biomedical Imaging and AI cluster is focused on increasing engagement and collaboration between our members. As part of this effort, we will be hosting monthly research exchange lunches.
All are welcome to attend, but the focus will be on student and trainee engagement. Participants will be asked to present their research or a research question in a lightning-round format (3-5 minutes). This recurring webinar series will encourage knowledge exchange among cluster members and support research collaboration.
If you are interested in presenting or attending, please sign up!
When: Last Wednesday of the Month, 12pm-1pm
June 23, 2021 | 12pm - 1pm (PST)
This month's theme, Digital Pathology, features members from the Robotics and Control Laboratory (RCL) and the AI in Medicine (AIM) group.
Yanan Shao, PhD Candidate - supervised by Prof. Tim Salcudean
Title: Improving Prostate Cancer Classification in H&E Tissue Micro Arrays Using Ki67 and P63 Histopathology
Histopathology of Hematoxylin and Eosin (H&E)-stained tissue obtained from biopsy is commonly used in prostate cancer (PCa) diagnosis. Automatic PCa classification of digitized H&E slides has been developed before, but no attempts have been made to classify PCa using additional tissue stains registered to H&E. In this paper, we demonstrate that using H&E-, Ki67- and p63-stained (3-stain) tissue improves PCa classification relative to H&E alone. We also show that we can infer the PCa-relevant Ki67 and p63 information from the H&E slides alone, and that we can use it to achieve H&E-based PCa classification comparable to the 3-stain classification. Reported improvements are both in classifying benign vs. malignant tissue and in classifying low-grade (Gleason group 2) vs. high-grade (Gleason groups 3, 4, 5) cancer. Specifically, we conducted four classification tasks using 333 tissue samples extracted from 231 radical prostatectomy patients: regression-tree-based classification using either (i) 3-stain features (benign vs. malignant AUC = 92.9%) or (ii) real H&E features plus Ki67 and p63 features learned from H&E (AUC = 92.4%), as well as deep learning classification using either (iii) real 3-stain tissue patches (AUC = 94.3%) or (iv) real H&E patches plus Ki67 and p63 patches generated by a deep convolutional generative adversarial network (AUC = 93.0%). Classification performance was assessed with Monte Carlo cross-validation and quantified in terms of the area under the curve (AUC), Brier score, sensitivity, and specificity. Our results are interpretable and indicate that standard H&E classification could be improved by mimicking other stain types.
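The evaluation protocol described above — Monte Carlo cross-validation (repeated random train/test splits) scored with AUC and Brier score — can be sketched as follows. This is a minimal illustration: the synthetic dataset, split count, and tree depth are stand-ins, not the study's actual features or settings.

```python
# Sketch of Monte Carlo cross-validation with AUC and Brier score,
# the assessment protocol named in the abstract. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss

# Stand-in for 333 tissue-sample feature vectors with binary labels.
X, y = make_classification(n_samples=333, n_features=20, random_state=0)

aucs, briers = [], []
for seed in range(25):  # 25 independent random splits
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y
    )
    clf = DecisionTreeClassifier(max_depth=4, random_state=seed).fit(X_tr, y_tr)
    p = clf.predict_proba(X_te)[:, 1]  # probability of the malignant class
    aucs.append(roc_auc_score(y_te, p))
    briers.append(brier_score_loss(y_te, p))

print(f"AUC:   {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
print(f"Brier: {np.mean(briers):.3f} +/- {np.std(briers):.3f}")
```

Averaging over many random splits, rather than one fixed fold assignment, gives a variance estimate alongside each metric, which is the main appeal of Monte Carlo cross-validation for modest sample sizes like this one.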
Jeffrey Boschman, MASc student - supervised by Prof. Ali Bashashati
Title: Improving Deep Learning Models for Clinical Epithelial Ovarian Carcinoma Whole Slide Pathology Image Classification using Color Normalization
In order to provide the right treatment to a patient, a doctor needs to first diagnose the disease correctly. Although there are distinct subtypes of ovarian cancer, each with its own origin, features, aggressiveness, and treatment plan, many pathologists do not have the intensive training required to diagnose the subtype properly. This lack of specialists is part of the reason why ovarian cancer is the deadliest cancer of the female reproductive system in North America. Deep learning-based diagnostic models could supplement the pathology laboratory, but the color variation of hematoxylin and eosin (H&E)-stained tissues has presented a challenge for applications of artificial intelligence (AI) in digital pathology. In this study, we systematically investigate eight color normalization algorithms for AI-based classification of H&E-stained histopathology slides, in settings where images come from a single center and from multiple centers. Our results show that color normalization does not consistently improve classification performance when both training and testing data are from a single center. However, using four multi-center datasets covering two cancer types (ovarian and pleural) and different objective functions, we show that color normalization can significantly improve the classification accuracy of images from external datasets (ovarian cancer: 0.25 AUC increase, p = 1.6e-05; pleural cancer: 0.21 AUC increase, p = 1.4e-10). Furthermore, we introduce a novel augmentation strategy that mixes images color-normalized with three easily accessible algorithms; it consistently improves the diagnosis of test images from external centers, even when the individual normalization methods gave varied results.
We anticipate our study to be a starting point for the reliable use of color normalization in AI-based, digital-pathology-empowered diagnosis of cancers sourced from multiple centers, including improving an ovarian cancer subtype diagnosis model to achieve performance on par with specialist pathologists.
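To illustrate the kind of transform involved, here is a minimal per-channel mean/std stain-matching sketch in the spirit of Reinhard normalization. The real method operates in LAB colour space and the study compares eight algorithms; this NumPy-only version works directly on RGB for brevity and is a simplification for illustration, not the authors' pipeline.

```python
# Simplified Reinhard-style color normalization: shift each channel of a
# source tile to match the per-channel mean/std of a reference tile.
# Reinhard et al. apply this in LAB space; RGB is used here for brevity.
import numpy as np

def match_stain_stats(source, target):
    """Match per-channel mean/std of `source` to `target`.
    Both inputs are float RGB arrays scaled to [0, 1]."""
    out = np.empty_like(source, dtype=float)
    for c in range(source.shape[-1]):
        s_mu, s_sd = source[..., c].mean(), source[..., c].std() + 1e-8
        t_mu, t_sd = target[..., c].mean(), target[..., c].std()
        out[..., c] = (source[..., c] - s_mu) / s_sd * t_sd + t_mu
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(0)
tile = rng.uniform(0.3, 0.9, size=(64, 64, 3))       # stand-in source tile
reference = rng.uniform(0.5, 1.0, size=(64, 64, 3))  # stand-in reference tile
normalized = match_stain_stats(tile, reference)
```

The augmentation strategy mentioned in the abstract could then pool tiles normalized by several such algorithms into one training set, so the classifier never over-fits to any single center's staining statistics.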
January 27, 2021 - Multi-Scale Design Laboratory
This month features members of the Multi-Scale Design Laboratory under the supervision of Dr. Hongshen Ma in the Department of Mechanical Engineering. They are also part of the larger Centre for Blood Research lab group in the Faculty of Medicine.
Erik Lamoureux (MASc student) - Image Classification of Red Blood Cell Deformability Using Deep Learning
Samuel Berryman (PhD student) - Image Based Cell Phenotyping Using Deep Learning
Alec Xu (MASc student) - Macrophage Phenotyping Using Machine Learning
Ryan Lee (Research Assistant) - Deep Learning Based Automated Sperm Identification for Non-obstructive Azoospermia Patients
February 26, 2020
Speaker: Ben Cardoen
Supervisor: Ghassan Hamarneh
Title: Unbiased adaptive compressed signal density estimation in astigmatic dSTORM
Speaker: Jesse Chao
Supervisors: Loewen / Roskelley
Title: Automate phenotype scoring with deep learning
Speaker: Rohan Abraham
Supervisor: Calum MacAulay
Title: Machine and deep learning approaches for classification of sub-cm lung nodules
April 29, 2020
Speaker: Adrian Tanskanen
Supervisor: Pierre Lane
Title: The impact of index dips in double-clad fibers and endoscopic optical coherence tomography
Speaker: Jeanie Malone
Supervisor: Pierre Lane
Title: Endoscopic optical coherence tomography (OCT) and autofluorescence imaging (AFI) of ex vivo fallopian tubes
May 27, 2020
Speaker: Delaram Behnami
Supervisor: Purang Abolmaesumi
Title: AI-driven Echo for Cardiac Diagnosis and Intervention
Control of symmetry-breaking in mammalian developmental models
Daniel Aguilar-Hidalgo, PhD, Research Associate, School of Biomedical Engineering and Michael Smith Laboratories, UBC
Joel Östblom, PhD, Post-Doctoral Teaching and Learning Fellow, Master of Data Science, UBC
During embryonic development, cells divide and differentiate over space and time as instructed by their environment to create complex, functionally diverse tissues. At early stages, embryos undergo a symmetry-breaking event that results in the formation of the anterior-posterior body axis. The mechanism for this breaking of symmetry is still poorly understood. Recent studies have started to reveal how to instruct pluripotent stem cell populations to undergo similar development-like organizational events outside the body. Here, we use a combination of experimental, theoretical, and computational work to show, for the first time in micropatterned stem cell colonies, symmetry-breaking events in cell fate organization. Our results show that system size, both in terms of colony geometry and cell number at the time of differentiation, is critical for polarized cell fate organization, suggesting the existence of developmentally relevant system sizes. These insights into how to control and quantify fate organization in cell populations can advance both our understanding of developmental processes and our ability to create complex tissues with regenerative engineering.
October 28, 2020
Special Guest: Research Staff from STTARR – Innovation Centre for Advanced Preclinical Imaging & Radiation Research
Speaker: Dr. Trevor McKee, Image Analysis Manager, STTARR
Title: From pixels to cells: Application of computer vision and machine learning methods to perform quantitative analysis of multi-modality biomedical imaging data, an overview of the STTARR Core Facility’s analysis workflow
The combination of medical imaging and pathological assessment of biopsy specimens plays a key role in the detection, diagnosis, and monitoring of treatment of diseases, including cancer. In particular, new cancer therapies such as immunotherapy hold promise for achieving robust clinical responses, but also demand improvements in imaging methods to assess immune activity within tumors. Currently, many existing multi-modality image analysis methods rely heavily on manual annotations for the extraction of quantitative readouts. This limits both the analytical throughput and the complexity of the analysis one can perform – for every hour of imaging time, several additional hours of tedious manual labor are required to extract even rudimentary metrics. Recent advances in computer vision-based techniques, including the use of machine and deep learning methods, have shown tremendous promise for solving previously intractable image segmentation and classification problems.
This presents an opportunity for the development of image analysis pipelines that use these technical advances to solve known biomedical imaging challenges. We have developed a series of image analysis pipelines for multiplex image segmentation that draw on several interdisciplinary collaborations between biological scientists, pathologists, and biomedical image analysis specialists. For example, we have recently used biomedical domain knowledge to develop a customized cellular segmentation methodology for Imaging Mass Cytometry, to identify invading immune cells of various biomarker-identified types and measure their density and proximity to blood vessels (a marker of active invasion) in multiple sclerosis lesions in the brain. A robust image segmentation and classification pipeline permits us to move these complex datasets from the format of “pixels embedded in spatial coordinates” towards the format of a “single-cell proteomic” dataset that also permits spatial relationships between markers or tissue regions to be queried.
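The proximity readout described above — the distance from each immune cell to its nearest blood vessel — reduces to a nearest-neighbour computation over segmented centroids. A minimal sketch with synthetic coordinates (all numbers and thresholds here are illustrative, not from the STTARR pipeline):

```python
# Sketch: per-cell distance to the nearest blood vessel, the kind of
# spatial readout described for immune-cell invasion analysis.
# Coordinates are synthetic; a real pipeline would use segmented centroids.
import numpy as np

rng = np.random.default_rng(42)
immune_cells = rng.uniform(0, 1000, size=(200, 2))  # (x, y) centroids in µm
vessels = rng.uniform(0, 1000, size=(15, 2))        # vessel centroids in µm

# Pairwise distances via broadcasting (brute force; a k-d tree scales
# better when a whole slide yields millions of cells).
d = np.linalg.norm(immune_cells[:, None, :] - vessels[None, :, :], axis=-1)
nearest = d.min(axis=1)  # one distance per cell

print(f"median cell-to-vessel distance: {np.median(nearest):.1f} µm")
print(f"cells within 50 µm of a vessel: {int((nearest < 50).sum())}")
```

Once each cell carries a distance (and marker intensities), the "single-cell proteomic" table mentioned above is simply these per-cell rows, and spatial queries become ordinary filters on that table.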
Likewise, we have illustrated a number of methods for single-cell and per-vessel analysis of hypoxia and proliferation gradients within solid tumor tissue sections. Development of single-cell “tissue cytometric” methods permits the in-depth study of spatial relationships that would be difficult or impossible to quantify with more rudimentary whole-tissue analysis approaches. The spatial relationships between tissue components are tightly integrated with tissue metabolism and biomedical transport phenomena, so the study of these relationships permits us to better model tissue physiology in silico, through more accurate measurement of relevant physiological parameters.
This improved tumor pathophysiological understanding can permit the development of new therapeutic approaches, such as the use of metformin to improve oxygenation during radiotherapy, a finding initially established in our preclinical imaging facility and currently being investigated in clinical trials. Finally, clinical validation and deployment of analytical methodologies developed in the laboratory, along with robust methods for quality control and validation of analytical outputs, allow us to move these promising research tools towards the ultimate goal of clinically approved diagnostic algorithms and medical devices. Such efforts promise to deliver a profound positive impact on the healthcare system, reducing tedious manual steps like counting cells or reviewing scans by optimally combining validated automated methods with clinical wisdom and experience.
About the Speaker
Trevor McKee received his Ph.D. in Biological Engineering from the Massachusetts Institute of Technology, in the laboratory of Dr. Rakesh Jain, where he focused on developing methods to overcome barriers to drug and gene therapeutic delivery in tumors, and pioneering the application of multiphoton imaging methods to preclinical cancer models. Dr. McKee continued his training as a postdoctoral fellow at the Ontario Cancer Institute in the laboratory of Dr. Rama Khokha, utilizing multi-modality imaging techniques to study animal models of heart and liver disease and cancer. Dr. McKee is currently developing a preclinical imaging pipeline for cancer phenotyping and drug development at the STTARR Innovation Centre. His work at STTARR includes collaborations with Pfizer Oncology and Molecular Insights Pharmaceuticals through industry partnered research programs to test novel drug and molecular imaging agents.