Complete these steps before you reach out to a faculty member!
- Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites.
- Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step while others match successful applicants with faculty members within the first year of study. This is either indicated in the program profile under "Admission Information & Requirements" - "Prepare Application" - "Supervision" or on the program website.
- Identify specific faculty members who are conducting research in your specific area of interest.
- Establish that your research interests align with the faculty member’s research interests.
- Read up on the faculty members in the program and the research being conducted in the department.
- Familiarize yourself with their work, read their recent publications and past theses/dissertations that they supervised. Be certain that their research is indeed what you are hoping to study.
- Compose an error-free and grammatically correct email addressed to your specifically targeted faculty member, and remember to use their correct titles.
- Do not send non-specific, mass emails to everyone in the department hoping for a match.
- Address the faculty members by name. Your contact should be genuine rather than generic.
- Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions. Be sure to craft compelling answers to these questions.
- Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest.
- Demonstrate that you are familiar with their research:
- Convey the specific ways you are a good fit for the program.
- Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting.
- Be enthusiastic, but don’t overdo it.
G+PS regularly provides virtual sessions that focus on admission requirements and procedures and tips on how to improve your application.
ADVICE AND INSIGHTS FROM UBC FACULTY ON REACHING OUT TO SUPERVISORS
These videos contain some general advice from faculty across UBC on finding and reaching out to a potential thesis supervisor.
Graduate Student Supervision
Doctoral Student Supervision
Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.
Network architectures and training strategies are crucial considerations in applying deep learning to neuroimaging data, but attaining optimal performance still remains challenging, because the images involved are high-dimensional and the pathological patterns to be modeled are often subtle. Additional challenges include limited annotations, heterogeneous modalities, and sparsity of certain image types. In this thesis, we have developed detailed methodologies to overcome these challenges for automatic feature extraction from multimodal neuroimaging data to perform image-level classification and segmentation, with applications to multiple sclerosis (MS). We developed our new methods in the context of four MS applications. The first was the development of an unsupervised deep network for MS lesion segmentation that was the first to use image features that were learned completely automatically, using unlabeled data. The deep-learned features were then refined with a supervised classifier, using a much smaller set of annotated images. We assessed the impact of unsupervised learning by observing the segmentation performance when the amount of unlabeled data was varied. Secondly, we developed an unsupervised learning method for modeling joint features from quantitative and anatomical MRIs to detect early MS pathology, which was novel in the use of deep learning to integrate high-dimensional myelin and structural images. Thirdly, we developed a supervised model that extracts brain lesion features that can predict conversion to MS in patients with early isolated symptoms. To efficiently train a convolutional neural network on sparse lesion masks and to reduce the risk of overfitting, we proposed utilizing the Euclidean distance transform for increasing information density, and a combination of downsampling, unsupervised pretraining and regularization during training.
The fourth method models multimodal features between brain lesion and diffusion patterns to distinguish between MS and neuromyelitis optica, a neurological disorder similar to MS, to support differential diagnosis. We present a novel hierarchical multimodal fusion architecture that can improve joint learning of heterogeneous imaging modalities. Our results show that these models can discover subtle patterns of MS pathology and provide enhanced classification and prediction performance over the imaging biomarkers previously used in clinical studies, even with relatively small sample sizes.
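The Euclidean distance transform mentioned above turns a sparse binary lesion mask into a dense map in which every voxel carries its distance to the nearest lesion. A minimal 2-D sketch using SciPy (toy mask, not the thesis pipeline):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Toy binary lesion mask: one small lesion in an 8x8 slice.
mask = np.zeros((8, 8), dtype=bool)
mask[3:5, 3:5] = True

# Distance transform of the background: each voxel now holds its Euclidean
# distance to the nearest lesion voxel, so the sparse binary mask becomes a
# dense input with far higher information density.
dist = distance_transform_edt(~mask)
```

Lesion voxels map to 0, and values grow smoothly with distance from the lesion, which is what makes the transformed volume a richer CNN input than the raw mask.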
Deep learning methods have shown great success in many research areas such as object recognition, speech recognition, and natural language understanding, due to their ability to automatically learn a hierarchical set of features that is tuned to a given domain and robust to large variability. This motivates the use of deep learning for neurological applications, because the large variability in brain morphology and varying contrasts produced by different MRI scanners makes the automatic analysis of brain images challenging. However, 3D brain images pose unique challenges due to their complex content and high dimensionality relative to the typical number of images available, making optimization of deep networks and evaluation of extracted features difficult. In order to facilitate the training on large 3D volumes, we have developed a novel training method for deep networks that is optimized for speed and memory. Our method performs training of convolutional deep belief networks and convolutional neural networks in the frequency domain, which replaces the time-consuming calculation of convolutions with element-wise multiplications, while adding only a small number of Fourier transforms. We demonstrate the potential of deep learning for neurological image analysis using two applications. One is the development of a fully automatic multiple sclerosis (MS) lesion segmentation method based on a new type of convolutional neural network that consists of two interconnected pathways for feature extraction and lesion prediction. This allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. The other application is the development of a statistical model of brain images that can automatically discover patterns of variability in brain morphology and lesion distribution.
We propose building such a model using a deep belief network, a layered network whose parameters can be learned from training images. Our results show that this model can automatically discover the classic patterns of MS pathology, as well as more subtle ones, and that the parameters computed have strong relationships to MS clinical scores.
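The frequency-domain training described above rests on the convolution theorem: circular convolution in the spatial domain becomes element-wise multiplication after a Fourier transform. A minimal 1-D NumPy illustration of that identity (not the thesis code):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # toy 1-D "image"
k = rng.standard_normal(16)   # toy kernel, same length as the signal

# Direct circular convolution: y[n] = sum_m x[m] * k[(n - m) mod N]
N = len(x)
y_direct = np.array([sum(x[m] * k[(n - m) % N] for m in range(N))
                     for n in range(N)])

# Frequency-domain version: one element-wise multiply instead of a full sum
# per output element, at the cost of a few FFTs.
y_fft = np.fft.ifft(np.fft.fft(x) * np.fft.fft(k)).real
```

The two results agree to numerical precision; for large 3D volumes the FFT route amortizes well because the transforms are computed once per layer while the multiplications replace every convolution sum.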
Master's Student Supervision
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
Medical image segmentation is an essential part of many healthcare services. While it is possible for an expert to manually label each pixel or voxel in an image, it is a time-consuming process that lacks reproducibility and doesn’t scale well — a problem that will only get worse as advancements in medical image acquisition yield greater quantities of images with higher resolution. To address this need, automatic deep learning-based segmentation has become a popular research direction. However, due to their complexity, deep learning solutions have a high level of variability both in terms of design and outcomes. In this thesis we present a workflow that leverages the core principles of the PCS framework, an existing set of data science recommendations, to improve the reproducibility and transparency of deep learning medical image segmentation results. We started with a conventional approach for comparison, evaluating the impact of the network depth, filter sizes, and presence of shortcut connections on a convolutional encoder network. We found shortcuts to have the greatest impact (DSC 0.20), while changes to the filter size (DSC 0.04) and the number of layers (DSC 0.03) also affected performance. We then implemented our proposed workflow, incorporating PCS principles, to evaluate the predictability of a broader range of convolutional encoder network architecture and algorithm variations, as well as characterize the performance of those models under reasonable perturbations. By exploring variations in loss functions, filter size, and hidden unit quantity as model perturbations, we determined a subset of models that met a standard of predictability. All of our selected model perturbations performed comparably well in terms of predictability (DSC [0.79 - 0.80]).
Stability analysis of these models demonstrated poor hyper-parameter stability when using the sensitivity-specificity ratio as a loss function, and improved stability and distance-based metric performance for 5 x 5 filters. The additional information provided as a result of using our workflow directly improves upon the recommendations derived from the conventional approach.
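The comparisons above are reported in terms of the Dice similarity coefficient (DSC), the standard overlap metric for segmentation. A minimal reference implementation on binary masks:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |pred AND truth| / (|pred| + |truth|)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom

pred  = np.array([[1, 1, 0], [0, 1, 0]])
truth = np.array([[1, 0, 0], [0, 1, 1]])
# intersection = 2, |pred| = 3, |truth| = 3, so DSC = 2*2/(3+3)
```

DSC ranges from 0 (no overlap) to 1 (identical masks), which is why differences such as "DSC 0.20" represent substantial changes in segmentation quality.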
Multiple sclerosis (MS) is an autoimmune disease of the central nervous system with a heterogeneous disease course, making it difficult to predict patient-specific clinical outcomes. Machine learning can potentially improve predictions by feature extraction and/or learning complex relationships. Morphological change in deep grey matter (DGM) structures is a consistent feature in all MS phenotypes, yet the value of DGM imaging features for clinical prediction is largely unexplored. In this thesis, I evaluated the contribution of DGM volumes and deep-learned (DL) features for predicting new disease activity within 24 months of a first clinical demyelinating event. Our data set had two challenging characteristics: highly heterogeneous feature types, requiring a thoughtful exploration of feature selection methods, and 32 out of 140 patient samples had uncertain ground truth labels. We implemented and evaluated 1) random forest (RF) models trained on clinical, demographic and DGM volumes, 2) four feature selection methods for RF training and their impact on model performance, 3) DL models trained on 3D segmentations of DGM nuclei with and without user-defined features, and 4) strategies to account for label uncertainty while training an RF. In a 7-fold nested cross-validation experiment, our best result without accounting for uncertainty (F1-score = 77.57%, SD=6.60%) was achieved with an RF trained on manually selected features, which outperformed common automated feature selection methods, such as iterative RFs (F1-score = 72.35%, SD=6.91%). The neural network using only deep-learned DGM features achieved a slightly lower F1-score = 73.02% (SD=4.70%), which decreased further when adding user-defined features. When accounting for label uncertainty, the highest performance achieved in the 108 confirmed labels was produced by a probabilistic RF (F1-score = 89.62%, SD=4.90%) trained on all available samples, which was higher than training only on the confirmed labels.
Fibrosis is a biological phenomenon characterized by the formation of excessive fibrous connective tissue, which can be visualized using whole-slide image (WSI) scanning techniques. However, the quantification of fibrosis in WSIs is presently a subjective, labour-intensive and time-consuming process. Accurate fibrosis quantifications are of vital importance to pathologists and researchers alike, as high fibrotic content has been linked to a variety of diseases in both animals and humans, and fibrotic prevention or reversal have become major endpoints in clinical trials. We present a novel, fully-automated software tool capable of performing image-to-image (I2I) translation of picrosirius red (PSR) stained murine WSIs to a translated counterpart with all non-fibrotic pixels set to black. A dataset consisting of 32,652 PSR-stained source images paired with their manually translated counterparts was used to train a conditional generative adversarial network. The source images consist of murine diaphragm, liver, and tibialis anterior sections, varying in lighting and staining conditions. Based on an extensive architecture search, the final machine learning model was identified and named Stacked u-nets with Extended Range Connections Generative Adversarial Network (SERGAN), which once trained on the supervised dataset was capable of generating translations more similar to the ground truth images (obtaining a test set mIOU=0.934) than previous I2I methods, such as Pix2Pix or u-net. After translation, the software tool calculates the collagen proportionate area of the WSI by dividing the number of fibrotic pixels by the total number of tissue pixels found during a separate tissue detection step.
Fibrosis quantifications of source images were compared with biochemical assay results (quantitative PCR or RNA-Seq) from the same specimen where data was available, and SERGAN quantifications were found to have significant correlations with several known biomarkers of fibrosis (hydroxyproline concentration [rₛ = 0.52] and periostin gene expression levels [rₛ = 0.24]), whereas the manually translated ground truth quantifications did not. Overall, the final fully-automatic software tool performs multi-organ image segmentation of fibrotic pixels more accurately than other known I2I frameworks, and demonstrates potential usage for preclinical applications.
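The collagen proportionate area computed after translation reduces to a ratio of pixel counts. A toy sketch under an assumed label convention (0 = background, 1 = non-fibrotic tissue, 2 = fibrotic; the actual tool works on translated RGB whole-slide images):

```python
import numpy as np

# Hypothetical per-pixel labels for a tiny image patch:
# 0 = background, 1 = non-fibrotic tissue, 2 = fibrotic tissue.
labels = np.array([
    [0, 1, 1, 2],
    [0, 1, 2, 2],
    [0, 0, 1, 1],
])

fibrotic = (labels == 2).sum()   # fibrotic pixel count
tissue = (labels >= 1).sum()     # all tissue pixels, fibrotic included
cpa = fibrotic / tissue          # collagen proportionate area
```

Here 3 of the 8 tissue pixels are fibrotic, so the CPA for the patch is 0.375; over a full WSI the same ratio is taken after the separate tissue-detection step described above.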
Secondary progressive MS (SPMS) is a late stage neurological disease characterized by chronic worsening. Enhanced prediction of SPMS progression could improve clinical trial design and may inform patient/physician treatment decisions, but the task is difficult since MS is characterized by heterogeneity in terms of clinical features, genetics, pathogenesis, and treatment response. The Expanded Disability Status Scale (EDSS) is a nominal scale of physical disability in MS that is often incorrectly treated as a continuous variable. Machine learning (ML) models identify relationships between features and outcome, while deep learning (DL) adds automatic feature extraction from low-level data. Although both have been applied to MS classification and early-stage transition prediction, late-stage MS disability progression prediction is lacking. The contributions of this thesis are the design, implementation, and evaluation of 1) ML using user-defined features (UDF), 2) DL using automatically extracted brain lesion mask features (BLM) for predicting SPMS disability progression, and 3) an evaluation of the impact on performance when EDSS is misused as a continuous variable. SPMS participants (n=485) in a 2-year placebo-controlled (negative) trial of MBP8298 were labelled progressors if a 6-month-sustained increase in EDSS (≥1.0 and ≥0.5 for a baseline of ≤5.5 and ≥6.0 respectively) was observed within 24 months. UDF included EDSS, Multiple Sclerosis Functional Composite component scores, T₂ lesion volume, brain parenchymal fraction, disease duration, age, and sex. Logistic regression (LR), ensemble support vector machines (enSVM), random forest (RF), and AdaBoost decision trees (AdBDT) were trained using UDF only. DL networks were trained to extract BLM features and predict progression with and without UDF. The primary outcome was the area under the receiver operating characteristic curve (AUC). Of the 485 participants, 115 progressed.
When using continuous EDSS, AdBDT and RF had a greater AUC (60.3% and 56.2%) than enSVM (52.1%) and LR (44.7%), and DL using only BLM features outperformed LR using UDF (55.0% vs. 45.0%). UDF did not improve DL. RF and AdBDT were robust to EDSS treatment. SPMS trial cohorts selected by ML, DL, or both, could identify those at highest risk for progression, enabling smaller, shorter studies.
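The progressor labelling rule above can be sketched as a small helper (hypothetical name `edss_progression`; the 6-month confirmation and 24-month observation window are handled outside this snippet):

```python
def edss_progression(baseline: float, later: float) -> bool:
    """Apply the baseline-dependent EDSS increase threshold:
    an increase of >= 1.0 counts as progression when baseline EDSS <= 5.5,
    and >= 0.5 when baseline EDSS >= 6.0."""
    threshold = 1.0 if baseline <= 5.5 else 0.5
    return (later - baseline) >= threshold
```

The baseline-dependent threshold reflects that EDSS steps are coarser at the high end of the scale, so a 0.5-point sustained increase is already clinically meaningful from a baseline of 6.0 or above.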
The human cerebral cortex is a deep folded structure of grey matter and is responsible for maintaining cognitive functions. The cortical thickness has emerged as an important surrogate biomarker in many neurodegenerative diseases. In this thesis, we propose a longitudinal method called LCT (Longitudinal Cortical Thickness) for measuring cortical thickness changes over time in magnetic resonance scans. We adopt a voxel-based approach rather than surface-based to gain computational efficiency and sensitivity to subtle changes but aim to achieve the scan-rescan reproducibility of surface-based methods. The existing surface-based methods are very time-consuming and the topological and smoothness constraints imposed during surface reconstruction may blunt longitudinal sensitivity. Also, longitudinal processing requires establishing anatomical correspondence across time, and most current methods use deformable registration for this purpose, which requires setting many parameters that can directly affect the measurements. In contrast, LCT establishes cortex-specific matches by using three intuitive features: normal direction, spatial coordinates and shape context that are defined on the cortical skeleton. We also introduce the concept of fuzzy correspondence, which allows a skeletal point in one scan to be partially matched to multiple points in another scan, thereby enhancing the stability of the matches. LCT was evaluated using three longitudinal datasets: 1) same-day pairs of scans of 15 subjects to test for scan-rescan reproducibility, 2) paired scans of 50 Alzheimer’s disease (AD) and 50 healthy subjects taken one-year apart, to test our method’s ability to distinguish between AD and normal, and 3) paired scans of 100 secondary progressive multiple sclerosis (MS) subjects taken two years apart, to test for sensitivity to change over time. 
Tests on the scan-rescan dataset show that LCT is comparable in scan-rescan reproducibility to two other state-of-the-art methods: FreeSurfer and minimum line integral (MLI). Tests on the AD dataset show that LCT detected larger group differences between AD and normal subjects than FreeSurfer and MLI. Further, the results on the MS dataset demonstrate that LCT is more sensitive to change over time and produced stronger correlations between cortical thickness changes and changes in clinical scores than FreeSurfer, which did not produce any statistically significant correlations.
Spinal cord atrophy is a valuable biomarker in multiple sclerosis (MS) for its significant correlation with physical disability. Measurement of spinal cord atrophy on MRI may be confounded by fluctuations in water content, and the high measurement variance in previous longitudinal studies may be reduced by registration-based methods. In this thesis, we investigated the effect of change in water content due to hydration status on cord cross-sectional area (CSA) measurement, and the applicability of three registration-based methods for longitudinal cord atrophy measurement. Our first hypothesis is that dehydration can decrease the cord CSA measurement on MRI. We found a mean decrease of 0.65% in CSA on scans collected from ten controls following a dehydration protocol using two independent cross-sectional CSA measurement methods. Our result demonstrates that change in water content of the cord is associated with a measurable change in cord CSA. The second main hypothesis is that registration-based methods can decrease the variance in longitudinal cord atrophy measurement by using the signal from multiple scans to improve robustness to image noise and artifacts and by regularization of the registration to constrain the degrees of freedom. We implemented three algorithms: boundary shift integral based on rigid registration, Jacobian integration based on deformable registration and scale factor computation based on constrained registration (composed of rigid and scale transformation). We evaluated the three registration-based methods by comparing them to two cross-sectional methods, as applied to three longitudinal data sets: 1) images with simulated cord atrophy; 2) images acquired in the dehydration study described above; and 3) images of 15 MS patients over a two-year interval.
Our main result was that while registration-based methods achieved more accurate results on the simulated data sets and overall smaller measurement variance, they were not as sensitive, reporting no dehydration effect and a smaller magnitude of patient cord atrophy. We argue that the limited 1 mm spatial resolution of the MR scans in our experiment is likely the main reason, and that future studies of cord atrophy measurement using registration-based methods should be conducted on MR scans with higher spatial resolution, such as 0.5 mm.
Myelin is an essential component of nerve fibers and monitoring its health is important for studying neurological diseases that attack myelin, such as multiple sclerosis. The amount of water trapped within myelin, which is a surrogate for myelin content and integrity, can be measured in vivo using MRI relaxation techniques that acquire a series of images at multiple echo times to produce a T₂ decay curve at each voxel. These curves are then analyzed, most commonly using non-negative least squares (NNLS) fitting, to produce T₂ distributions from which water measurements are made. T₂ decay analysis using NNLS has two main challenges: instability and high computational demands. The main contributions of this thesis are: 1) we propose a new regularization algorithm in which local and non-local information is gathered and used adaptively for each voxel and 2) we propose a hybrid method utilizing multicore CPUs and GPUs to improve the efficiency of a T₂ decay analysis, with consideration of the increased computational burden of regularization and careful analysis of which algorithmic components would benefit from multicore CPU vs. GPU parallelization. Our results demonstrated that the proposed regularization method provided more globally consistent myelin water measurements, yet preserved fine structures. Our experiment with real patient data suggested that the algorithm improved the reproducibility and the ability to distinguish between the myelin maps of multiple sclerosis patients and healthy subjects. We also demonstrated our optimized implementation’s performance by comparing it with a parallelized implementation written in MATLAB, which is the most commonly used platform for the analysis of T₂ relaxation data. We found an improvement in speed of over 4× when computing a single seven-slice myelin map and over 14× for batch processing using the same number of CPU cores.
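NNLS fitting of a T₂ decay curve, and the standard Tikhonov-style regularization it is often stabilized with, can be sketched in a few lines with SciPy. This is a toy example with an illustrative echo-time sampling and T₂ grid, not the thesis's adaptive local/non-local algorithm:

```python
import numpy as np
from scipy.optimize import nnls

# Toy multi-echo setup: a decay curve sampled at 32 echo times, modelled as a
# non-negative mixture of exponentials at fixed candidate T2 values.
echo_times = np.arange(1, 33) * 10.0     # ms (illustrative)
t2_grid = np.logspace(1, 3, 40)          # candidate T2s: 10 ms to 1000 ms
A = np.exp(-echo_times[:, None] / t2_grid[None, :])

# Synthetic two-pool curve: a short-T2 "myelin water" component plus a
# longer-T2 intra/extracellular component.
x_true = np.zeros(40)
x_true[5], x_true[25] = 0.3, 0.7
b = A @ x_true

# Plain NNLS fit of the T2 distribution.
x_plain, _ = nnls(A, b)

# Regularized NNLS: stack lambda * I rows onto A (and zeros onto b), which
# penalizes the distribution's energy while keeping the x >= 0 constraint.
lam = 0.1
A_reg = np.vstack([A, lam * np.eye(40)])
b_reg = np.concatenate([b, np.zeros(40)])
x_reg, _ = nnls(A_reg, b_reg)
```

The stacking trick works because minimizing ‖Ax − b‖² + λ²‖x‖² over x ≥ 0 is exactly the NNLS problem for the augmented system; the instability the thesis addresses comes from choosing λ well, since the exponential basis makes A severely ill-conditioned.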
Many previous studies in multiple sclerosis (MS) have focused on the relationship between white matter lesion volume and clinical parameters, but few have investigated the independent contribution of the spatial dispersion of lesions to patient disability. In this thesis, we investigate whether a mathematical measure of the 3D spatial dispersion of lesions can reveal clinical significance that is independent of volume. Our hypothesis is that for any two given patients with similar lesion loads, the one with greater lesion dispersion would tend to have a greater disability. We investigate four different approaches for quantifying lesion dispersion and examine the ability of these lesion dispersion measures to act as potential surrogate markers of disability. We propose one connectedness-based measure (compactness), two region-based measures (ratio of minimum bounding spheres and ratio of lesion convex hull to the brain volume), two distance-based measures (Euclidean distance from a fixed point and pair-wise Euclidean distances) and one measure based on network theory (small-worldness). Our data include three sets of MRIs (n = 24, 174, 182) selected from two MS clinical trials. We segment all white matter lesions in each scan with a semi-automatic method to produce binary images of lesion voxels, quantify their spatial dispersion using the defined measures, then perform a statistical analysis to compare the dispersion values to total lesion volume and patient disability. We use linear and rank correlations to investigate the relationships between dispersion, disability, and total lesion volume, and regression analysis to investigate whether there is a potentially meaningful relationship between dispersion and disability, independent of volume.
Our main finding is that one distance based measure, Euclidean distance from a fixed point, consistently correlates with disability score across all three datasets, and has predictive value that is at least partly independent of lesion volume. The results provide support for our hypothesis and suggest that a potentially meaningful relationship exists between patient disability and measurements of lesion dispersion. Finding such relationships can improve the understanding of MS and potentially lead to the discovery of novel surrogate biomarkers for clinical use in designing treatment trials and providing prognostic advice to individual patients.
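One of the distance-based dispersion measures, the mean pair-wise Euclidean distance between lesion voxels, is straightforward to compute from a binary lesion mask. A toy sketch (illustrative mask, not trial data):

```python
import numpy as np
from scipy.spatial.distance import pdist

# Toy binary lesion mask in a small 3-D volume: two lesion voxels close
# together and one far away.
mask = np.zeros((10, 10, 10), dtype=bool)
mask[2, 2, 2] = mask[2, 2, 3] = mask[7, 7, 7] = True

coords = np.argwhere(mask).astype(float)  # (x, y, z) of each lesion voxel
dispersion = pdist(coords).mean()         # mean pair-wise Euclidean distance
```

Because the measure depends only on where lesion voxels sit, not on how many there are, two patients with equal lesion volume can have very different dispersion values, which is what allows dispersion to carry information independent of lesion load.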