Relevant Degree Programs
Complete these steps before you reach out to a faculty member!
- Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites.
- Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step, while others match successful applicants with faculty members within the first year of study. This is indicated either in the program profile under "Requirements" or on the program website.
- Identify specific faculty members who are conducting research in your specific area of interest.
- Establish that your research interests align with the faculty member’s research interests.
- Read up on the faculty members in the program and the research being conducted in the department.
- Familiarize yourself with their work, read their recent publications and past theses/dissertations that they supervised. Be certain that their research is indeed what you are hoping to study.
- Compose an error-free, grammatically correct email addressed to the specific faculty member you wish to work with, and remember to use their correct title.
- Do not send non-specific, mass emails to everyone in the department hoping for a match.
- Address the faculty members by name. Your contact should be genuine rather than generic.
- Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions. Be sure to craft compelling answers to these questions.
- Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest.
- Demonstrate that you are familiar with their research:
- Convey the specific ways you are a good fit for the program.
- Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting.
- Be enthusiastic, but don’t overdo it.
G+PS regularly provides virtual sessions that focus on admission requirements and procedures, and tips on how to improve your application.
Graduate Student Supervision
Doctoral Student Supervision (Jan 2008 - Nov 2019)
Myelin is an essential component of nerve fibers, and monitoring its health is important for studying neurological diseases that attack myelin, such as multiple sclerosis. The amount of water trapped within myelin, which is a surrogate for myelin content and integrity, can be measured in vivo using MRI relaxation techniques that acquire a series of images at multiple echo times to produce a T₂ decay curve at each voxel. These curves are then analyzed, most commonly using non-negative least squares (NNLS) fitting, to produce T₂ distributions from which water measurements are made. T₂ decay analysis using NNLS has two main challenges: instability and high computational demands. The main contributions of this thesis are: 1) we propose a new regularization algorithm in which local and non-local information is gathered and used adaptively for each voxel, and 2) we propose a hybrid method that utilizes multicore CPUs and GPUs to improve the efficiency of T₂ decay analysis, with consideration of the increased computational burden of regularization and careful analysis of which algorithmic components would benefit from multicore CPU vs. GPU parallelization. Our results demonstrated that the proposed regularization method provided more globally consistent myelin water measurements, yet preserved fine structures. Our experiment with real patient data suggested that the algorithm improved the reproducibility and the ability to distinguish between the myelin maps of multiple sclerosis patients and healthy subjects. We also demonstrated our optimized implementation's performance by comparing it with a parallelized implementation written in MATLAB, which is the most commonly used platform for the analysis of T₂ relaxation data. We found an improvement in speed of over 4× when computing a single seven-slice myelin map and over 14× for batch processing using the same number of CPU cores.
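The per-voxel NNLS fitting step described above can be sketched in a few lines. The snippet below is an illustrative, unregularized fit on a simulated two-pool voxel; the echo times, the T₂ grid, and the 40 ms myelin-water cutoff are assumed values chosen for the example, not parameters from the thesis:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical acquisition: 32 echoes with 10 ms spacing (assumed values).
echo_times = np.arange(1, 33) * 10e-3
# Candidate T2 values, log-spaced from 15 ms to 2 s (assumed grid).
t2_grid = np.logspace(np.log10(15e-3), np.log10(2.0), 40)

# Dictionary of mono-exponential decays: A[i, j] = exp(-TE_i / T2_j).
A = np.exp(-echo_times[:, None] / t2_grid[None, :])

# Simulate a two-pool voxel: 15% myelin water (T2 ~ 20 ms) and
# 85% intra/extracellular water (T2 ~ 80 ms), noiseless.
signal = 0.15 * np.exp(-echo_times / 20e-3) + 0.85 * np.exp(-echo_times / 80e-3)

# Non-negative least squares fit of the T2 distribution.
amplitudes, residual = nnls(A, signal)

# Myelin water fraction: weight in the short-T2 range (cutoff assumed).
mwf = amplitudes[t2_grid < 40e-3].sum() / amplitudes.sum()
print(f"estimated myelin water fraction: {mwf:.3f}")
```

Because unregularized NNLS is sensitive to noise, real pipelines add a regularization term; the thesis' contribution is choosing that regularization adaptively per voxel from local and non-local information.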
Deep learning methods have shown great success in many research areas such as object recognition, speech recognition, and natural language understanding, due to their ability to automatically learn a hierarchical set of features that is tuned to a given domain and robust to large variability. This motivates the use of deep learning for neurological applications, because the large variability in brain morphology and the varying contrasts produced by different MRI scanners make the automatic analysis of brain images challenging. However, 3D brain images pose unique challenges due to their complex content and high dimensionality relative to the typical number of images available, making optimization of deep networks and evaluation of extracted features difficult. In order to facilitate training on large 3D volumes, we have developed a novel training method for deep networks that is optimized for speed and memory. Our method performs training of convolutional deep belief networks and convolutional neural networks in the frequency domain, which replaces the time-consuming calculation of convolutions with element-wise multiplications, while adding only a small number of Fourier transforms. We demonstrate the potential of deep learning for neurological image analysis using two applications. One is the development of a fully automatic multiple sclerosis (MS) lesion segmentation method based on a new type of convolutional neural network that consists of two interconnected pathways for feature extraction and lesion prediction. This allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. Our network also uses a novel objective function that works well for segmenting underrepresented classes, such as MS lesions. The other application is the development of a statistical model of brain images that can automatically discover patterns of variability in brain morphology and lesion distribution. We propose building such a model using a deep belief network, a layered network whose parameters can be learned from training images. Our results show that this model can automatically discover the classic patterns of MS pathology, as well as more subtle ones, and that the parameters computed have strong relationships to MS clinical scores.
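The frequency-domain training trick rests on the convolution theorem: a circular convolution in the spatial domain becomes an element-wise product of spectra. A minimal numpy check on a toy 3D volume (sizes are arbitrary; this is only the underlying identity, not the thesis implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 8, 3                                # toy volume and kernel sizes
image = rng.standard_normal((N, N, N))     # stand-in 3D volume
kernel = rng.standard_normal((K, K, K))    # stand-in 3D filter

# Spatial domain: naive circular convolution -- the sliding-window sum
# that dominates the cost of training a convolutional network.
direct = np.zeros((N, N, N))
for x in range(N):
    for y in range(N):
        for z in range(N):
            for i in range(K):
                for j in range(K):
                    for k in range(K):
                        direct[x, y, z] += kernel[i, j, k] * \
                            image[(x - i) % N, (y - j) % N, (z - k) % N]

# Frequency domain: pad the kernel to the volume size, then replace the
# convolution with one element-wise product between the two spectra.
kernel_padded = np.zeros((N, N, N))
kernel_padded[:K, :K, :K] = kernel
freq = np.real(np.fft.ifftn(np.fft.fftn(image) * np.fft.fftn(kernel_padded)))

print("max abs difference:", np.abs(direct - freq).max())
```

The payoff comes from amortization: once a volume's spectrum is computed, every filter applied to it costs only an element-wise product plus one inverse transform.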
Master's Student Supervision (2010 - 2018)
The human cerebral cortex is a deeply folded structure of grey matter and is responsible for maintaining cognitive functions. Cortical thickness has emerged as an important surrogate biomarker in many neurodegenerative diseases. In this thesis, we propose a longitudinal method called LCT (Longitudinal Cortical Thickness) for measuring cortical thickness changes over time in magnetic resonance scans. We adopt a voxel-based approach rather than a surface-based one to gain computational efficiency and sensitivity to subtle changes, but aim to achieve the scan-rescan reproducibility of surface-based methods. The existing surface-based methods are very time-consuming, and the topological and smoothness constraints imposed during surface reconstruction may blunt longitudinal sensitivity. Also, longitudinal processing requires establishing anatomical correspondence across time, and most current methods use deformable registration for this purpose, which requires setting many parameters that can directly affect the measurements. In contrast, LCT establishes cortex-specific matches by using three intuitive features defined on the cortical skeleton: normal direction, spatial coordinates, and shape context. We also introduce the concept of fuzzy correspondence, which allows a skeletal point in one scan to be partially matched to multiple points in another scan, thereby enhancing the stability of the matches. LCT was evaluated using three longitudinal datasets: 1) same-day pairs of scans of 15 subjects, to test for scan-rescan reproducibility; 2) paired scans of 50 Alzheimer's disease (AD) and 50 healthy subjects taken one year apart, to test our method's ability to distinguish between AD and normal subjects; and 3) paired scans of 100 secondary progressive multiple sclerosis (MS) subjects taken two years apart, to test for sensitivity to change over time.
Tests on the scan-rescan dataset show that LCT is comparable in scan-rescan reproducibility to two other state-of-the-art methods: FreeSurfer and minimum line integral (MLI). Tests on the AD dataset show that LCT detected larger group differences between AD and normal subjects than FreeSurfer and MLI. Further, the results on the MS dataset demonstrate that LCT is more sensitive to change over time and produced stronger correlations between cortical thickness changes and changes in clinical scores than FreeSurfer, which did not produce any statistically significant correlations.
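The fuzzy-correspondence idea can be illustrated with a toy example: rather than a hard nearest-neighbour assignment, each skeletal point receives a normalized weight over candidate matches in the other scan. Everything below — the points, the use of coordinates alone as the feature (the thesis also uses normal direction and shape context), the softmax form, and the temperature parameter — is a hypothetical sketch, not the matching function used by LCT:

```python
import numpy as np

# Toy skeleton points from two time points (coordinates only, illustrative).
points_t1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
points_t2 = np.array([[0.1, 0.0, 0.0], [0.9, 0.1, 0.0], [2.0, 0.0, 0.0]])

# Pairwise feature distances between the two point sets.
dists = np.linalg.norm(points_t1[:, None, :] - points_t2[None, :, :], axis=2)

# Fuzzy correspondence: each point in scan 1 gets a weight over candidates
# in scan 2 -- here a softmax over negative distance (temperature assumed).
temperature = 0.2
weights = np.exp(-dists / temperature)
weights /= weights.sum(axis=1, keepdims=True)

print(np.round(weights, 3))  # each row sums to 1; mass spread over near matches
```

Spreading each point's match over several candidates makes a thickness estimate less sensitive to a single mismatched point, which is the stability argument made above.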
Spinal cord atrophy is a valuable biomarker in multiple sclerosis (MS) for its significant correlation with physical disability. Measurement of spinal cord atrophy on MRI may be confounded by fluctuations in water content, and the high measurement variance in previous longitudinal studies may be reducible by registration-based methods. In this thesis, we investigated the effect of changes in water content due to hydration status on cord cross-sectional area (CSA) measurement, and the applicability of three registration-based methods for longitudinal cord atrophy measurement. Our first hypothesis is that dehydration can decrease the cord CSA measurement on MRI. We found a mean decrease of 0.65% in CSA on scans collected from ten controls following a dehydration protocol, using two independent cross-sectional CSA measurement methods. Our result demonstrates that a change in the water content of the cord is associated with a measurable change in cord CSA. The second main hypothesis is that registration-based methods can decrease the variance in longitudinal cord atrophy measurement, by using the signal from multiple scans to improve robustness to image noise and artifacts, and by regularizing the registration to constrain the degrees of freedom. We implemented three algorithms: boundary shift integral based on rigid registration, Jacobian integration based on deformable registration, and scale factor computation based on constrained registration (composed of rigid and scale transformations). We evaluated the three registration-based methods by comparing them to two cross-sectional methods, as applied to three longitudinal data sets: 1) images with simulated cord atrophy; 2) images acquired in the dehydration study described above; and 3) images of 15 MS patients over a two-year interval.
Our main result was that while the registration-based methods achieved more accurate results on the simulated data sets and overall smaller measurement variance, they were not as sensitive, reporting no dehydration effect and a smaller magnitude of patient cord atrophy. We argue that the limited spatial resolution (1 mm) of the MR scans in our experiment is likely the main reason, and that future studies of cord atrophy measurement using registration-based methods should be conducted on MR scans with a higher spatial resolution, such as 0.5 mm.
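Jacobian integration, one of the registration-based methods above, estimates volume change by averaging the determinant of the deformation's Jacobian over a region of interest. A minimal sketch on a synthetic, uniform in-plane shrinkage (the deformation, grid size, and mask are illustrative, not data from the study):

```python
import numpy as np

# Synthetic deformation: uniform 2% in-plane shrinkage (axes y and x),
# expressed as a displacement field u(x) on a voxel grid.
shape = (16, 16, 16)
zz, yy, xx = np.meshgrid(np.arange(16.0), np.arange(16.0), np.arange(16.0),
                         indexing="ij")
scale = 0.98
disp = np.stack([np.zeros_like(zz), (scale - 1.0) * yy, (scale - 1.0) * xx])

# Jacobian determinant of phi(x) = x + u(x): det(I + grad u), with the
# gradient taken by finite differences at every voxel.
grads = [np.gradient(disp[i]) for i in range(3)]   # grads[i][j] = du_i/dx_j
J = np.zeros(shape)
for idx in np.ndindex(*shape):
    G = np.array([[grads[i][j][idx] for j in range(3)] for i in range(3)])
    J[idx] = np.linalg.det(np.eye(3) + G)

# Fractional volume change over a region of interest = mean Jacobian - 1.
mask = np.ones(shape, dtype=bool)   # stand-in for a cord segmentation
volume_change = J[mask].mean() - 1.0
print(f"volume change: {volume_change:+.2%}")   # ~ -3.96% for 0.98^2 shrink
```

Because the estimate pools the Jacobian over every voxel in the mask, it averages out noise, which is the variance-reduction argument above; the same pooling can also dampen a genuinely small signal, consistent with the sensitivity loss reported.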
Many previous studies in multiple sclerosis (MS) have focused on the relationship between white matter lesion volume and clinical parameters, but few have investigated the independent contribution of the spatial dispersion of lesions to patient disability. In this thesis, we investigate whether a mathematical measure of the 3D spatial dispersion of lesions can reveal clinical significance that is independent of volume. Our hypothesis is that for any two given patients with similar lesion loads, the one with greater lesion dispersion would tend to have a greater disability. We investigate four different approaches for quantifying lesion dispersion and examine the ability of these lesion dispersion measures to act as potential surrogate markers of disability. We propose one connectedness-based measure (compactness), two region-based measures (ratio of minimum bounding spheres and ratio of lesion convex hull to the brain volume), two distance-based measures (Euclidean distance from a fixed point and pair-wise Euclidean distances) and one measure based on network theory (small-worldness). Our data include three sets of MRIs (n = 24, 174, 182) selected from two MS clinical trials. We segment all white matter lesions in each scan with a semi-automatic method to produce binary images of lesion voxels, quantify their spatial dispersion using the defined measures, then perform a statistical analysis to compare the dispersion values to total lesion volume and patient disability. We use linear and rank correlations to investigate the relationships between dispersion, disability, and total lesion volume, and regression analysis to investigate whether there is a potentially meaningful relationship between dispersion and disability, independent of volume.
Our main finding is that one distance-based measure, Euclidean distance from a fixed point, consistently correlates with disability score across all three datasets, and has predictive value that is at least partly independent of lesion volume. The results provide support for our hypothesis and suggest that a potentially meaningful relationship exists between patient disability and measurements of lesion dispersion. Finding such relationships can improve the understanding of MS and potentially lead to the discovery of novel surrogate biomarkers for clinical use in designing treatment trials and providing prognostic advice to individual patients.
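The measure highlighted above, Euclidean distance from a fixed point, is straightforward to compute from a binary lesion mask. The sketch below uses the image centre as the fixed point and a tiny synthetic mask; both are assumptions for illustration, not the reference point or data used in the thesis:

```python
import numpy as np

# Toy binary lesion mask (illustrative; real masks come from segmented MRI).
mask = np.zeros((32, 32, 32), dtype=bool)
mask[10:12, 10:12, 10:12] = True    # a compact 2x2x2 lesion
mask[5, 25, 25] = True              # one additional distant lesion voxel

# Fixed reference point -- here the image centre (an assumed choice).
center = (np.array(mask.shape) - 1) / 2.0

# Dispersion: mean Euclidean distance of lesion voxels from the fixed point.
coords = np.argwhere(mask)          # (n_voxels, 3) voxel indices
dispersion = np.linalg.norm(coords - center, axis=1).mean()
print(f"mean distance from centre: {dispersion:.2f} voxels")
```

In practice the distances would be computed in millimetres using the scan's voxel spacing, and the dispersion values would then be compared against lesion volume and disability scores as described above.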