Relevant Degree Programs
Financial risk analytics
Multivariate extreme value theory
Complete these steps before you reach out to a faculty member!
- Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites.
- Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step, while others match successful applicants with faculty members within the first year of study. This is indicated either in the program profile under "Admission Information & Requirements" - "Prepare Application" - "Supervision" or on the program website.
- Identify specific faculty members who are conducting research in your specific area of interest.
- Establish that your research interests align with the faculty member’s research interests.
- Read up on the faculty members in the program and the research being conducted in the department.
- Familiarize yourself with their work, read their recent publications and past theses/dissertations that they supervised. Be certain that their research is indeed what you are hoping to study.
- Compose an error-free and grammatically correct email addressed to your specifically targeted faculty member, and remember to use their correct titles.
- Do not send non-specific, mass emails to everyone in the department hoping for a match.
- Address the faculty members by name. Your contact should be genuine rather than generic.
- Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions; be sure to craft compelling answers to them.
- Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest.
- Demonstrate that you are familiar with their research:
- Convey the specific ways you are a good fit for the program.
- Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting.
- Be enthusiastic, but don’t overdo it.
G+PS regularly offers virtual sessions that focus on admission requirements and procedures, as well as tips on how to improve your application.
Great Supervisor Week Mentions
My supervisors are extremely supportive. Natalia is a wonderful supervisor as she has a level of emotional intelligence that I have not seen in most people. Both she and Harry have always put my goals at the forefront and helped me achieve them. Their doors have always been open to me and they have been extremely generous with their time and their advice. I have learned so much from working with them throughout my time in my master's program. It has been an immense pleasure and I look forward to having an opportunity to collaborate with them in the future.
Graduate Student Supervision
Doctoral Student Supervision (Jan 2008 - May 2021)
Forecasts of extreme events are useful in order to prepare for disaster. Such forecasts are usefully communicated as an upper quantile function, and in the presence of predictors, can be estimated using quantile regression techniques. This dissertation proposes methodology that seeks to produce forecasts that (1) are consistent in the sense that the quantile functions are valid (non-decreasing); (2) are flexible enough to capture the dependence between the predictors and the response; and (3) can reliably extrapolate into the tail of the upper quantile function. To address these goals, a family of proper scoring rules is first established that measure the goodness of upper quantile function forecasts. To build a model of the conditional quantile function, a method that uses pair-copula Bayesian networks or vine copulas is proposed. This model is fit using a new class of estimators called the composite nonlinear quantile regression (CNQR) family of estimators, which optimize the scores from the previous scoring rules. In addition, a new parametric copula family is introduced that allows for a non-constant conditional extreme value index, and another parametric family is introduced that reduces a heavy-tailed response to a light tail upon conditioning. Taken together, this work is able to produce forecasts satisfying the three goals. This means that the resulting forecasts of extremes are more reliable than other methods, because they more adequately capture the insight that predictors hold on extreme outcomes. This work is applied to forecasting extreme flows of the Bow River at Banff, Alberta, for flood preparation, but can be used to forecast extremes of any continuous response when predictors are present.
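The proper scoring rules described above specialize, in the simplest single-quantile case, to the familiar pinball (check) loss. A minimal sketch in Python (the function name and simulated data are illustrative, not from the dissertation) shows how such a score ranks two competing upper-quantile forecasts:

```python
import numpy as np

def pinball_loss(y, q_pred, tau):
    """Average pinball (check) loss: a consistent scoring rule for the
    tau-quantile of y; a lower score indicates a better forecast."""
    u = np.asarray(y) - q_pred
    return np.mean(np.maximum(tau * u, (tau - 1) * u))

# Compare two forecasts of the 0.95-quantile of simulated N(0, 1) data
rng = np.random.default_rng(0)
y = rng.standard_normal(100_000)
loss_good = pinball_loss(y, 1.6449, 0.95)  # true 95% quantile of N(0, 1)
loss_bad = pinball_loss(y, 2.5, 0.95)      # an over-shot forecast
```

Because the score is consistent, the forecast equal to the true quantile attains the lower average loss.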
Master's Student Supervision (2010 - 2020)
The global financial crisis of 2007-2009 revealed the importance of systemic risk: the risk that may destabilize the global economy due to financial contagion. Accurate assessment of systemic risk would not only enable regulators to introduce suitable policies to mitigate the risk, but also allow individual institutions to monitor and mitigate their vulnerability. An effective measure of systemic risk should be able to capture the co-movements between a financial system (or market) and individual financial institutions. One popular measure of systemic risk is CoVaR. In this thesis, a methodology is proposed to compute dynamic forecasts of CoVaR semi-parametrically within the classical framework of multivariate extreme value theory (EVT). According to the definition, CoVaR can be viewed as a high quantile of a conditional distribution where the conditioning event corresponds to large losses of an institution. The idea of our methodology is to relate this conditional distribution to the tail dependence function. We develop an EVT-based framework to estimate CoVaR statically by combining parametric modelling of the tail dependence function to address the issue of data sparsity in the joint tail regions and semi-parametric univariate tail estimation techniques. The performance of the methodology is illustrated via simulation studies and real data examples.
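The view of CoVaR as a high quantile of a conditional distribution can be illustrated with a naive empirical estimator. The sketch below (variable names and the simulated one-factor loss model are illustrative, not from the thesis) also makes the data-sparsity problem visible: only the few observations beyond the institution's VaR enter the conditional quantile, which is the issue the semi-parametric EVT framework addresses.

```python
import numpy as np

def covar_empirical(inst_losses, sys_losses, alpha=0.95, beta=0.95):
    """Naive empirical CoVaR: the beta-quantile of system losses on days
    when the institution's loss exceeds its own alpha-VaR."""
    var_inst = np.quantile(inst_losses, alpha)
    # Only observations in the joint tail region are retained here
    return np.quantile(sys_losses[inst_losses > var_inst], beta)

# Simulated losses with a shared factor inducing positive dependence
rng = np.random.default_rng(1)
z = rng.standard_normal((3, 50_000))
inst = z[0] + z[2]      # institution losses
system = z[1] + z[2]    # system losses
covar = covar_empirical(inst, system)
```

With positive dependence, the estimated CoVaR exceeds the unconditional VaR of the system, capturing the co-movement the measure is designed for.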
Banks that use the advanced measurement approach to model operational risk may struggle to develop an internal process that produces stable regulatory capital over time. Large decreases in regulatory capital are scrutinized by regulators while large increases may force banks to set aside more assets than necessary. A major source of this instability arises from the loss severity selection process, especially when the selected distribution families for severity risk categories change year-to-year. In this report, we examine the process of selecting severity distributions from a candidate distribution list within the guidelines of the advanced measurement approach, propose useful tools to aid in selecting an appropriate severity distribution, and analyze the effect of selection criteria on regulatory capital. The log sinh-arcsinh distribution family is added to a list of common candidate severity distributions used by industry. This 4-parameter family solves issues introduced by the 4-parameter g-and-h distribution without sacrificing flexibility and shows promise in outperforming 2-parameter families, reducing the frequency of severity distribution families changing year-to-year. Distribution parameters are estimated using the maximum likelihood approach from loss data truncated at a known minimum reporting threshold. Our severity distribution selection process combines truncation probability estimates with Akaike Information Criterion (AIC), Bayesian Information Criterion, modified Anderson-Darling, QQ-plots, and predictive measures such as the quantile scoring function and out-of-sample AIC, and we discuss some of the challenges associated with this process. We then simulate operational losses and calculate regulatory capital, comparing the effect on regulatory capital of selecting loss severity distributions using AIC versus quantile score. A combination of these two criteria is recommended when selecting loss severity distributions.
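The truncated maximum likelihood step can be sketched for a single candidate family. The example below (a lognormal fit with illustrative parameter values, using SciPy) renormalizes the likelihood by the probability of exceeding the known reporting threshold and computes the corresponding AIC:

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize

def truncated_nll(theta, losses, threshold):
    """Negative log-likelihood of a lognormal(mu, sigma) severity
    distribution truncated below the reporting threshold (losses under
    the threshold are never recorded)."""
    mu, sigma = theta
    if sigma <= 0:
        return np.inf
    dist = stats.lognorm(s=sigma, scale=np.exp(mu))
    # Renormalize the density by P(X > threshold)
    return -(dist.logpdf(losses).sum()
             - len(losses) * np.log(dist.sf(threshold)))

# Simulated losses from lognormal(mu=10, sigma=1.8), truncated at 20,000
rng = np.random.default_rng(2)
full = stats.lognorm(s=1.8, scale=np.exp(10)).rvs(5_000, random_state=rng)
threshold = 20_000.0
losses = full[full >= threshold]

res = minimize(truncated_nll, x0=[9.0, 1.0], args=(losses, threshold),
               method="Nelder-Mead")
mu_hat, sigma_hat = res.x
aic = 2 * 2 + 2 * res.fun   # AIC = 2k + 2 * NLL, with k = 2 parameters
```

In the selection process described above, the same routine would be run for each candidate family and the resulting AIC values compared alongside the other criteria.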
Estimation of multivariate quantile regions with very small probabilities, referred to as risk regions in this report, plays an important part in various applications. Yet, it is a difficult problem since such regions contain hardly any or no data. Existing methods address the problem only for heavy-tailed distributions or for bivariate distributions with non-degenerate exponent measure that do not include the cases of asymptotic independence. In this report, we propose a more flexible framework to supplement existing approaches to risk region estimation by allowing tails of the underlying distribution to be light as well as covering situations of tail independence. In particular, we concentrate on a class of distributions assumed to have a density function with homothetic level sets. In simulation studies, reasonable performance of our proposed method is demonstrated. We also present two real-life applications to further illustrate the flexibility and performance of our method.
The Autoregressive Stochastic Volatility (ARSV) model is a discrete-time stochastic volatility model that can model the financial returns time series and volatilities. This model is relevant for risk management. However, existing inference methods have various limitations on model assumptions. In this report we discuss a new inference method that allows flexible model assumption for innovation of the ARSV model. We also present the application of ARSV model to risk management, and compare the ARSV model with another commonly used model for financial time series, namely the GARCH model.
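The canonical ARSV(1) model can be simulated in a few lines. The sketch below assumes Gaussian innovations in both equations (the report's inference method relaxes the assumption on the return innovation) and checks for volatility clustering, the stylized fact both ARSV and GARCH are built to capture:

```python
import numpy as np

def simulate_arsv(n, phi=0.95, sigma_eta=0.3, rng=None):
    """Simulate the canonical ARSV(1) model:
        h_t = phi * h_{t-1} + sigma_eta * eta_t   (log-volatility)
        r_t = exp(h_t / 2) * eps_t                (return)
    with independent standard normal innovations eta_t and eps_t."""
    if rng is None:
        rng = np.random.default_rng()
    h = np.zeros(n)
    for t in range(1, n):
        h[t] = phi * h[t - 1] + sigma_eta * rng.standard_normal()
    return np.exp(h / 2) * rng.standard_normal(n)

r = simulate_arsv(100_000, rng=np.random.default_rng(3))
# Volatility clustering: squared returns are positively autocorrelated
r2 = r**2
acf1 = np.corrcoef(r2[:-1], r2[1:])[0, 1]
```

Unlike GARCH, where the conditional variance is a deterministic function of past observations, the ARSV log-volatility h_t carries its own random shock, which is what complicates likelihood-based inference.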
Forecasting of risk measures is an important part of risk management for financial institutions. Value-at-Risk and Expected Shortfall are two commonly used risk measures, and accurately predicting these risk measures enables financial institutions to plan adequately for possible losses. Point forecasts from different methods can be compared using consistent scoring functions, provided the underlying functional to be forecasted is elicitable. It has been shown that the choice of a scoring function from the family of consistent scoring functions does not influence the ranking of forecasting methods as long as the underlying model is correctly specified and nested information sets are used. However, in practice, these conditions do not hold, which may lead to discrepancies in the ranking of methods under different scoring functions. We investigate the choice of scoring functions in the face of model misspecification, parameter estimation error and non-nested information sets. We concentrate on the family of homogeneous consistent scoring functions for Value-at-Risk and the pair of Value-at-Risk and Expected Shortfall, and identify conditions required for existence of the expectation of these scoring functions. We also assess the finite-sample properties of the Diebold-Mariano test, as well as examine how these scoring functions penalize over-prediction and under-prediction with the aid of simulation studies.
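One well-known member of the family of jointly consistent scoring functions for the pair (Value-at-Risk, Expected Shortfall) is the zero-degree homogeneous FZ0 loss of Patton, Ziegel and Chen (2019). The sketch below (sign conventions and simulated values are ours, not from the thesis) verifies consistency numerically: the true pair attains a lower average score than a misspecified one.

```python
import numpy as np

def fz0_loss(y, var, es, alpha):
    """FZ0 loss: a zero-degree homogeneous scoring function that is
    jointly consistent for (VaR, ES) of returns. Conventions: y are
    returns; var and es are the lower-tail alpha-quantile and expected
    shortfall, both negative for small alpha (so es < 0 is required)."""
    y = np.asarray(y)
    hit = (y <= var).astype(float)
    return np.mean(-hit * (var - y) / (alpha * es)
                   + var / es + np.log(-es) - 1.0)

rng = np.random.default_rng(4)
y = rng.standard_normal(200_000)
alpha = 0.05
loss_true = fz0_loss(y, -1.6449, -2.0627, alpha)  # N(0,1) VaR and ES
loss_wrong = fz0_loss(y, -1.2, -1.6, alpha)       # a misspecified pair
```

Under model misspecification, as the thesis investigates, different members of this family need not rank two imperfect forecasters the same way.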
The project focuses on the estimation of the probability distribution of a bivariate random vector given that one of the components takes on a large value. These conditional probabilities can be used to quantify the effect of financial contagion when the random vector represents losses on financial assets, and as a stress-testing tool in financial risk management. However, it is challenging to quantify these conditional probabilities when the main interest lies in the tails of the underlying distribution. Specifically, empirical probabilities fail to provide adequate estimates, while fully parametric methods are subject to large model uncertainty as there is too little data to assess the model fit in the tails. We propose a semi-parametric framework using asymptotic results in the spirit of extreme value theory. The main contributions include an extension of the limit theorem in Abdous et al. [Canad. J. Statist. 33 (2005)] to allow for asymmetry, frequently encountered in financial and insurance applications, and a new approach for inference. The results are illustrated using simulations and two applications in finance.