Lele Wang

Assistant Professor

Research Interests

Mathematics of data science
Machine learning
Information theory
Coding theory
Combinatorics
Communication theory
Random graphs

Relevant Thesis-Based Degree Programs

Research Options

I am available and interested in collaborations (e.g. clusters, grants).
I am interested in and conduct interdisciplinary research.
I am interested in working with undergraduate students on research projects.
 
 

Recruitment

Master's students
Doctoral students
Any time / year round

Complete these steps before you reach out to a faculty member!

Check requirements
  • Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites.
  • Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step while others match successful applicants with faculty members within the first year of study. This is either indicated in the program profile under "Admission Information & Requirements" - "Prepare Application" - "Supervision" or on the program website.
Focus your search
  • Identify specific faculty members who are conducting research in your area of interest.
  • Establish that your research interests align with the faculty member’s research interests.
    • Read up on the faculty members in the program and the research being conducted in the department.
    • Familiarize yourself with their work: read their recent publications and the past theses/dissertations they have supervised. Be certain that their research is indeed what you are hoping to study.
Make a good impression
  • Compose an error-free and grammatically correct email addressed to the specific faculty member you are targeting, and remember to use their correct title.
    • Do not send non-specific, mass emails to everyone in the department hoping for a match.
    • Address the faculty members by name. Your contact should be genuine rather than generic.
  • Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions. Be sure to craft compelling answers to these questions.
  • Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest.
  • Demonstrate that you are familiar with their research:
    • Convey the specific ways you are a good fit for the program.
    • Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting.
  • Be enthusiastic, but don’t overdo it.
Attend an information session

G+PS regularly provides virtual sessions that focus on admission requirements and procedures, as well as tips on how to improve your application.

 

Advice and Insights from UBC Faculty on Reaching Out to Supervisors

These videos contain some general advice from faculty across UBC on finding and reaching out to a potential thesis supervisor.

Graduate Student Supervision

Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay in adding the latest theses.

Convergence to Nash in the potential linear quadratic games and accelerated learning in games (2022)

Game theory and online optimization are closely related, and online optimization has been employed in the literature to solve game-theoretic problems. Many accelerated algorithms have been proposed for offline optimization problems; however, to the best of our knowledge, little work has been done on accelerating zero-order online optimization. The goal is to propose a Nesterov-accelerated online algorithm, with the hope that it converges to the Nash equilibrium at a fast rate in Cournot games and quadratic games. It is also desired that this online algorithm minimizes the regret of a sequence of functions under both zero-order and first-order feedback.

In potential linear quadratic (LQ) games, we also study the convergence of policy gradient algorithms, a class of conventional reinforcement learning methods. LQ games have applications in engineering. It has been shown that when agents use policy gradient algorithms, convergence to the Nash equilibrium is not guaranteed. However, in the LQR problem, which is essentially a one-player LQ game, policy gradient converges to the optimum. In this work, we show that using policy gradient algorithms leads to convergence to the Nash equilibrium in potential LQ games. Additionally, we identify the characteristics of potential games in both open-loop and closed-loop settings. We demonstrate that the class of closed-loop potential games is generally trivial, and that non-trivial potential games arise when restrictions are placed on players' actions.

View record
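To make the policy-gradient idea from the abstract above concrete, here is a minimal sketch (not from the thesis) of zero-order policy gradient on a scalar LQR problem, the one-player special case mentioned there; the dynamics, costs, step size, and initial gain are all hypothetical illustration values.

import numpy as np

# Scalar LQR instance (all numbers are hypothetical illustration values):
# dynamics x_{t+1} = a*x_t + b*u_t, stage cost q*x_t^2 + r*u_t^2,
# and a linear state-feedback policy u_t = -k*x_t.
a, b, q, r = 1.2, 1.0, 1.0, 0.5

def lqr_cost(k, x0=1.0):
    """Infinite-horizon cost of the policy u = -k*x; finite only if |a - b*k| < 1."""
    rho = a - b * k                      # closed-loop pole
    if abs(rho) >= 1:
        return np.inf                    # unstable closed loop: infinite cost
    return (q + r * k ** 2) * x0 ** 2 / (1 - rho ** 2)

# Zero-order "policy gradient": estimate dC/dk from cost evaluations only
# (central differences), then run plain gradient descent on the gain k.
k, step, delta = 1.0, 0.02, 1e-4         # k = 1.0 stabilizes the loop since |1.2 - 1.0| < 1
for _ in range(500):
    grad = (lqr_cost(k + delta) - lqr_cost(k - delta)) / (2 * delta)
    k -= step * grad

print(f"learned gain k = {k:.4f}, cost = {lqr_cost(k):.4f}")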

On the information-theoretic limits of attributed graph alignment (2022)

Graph alignment aims at recovering the vertex correspondence between two correlated graphs, a task that frequently arises in graph mining applications such as social network analysis and computational biology. Existing studies on graph alignment mostly identify the vertex correspondence by exploiting graph structure similarity. However, in many real-world applications, additional information attached to individual vertices, such as user profiles in social networks, might be publicly available. In this thesis, we consider the attributed graph alignment problem, where additional information attached to individuals, referred to as attributes, is incorporated to assist graph alignment. We establish both achievability and converse results on recovering the vertex correspondence exactly, and the two conditions match for a wide range of practical regimes. Our results span the full spectrum between models that only consider graph structure similarity and models where only attribute information is available.

View record
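As an illustration of the kind of model described in the abstract above, the sketch below generates a pair of correlated Erdős-Rényi graphs with shared vertex attributes and recovers a hidden permutation with a toy attribute-only matcher; the model and all parameters are assumptions made for illustration, not the exact setting analyzed in the thesis.

import numpy as np

rng = np.random.default_rng(0)

def sym_mask(n, prob):
    """Symmetric n x n boolean mask with an empty diagonal; each pair is True w.p. prob."""
    upper = np.triu(rng.random((n, n)) < prob, 1)
    return upper | upper.T

# Illustrative correlated-graphs-with-attributes model (not the thesis's exact model):
# a parent Erdos-Renyi graph G(n, p); each parent edge survives independently with
# probability s in each of the two observed graphs; each vertex is also linked to each
# of m attribute nodes with probability q, identically in both graphs; the second
# graph is then relabeled by a hidden permutation.
n, m, p, s, q = 50, 200, 0.3, 0.9, 0.1

parent = sym_mask(n, p)
G1 = parent & sym_mask(n, s)                # first observed graph
G2 = parent & sym_mask(n, s)                # second graph, before relabeling
A = rng.random((n, m)) < q                  # vertex-attribute incidence, shared by both

perm = rng.permutation(n)                   # hidden vertex correspondence
G2_obs = G2[perm][:, perm]                  # observed, relabeled second graph
A2_obs = A[perm]                            # its attribute incidence

# Toy baseline that ignores G1 and G2_obs: match each relabeled vertex to the vertex
# whose attribute vector agrees with it in the most positions. Because the attributes
# here are noiseless, this baseline already succeeds; the thesis characterizes when
# exact recovery is information-theoretically possible using both sources together.
agree = (A2_obs[:, None, :] == A[None, :, :]).sum(axis=2)
guess = agree.argmax(axis=1)
print("fraction of vertices recovered:", float(np.mean(guess == perm)))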

Soft BIBD and product gradient codes (2022)

Due to recent increases in the size of available training data, a variety of machine learning tasks are distributed across multiple computing nodes. However, the theoretical speedup from distributing computations may not be achieved in practice due to slow or unresponsive computing nodes, known as stragglers. Gradient coding is a coding-theoretic framework that provides robustness against stragglers in distributed machine learning applications. Recently, Kadhe et al. proposed a gradient code based on a combinatorial design, called a balanced incomplete block design (BIBD), which is shown to outperform many existing gradient codes in worst-case straggling scenarios [1]. However, the parameters for which such BIBD constructions exist are very limited [2]. In this thesis, we aim to overcome these limitations and construct gradient codes that exist for a wide range of system parameters while retaining the superior performance of BIBD gradient codes. Two such constructions are proposed: one based on a probabilistic construction that relaxes the stringent BIBD constraints, and the other based on taking the Kronecker product of existing gradient codes. The proposed gradient codes allow flexible choices of system parameters while retaining comparable error performance.

View record
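The Kronecker-product construction mentioned in the abstract above can be illustrated at the level of worker-to-partition assignments. The sketch below (hypothetical parameters, cyclic component assignments) only shows how the support of the product code is formed; the encoding/decoding coefficients and the straggler-tolerance analysis are in the thesis itself.

import numpy as np

def cyclic_assignment(n, load):
    """n x n 0/1 mask: worker i is assigned partitions i, i+1, ..., i+load-1 (mod n)."""
    B = np.zeros((n, n), dtype=int)
    for i in range(n):
        B[i, (i + np.arange(load)) % n] = 1
    return B

# Two small component assignments (parameters chosen only for illustration).
B1 = cyclic_assignment(3, 2)    # 3 workers, 3 data partitions, per-worker load 2
B2 = cyclic_assignment(4, 2)    # 4 workers, 4 data partitions, per-worker load 2

# The Kronecker product gives the support of a larger assignment with
# 3*4 = 12 workers, 3*4 = 12 partitions, and per-worker load 2*2 = 4.
B = np.kron(B1, B2)
print(B.shape, B.sum(axis=1))   # (12, 12) and every row sums to 4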

Universal graph compression: stochastic block models (2021)

Motivated by the prevalent data science applications of processing and mining large-scale graph data, such as social networks, web graphs, and biological networks, as well as the high I/O and communication costs of storing and transmitting such data, this thesis investigates lossless compression of data appearing in the form of a labeled graph. In particular, we consider a widely used random graph model, the stochastic block model (SBM), which captures the clustering effects in social networks. An information-theoretic universal compression framework is applied, in which one aims to design a single compressor that achieves the asymptotically optimal compression rate for every SBM distribution, without knowing the parameters of the SBM that generates the data. Such a graph compressor is proposed in this thesis; it universally achieves the optimal compression rate for a wide class of SBMs with edge probabilities ranging from $O(1)$ to $\Omega(1/n^{2-\epsilon})$ for any $0 < \epsilon < 2$.

View record
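For readers unfamiliar with the model, the sketch below samples a two-community symmetric SBM and computes the conditional entropy of its edges given the community labels as a crude reference point for the compression rate; the parameters are illustrative, and this is neither the compressor nor the exact benchmark from the thesis.

import numpy as np

rng = np.random.default_rng(1)

# Toy two-community symmetric SBM (illustrative parameters): each vertex gets a
# random community label; an edge appears with probability p_in inside a community
# and p_out across communities, independently over vertex pairs.
n, p_in, p_out = 200, 0.10, 0.02
labels = rng.integers(0, 2, size=n)

same = labels[:, None] == labels[None, :]
P = np.where(same, p_in, p_out)
upper = np.triu(rng.random((n, n)) < P, 1)
adj = upper | upper.T                       # symmetric 0/1 adjacency matrix

# A universal compressor must approach the entropy of this source without being
# told p_in, p_out, or the labels. As a crude reference point, the conditional
# entropy of the edges given the labels is a sum of binary entropies over pairs.
def h2(p):                                  # binary entropy in bits
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

pairs_in = int(np.triu(same, 1).sum())
pairs_out = n * (n - 1) // 2 - pairs_in
print("edges sampled:", int(adj.sum()) // 2)
print("H(edges | labels) in bits:", pairs_in * h2(p_in) + pairs_out * h2(p_out))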

 
 


 
 
