Michael Friedlander

Professor


Graduate Student Supervision

Doctoral Student Supervision

Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.

Duality in structured and federated optimization: theory and applications (2022)

The dual approach in mathematical optimization refers to a class of techniques that tackle a dual problem derived from the original problem. The approach has attracted substantial attention over the last two decades because of its strong performance on many large-scale optimization problems. In this thesis, we investigate and extend the dual approach for two classes of problems: structured optimization, whose solutions exhibit specific structures, and federated learning, which aims to collaboratively learn a model from decentralized data sources. In the first part of this thesis, we characterize the dual correspondence in structured optimization and show that it allows us to develop efficient algorithms and design new models. In the second part, we propose a dual approach to the federated optimization problem and demonstrate, both theoretically and empirically, that it enjoys better convergence guarantees than primal-based approaches in specific scenarios. We also explore applications of structured optimization within federated learning. In the third part, we study the problem of evaluating clients' contributions in federated learning, and propose fair and efficient contribution-valuation metrics for both horizontal and vertical federated learning, in which structured optimization plays a crucial role.
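To fix ideas, the dual approach can be illustrated on a toy equality-constrained problem. The sketch below is generic dual ascent in Python, not the thesis's federated algorithm; the data B, c and the problem itself are invented for the illustration.

```python
# Minimal dual-ascent sketch on a toy problem:
#   minimize (1/2)||x - a||^2  subject to  Bx = c.
# Illustrates the generic dual approach only; B, c, and the step size
# are made-up test data, not the thesis's federated setup.
import numpy as np

rng = np.random.default_rng(0)
n, m = 10, 3
a = rng.standard_normal(n)
B = rng.standard_normal((m, n))
c = rng.standard_normal(m)

y = np.zeros(m)                       # dual variable for Bx = c
step = 1.0 / np.linalg.norm(B, 2)**2  # safe step: 1 / lambda_max(B B^T)
for _ in range(1000):
    x = a - B.T @ y                   # Lagrangian minimizer in x
    y += step * (B @ x - c)           # gradient ascent on the dual

print("constraint violation:", np.linalg.norm(B @ x - c))
```

Each iteration touches the primal variable only through the Lagrangian minimization, which is what makes the dual viewpoint attractive when that inner step decomposes across clients.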

First-order methods for structured optimization (2021)

First-order methods have attracted substantial interest over the past two decades because of their strong performance on today's large-scale problems. In this thesis, we study several widely used first-order methods for problems with particular structure. In the first part, we contribute to coordinate optimization and show that greedy coordinate descent (GCD) has an implicit screening ability: it tends to select coordinates that are nonzero at the solution, which explains why GCD works exceptionally well on problems that admit sparse solutions. We also extend the elegant safe-screening rule based on the duality gap to atomic-norm regularized problems. In the second part, we study online mirror descent (OMD) with unknown time horizon and unbounded domain, a setting in which OMD is known to suffer linear regret. We provide a stabilization technique and show that stabilized OMD achieves sublinear regret; we also establish a connection between stabilized OMD and dual averaging. In the third part, we derive an improved iteration complexity for the stochastic subgradient method on over-parameterized models that satisfy an interpolation condition. The bound matches the rate of the stochastic gradient method applied to smooth problems under the same interpolation condition, which partially explains the empirical observation that nonsmoothness in modern machine-learning models sometimes does not slow down training.
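As a concrete illustration of the greedy selection discussed above, here is a minimal GCD sketch for l₁-regularized least squares (Python, toy data, a Gauss-Southwell-r-style rule; this is not the thesis's implementation). The greedy step tends to pick coordinates in the support of the sparse solution, which is the implicit screening effect the abstract describes.

```python
# Greedy coordinate descent (Gauss-Southwell-r-style rule) on
#   minimize (1/2)||Ax - b||^2 + lam * ||x||_1.
# Toy data; recomputing the full gradient each pass is wasteful
# but keeps the sketch short.
import numpy as np

def soft(v, t):
    """Elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
m, n = 40, 100
A = rng.standard_normal((m, n))
x_true = np.zeros(n); x_true[:5] = 1.0   # sparse ground truth
b = A @ x_true
lam = 0.1
L = (A ** 2).sum(axis=0)                 # coordinate-wise Lipschitz constants

x = np.zeros(n)
for _ in range(2000):
    g = A.T @ (A @ x - b)                # gradient of the smooth part
    prox = soft(x - g / L, lam / L)      # coordinate-wise prox step
    j = np.argmax(np.abs(prox - x))      # greedy rule: largest update
    x[j] = prox[j]

print("nonzeros selected:", np.flatnonzero(np.abs(x) > 1e-6))
```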

Gauge Duality and Low-Rank Spectral Optimization (2016)

The emergence of compressed sensing and its impact on various applications in signal processing and machine learning has sparked an interest in generalizing its concepts and techniques to inverse problems that involve quadratic measurements. Important recent developments borrow ideas from matrix-lifting techniques in combinatorial optimization and result in convex optimization problems characterized by solutions with very low rank, and by linear operators that are best treated with matrix-free approaches. Typical applications give rise to enormous optimization problems that challenge even the very best workhorse algorithms and numerical solvers for semidefinite programming.

The work presented in this thesis focuses on the class of low-rank spectral optimization problems and its connection with a theoretical duality framework for gauge functions introduced in a seminal paper by Freund (1987). Through this connection, we formulate a related eigenvalue optimization problem more amenable to the design of specialized algorithms that scale well and can be used in practical applications.

We begin by exploring the theory of gauge duality, focusing on a slightly specialized structure often encountered in the motivating inverse problems. These developments are made in a rather abstract form, allowing their application to different problem classes. What follows is a connection of this framework with two important classes of spectral optimization problems commonly found in the literature: trace minimization in the cone of positive semidefinite matrices and affine nuclear-norm minimization. This leads us to a convex eigenvalue optimization problem with rather simple constraints, involving a number of variables equal to the number of measurements, and thus with dimension far smaller than the primal.

The last part of this thesis exploits a sense of strong duality between the primal-dual pair of gauge problems to characterize their solutions and to devise a method for retrieving a primal minimizer from a dual one. This allows us to design and implement a proof-of-concept solver which compares favorably against solvers designed specifically for the PhaseLift formulation of the celebrated phase recovery problem and a scenario of blind image deconvolution.
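For context, the gauge dual pair underlying this program commonly takes the following form in the gauge-duality literature (κ is a gauge with polar κ°, ρ measures the data misfit, σ the noise level; this is the general statement, not the thesis's specialized version):

```latex
\begin{align*}
\text{(primal)}\quad & \min_{x}\ \kappa(x)
      \quad\text{s.t.}\quad \rho(b - Ax) \le \sigma, \\
\text{(gauge dual)}\quad & \min_{y}\ \kappa^{\circ}(A^{*}y)
      \quad\text{s.t.}\quad \langle b, y\rangle - \sigma\,\rho^{\circ}(y) \ge 1.
\end{align*}
```

Note that the dual variable y lives in the measurement space, which is why the resulting eigenvalue problem has far smaller dimension than the primal.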

Convex Optimization for Sparse Recovery (2010)

The past decade has witnessed the emergence of compressed sensing as a way of acquiring sparsely representable signals in a compressed form. These developments have greatly motivated research in sparse signal recovery, which lies at the heart of compressed sensing, and which has recently found its use in altogether new applications.

In the first part of this thesis we study the theoretical aspects of joint-sparse recovery by means of sum-of-norms minimization, and the ReMBo-l₁ algorithm, which combines boosting techniques with l₁-minimization. For the sum-of-norms approach we derive necessary and sufficient conditions for recovery, by extending existing results to the joint-sparse setting. We focus in particular on minimization of the sum of l₁ and l₂ norms, and give concrete examples where recovery succeeds with one formulation but not with the other. We base our analysis of ReMBo-l₁ on its geometrical interpretation, which leads to a study of orthant intersections with randomly oriented subspaces. This work establishes a clear picture of the mechanics behind the method, and explains the different aspects of its performance.

The second part and main contribution of this thesis is the development of a framework for solving a wide class of convex optimization problems for sparse recovery. We provide a detailed account of the application of the framework to several problems, but also consider its limitations. The framework has been implemented in the SPGL1 algorithm, which is already well established as an effective solver. Numerical results show that our algorithm is state-of-the-art, and compares favorably even with solvers for the easier, but less natural, Lagrangian formulations.

The last part of this thesis discusses two supporting software packages: Sparco, which provides a suite of test problems for sparse recovery, and Spot, a Matlab toolbox for the creation and manipulation of linear operators. Spot greatly facilitates rapid prototyping in sparse recovery and compressed sensing, where linear operators form the elementary building blocks. Following the practice of reproducible research, all code used for the experiments and generation of figures is available online at http://www.cs.ubc.ca/labs/scl/thesis/09vandenBerg/.
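Spot itself is a Matlab toolbox; as a language-neutral illustration of the matrix-free operators it provides, here is a small Python sketch (the class name and example operator are invented for this illustration) that wraps a forward/adjoint pair so a solver never needs an explicit matrix.

```python
# A toy, matrix-free linear operator in the spirit of Spot: a
# one-dimensional first-difference operator R^n -> R^(n-1), exposing
# only matvec (forward) and rmatvec (adjoint) products.
import numpy as np

class DiffOp:
    """First-difference operator, stored implicitly (matrix-free)."""
    def __init__(self, n):
        self.shape = (n - 1, n)
    def matvec(self, x):            # forward: (Dx)_i = x[i+1] - x[i]
        return np.diff(x)
    def rmatvec(self, y):           # adjoint: D^T y via the transpose stencil
        out = np.zeros(self.shape[1])
        out[:-1] -= y
        out[1:] += y
        return out

D = DiffOp(5)
x = np.arange(5.0)
y = np.ones(4)
# Adjoint (dot-product) test: <Dx, y> should equal <x, D^T y>.
print(np.dot(D.matvec(x), y), np.dot(x, D.rmatvec(y)))
```

The dot-product test at the end is the standard sanity check for such operator pairs, and the pattern scales to FFTs, convolutions, and other fast transforms used in compressed sensing.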

Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.

AtomicSets.jl: the calculus of support functions in Julia (2022)

Atomic sets are a powerful abstraction that generalizes sparsity to broader notions of structure. We present a software implementation of atomic sets and their support functions in the Julia programming language. Previous work in compressed sensing provides a calculus for support functions under sums of atomic sets and applications of linear maps to them. We discuss this calculus, how it is reflected in software, and its applications. Emphasis is placed on representing the calculus in a way that allows both ease of use as a software library and high performance in applications, and on how the Julia programming language makes this possible.
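To make the calculus concrete, here is a minimal Python sketch of the two composition rules (the actual package, AtomicSets.jl, is written in Julia; the names below are invented for the illustration). A support function is represented as a callable z ↦ σ_A(z); sums of sets add support functions, and a linear image MA composes with the adjoint.

```python
# Support-function calculus in miniature:
#   sigma_{A+B}(z) = sigma_A(z) + sigma_B(z)
#   sigma_{MA}(z)  = sigma_A(M^T z)
import numpy as np

sigma_l1ball = lambda z: np.max(np.abs(z))   # atoms: signed unit vectors
sigma_l2ball = lambda z: np.linalg.norm(z)   # atoms: the unit sphere

def support_sum(sa, sb):
    return lambda z: sa(z) + sb(z)

def support_linear_image(sa, M):
    return lambda z: sa(M.T @ z)

z = np.array([3.0, -4.0])
M = np.array([[2.0, 0.0], [0.0, 1.0]])
print(support_sum(sigma_l1ball, sigma_l2ball)(z))   # 4 + 5 = 9
print(support_linear_image(sigma_l1ball, M)(z))     # max(|6|, |-4|) = 6
```

Representing each atomic set by its support function is what lets the library compose structures without ever materializing the sets themselves.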

An accelerated dual method for SPGL1 (2019)

This thesis studies SPGL1, a widely used solver that applies a general root-finding procedure to the basis pursuit denoising problem. The procedure involves a nested loop: the outer loop is an inexact Newton root-finding process, and the inner loop approximately solves a LASSO (least absolute shrinkage and selection operator) subproblem. We propose an accelerated dual method that speeds up the inner loop by optimizing the dual problem of the LASSO over a low-dimensional space. Experimental results show that the accelerated method successfully reduces the total iteration count. Future work will aim to reduce its total running time.
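For reference, the outer loop in SPGL1 Newton-iterates on the Pareto curve φ, whose derivative is available in closed form from the inner LASSO solution x_τ; in the notation of van den Berg and Friedlander (2008), with σ the target misfit:

```latex
\begin{align*}
\phi(\tau) &= \min_{x}\ \bigl\{\, \|Ax - b\|_2 \;:\; \|x\|_1 \le \tau \,\bigr\},\\
\phi'(\tau) &= -\,\frac{\|A^{T} r_\tau\|_\infty}{\|r_\tau\|_2},
   \qquad r_\tau = b - A x_\tau,\\
\tau_{k+1} &= \tau_k + \frac{\sigma - \phi(\tau_k)}{\phi'(\tau_k)}.
\end{align*}
```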

Bundle-type methods for dual atomic pursuit (2019)

We discuss a class of structured optimization problems called atomic pursuit, popularized in compressed sensing and machine learning, in which the aim is to assemble a solution from a given set of (possibly uncountably infinitely many) atoms so as to fit a model to data. Solutions to atomic pursuit problems can be formed as the sum of a few atoms, which means they lie on low-dimensional facets of the convex hull of the atomic set; the gauges induced by that convex hull have been shown to promote atomic sparsity. Well-studied examples include the one norm, which promotes general sparsity, and the nuclear norm, which promotes low rank. In this thesis, we examine the gauge dual of atomic pursuit and show that the support of the primal solution can be recovered from the dual solution. In particular, we propose a two-stage algorithm based on gauge duality and bundle-type methods. The first stage discovers the optimal atomic support for the primal problem by solving a sequence of approximations of the dual problem with a bundle-type method, which can be viewed geometrically as producing a sequence of inscribing subsets of the atomic set. The second stage recovers an approximate primal solution using the atoms discovered in the first stage. We show that this approach leads to efficient and scalable semidefinite optimization methods with low-rank solutions, yielding a fast, scalable SDP solver for phase retrieval and graph problems.
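The simplest member of the bundle family referred to above is Kelley's cutting-plane method. The Python sketch below (toy one-dimensional objective, invented for the illustration; the thesis's bundle method for the gauge dual is more elaborate) builds a piecewise-linear model from subgradient cuts and minimizes it with a small LP at each step.

```python
# Kelley's cutting-plane method: minimize a convex f over a box by
# accumulating lower-bounding cuts t >= f(x_k) + g_k (x - x_k).
import numpy as np
from scipy.optimize import linprog

f  = lambda x: np.abs(x - 0.3) + x**2     # convex, nonsmooth at 0.3
df = lambda x: np.sign(x - 0.3) + 2 * x   # a subgradient of f

lo, hi = -2.0, 2.0
cuts = []                                  # (slope, intercept) pairs
x = -1.5
for _ in range(25):
    cuts.append((df(x), f(x) - df(x) * x))
    # LP in variables (x, t): minimize t subject to every cut and the box.
    A = [[s, -1.0] for s, _ in cuts]       # s*x - t <= -intercept
    b = [-c for _, c in cuts]
    res = linprog([0.0, 1.0], A_ub=A, b_ub=b,
                  bounds=[(lo, hi), (None, None)])
    x = res.x[0]

print("approx minimizer:", x, "value:", f(x))
```

Each cut inscribes the model under the true function, mirroring the inscribing subsets of the atomic set mentioned in the abstract.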

Nonlinearly constrained optimization via sequential regularized linear programming (2010)

This thesis proposes a new active-set method for large-scale nonlinearly constrained optimization. The method solves a sequence of linear programs to generate search directions. The typical approach to globalization is based on damping the search directions with a trust-region constraint; our proposed approach instead uses a 2-norm regularization term in the objective.

Numerical evidence is presented which demonstrates scaling inefficiencies in current sequential linear programming algorithms that use a trust-region constraint. Specifically, we show that the trust-region constraints in the trust-region subproblems significantly reduce the warm-start efficiency of the subproblem solves, and also unnecessarily induce infeasible subproblems. We also show that the use of a regularized linear programming (RLP) step largely eliminates these inefficiencies and, additionally, that the dual problem to the RLP is a bound-constrained least-squares problem, which may allow for very efficient subproblem solves using gradient-projection-type algorithms.

Two new algorithms were implemented and are presented in this thesis, based on solving sequences of RLPs and trust-region constrained LPs. These algorithms are used to demonstrate the effectiveness of each type of subproblem, which we extrapolate onto the effectiveness of an RLP-based algorithm with the addition of Newton-like steps.

All of the source code needed to reproduce the figures and tables presented in this thesis is available online at http://www.cs.ubc.ca/labs/scl/thesis/10crowe/.
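Schematically, the RLP subproblem and the duality the abstract exploits look as follows (g_k = ∇f(x_k), J_k the constraint Jacobian at x_k, ρ > 0 the regularization weight; this is an assumed generic form derived from standard Lagrangian duality, not the thesis's exact statement):

```latex
\begin{align*}
\text{(RLP)}\quad  &\min_{d}\ g_k^{T} d + \tfrac{\rho}{2}\,\|d\|_2^{2}
    \quad\text{s.t.}\quad c(x_k) + J_k d \ge 0,\\
\text{(dual)}\quad &\min_{y \ge 0}\ \tfrac{1}{2\rho}\,\|g_k - J_k^{T} y\|_2^{2}
    + c(x_k)^{T} y,
\end{align*}
```

a bound-constrained least-squares-type problem, which is what makes gradient-projection solvers attractive for the subproblem.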
