Kevin Leyton-Brown

Professor

Research Classification

Research Interests

Artificial Intelligence
Algorithms
Theoretical Computer Science
Resource Allocation
Computer Science and Statistics
Algorithms
Artificial Intelligence
Auction Theory
Game Theory
Machine Learning

Relevant Thesis-Based Degree Programs

 
 

Research Methodology

Machine Learning
Game Theory

Recruitment

Master's students
Doctoral students
2021

Complete these steps before you reach out to a faculty member!

Check requirements
  • Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites.
  • Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step, while others match successful applicants with faculty members within the first year of study. This is indicated either in the program profile under "Admission Information & Requirements" - "Prepare Application" - "Supervision" or on the program website.
Focus your search
  • Identify faculty members who are conducting research in your specific area of interest.
  • Establish that your research interests align with the faculty member’s research interests.
    • Read up on the faculty members in the program and the research being conducted in the department.
    • Familiarize yourself with their work: read their recent publications and past theses/dissertations that they supervised. Be certain that their research is indeed what you are hoping to study.
Make a good impression
  • Compose an error-free and grammatically correct email addressed to your specifically targeted faculty member, and remember to use their correct titles.
    • Do not send non-specific, mass emails to everyone in the department hoping for a match.
    • Address the faculty members by name. Your contact should be genuine rather than generic.
  • Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions. Be sure to craft compelling answers to these questions.
  • Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest.
  • Demonstrate that you are familiar with their research:
    • Convey the specific ways you are a good fit for the program.
    • Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting.
  • Be enthusiastic, but don’t overdo it.
Attend an information session

G+PS regularly provides virtual sessions that focus on admission requirements and procedures, with tips on how to improve your application.

 

ADVICE AND INSIGHTS FROM UBC FACULTY ON REACHING OUT TO SUPERVISORS

These videos contain some general advice from faculty across UBC on finding and reaching out to a potential thesis supervisor.

Graduate Student Supervision

Doctoral Student Supervision

Dissertations completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest dissertations.

Computational tools for complex electronic auctions (2024)

This thesis has two main concerns. The first is infrastructure that allows complex, electronic markets to function, ranging from web applications to highly specialized clearing algorithms. The second is developing computational methods to assess alternative market designs. I describe efforts developing and deploying computational infrastructure in support of markets in two very different domains: subsistence agriculture and radio spectrum allocation. I detail practical experiences (a) running a feature-phone based marketplace for agricultural trade built to match farmers with traders in developing countries, and (b) designing a solver to overcome the computational challenge of station repacking in the recent US "incentive" spectrum auction. I then present a series of three computational methods for evaluating alternative market designs, beginning with a setting where plausible models of bidding behavior are known, then relaxing this assumption and studying single-action and later sequential games.


Architectures and learning algorithms for data-driven decision making (2021)

To design good policy, we need accurate models of how the decision makers that operate within a given system will respond to policy changes. For example, an economist reasoning about the design of an auction needs a model of human behavior in order to predict how changes to the auction design will be reflected in outcomes, and a doctor deciding on treatments needs a model of people's health responses under different treatments to select the best treatment policy. We would like to leverage the accuracy of modern deep learning approaches to estimate these models, but this setting brings two non-standard challenges. First, decision problems often involve reasoning over sets of items, so we need deep networks that reflect this structure. The first part of this thesis develops a deep network layer that reflects this structural assumption, and shows that the resulting layer is maximally expressive among parameter-tying schemes. We then evaluate deep network architectures composed of these layers on a variety of decision problems, from human decision making in a game theory setting to algorithmic decision making on propositional satisfiability problems. The second challenge is that predicting the effect of policy changes involves reasoning about shifts in distribution: any policy change will, by definition, change the conditions under which decision makers operate. This violates the standard machine learning assumption that models will be evaluated under the same conditions as those under which they were trained (the "independent and identically distributed" data assumption). The second part of this thesis shows how we can train deep networks that make valid predictions of the results of such policy interventions, by adapting the classical causal inference method of instrumental variables. Finally, we develop methods that are robust to some violations of the instrumental-variable assumptions in settings with multiple instrumental variables.
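The classical instrumental-variables method mentioned above can be illustrated with a minimal two-stage least squares sketch. This is an illustration only, not code from the thesis; the simulated data, coefficients, and variable names are all invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Simulated data: treatment t is confounded with the outcome noise e,
# but the instrument z affects y only through t.
z = rng.normal(size=n)                              # instrument
e = rng.normal(size=n)                              # unobserved confounder
t = 0.8 * z + 0.6 * e + 0.1 * rng.normal(size=n)    # endogenous treatment
y = 2.0 * t + e                                     # true causal effect is 2.0

X = np.column_stack([np.ones(n), t])
Z = np.column_stack([np.ones(n), z])

# Naive OLS is biased because t is correlated with e.
ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Two-stage least squares: regress t on z, then y on the fitted t.
t_hat = Z @ np.linalg.lstsq(Z, t, rcond=None)[0]
X2 = np.column_stack([np.ones(n), t_hat])
tsls = np.linalg.lstsq(X2, y, rcond=None)[0]

print(ols[1], tsls[1])   # OLS overestimates; 2SLS recovers roughly 2.0
```

The deep-network approach described in the abstract generalizes this two-stage idea by replacing the linear regressions with learned nonlinear models.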


Modeling Human Behavior in Strategic Settings (2016)

Increasingly, electronic interactions between individuals are mediated by specialized algorithms. One might hope to optimize the relevant algorithms for various objectives. An aspect of online platforms that complicates such optimization is that the interactions are often strategic: many agents are involved, all with their own distinct goals and priorities, and the outcomes for each agent depend both on their own actions and on the actions of the other agents. My thesis is that human behavior can be predicted effectively in a wide range of strategic settings by a single model that synthesizes known deviations from economic rationality. In particular, I claim that such a model can predict human behavior better than the standard economic models. Economic mechanisms are currently designed under behavioral assumptions (i.e., full rationality) that are known to be unrealistic. A mechanism designed based on a more accurate model of behavior will be better able to achieve its goal. In the first part of the dissertation, we develop increasingly sophisticated data-driven models to predict human behavior in strategic settings. We begin by applying machine learning techniques to compare many existing models from behavioral game theory on a large body of experimental data. We then construct a new family of models called quantal cognitive hierarchy (QCH), which have even better predictive performance than the best of the existing models. We extend this model with a richer notion of nonstrategic behavior that takes into account features such as fairness, optimism, and pessimism, yielding further performance improvements. Finally, we perform some initial explorations into applying techniques from deep learning in order to automatically learn features of strategic settings that influence human behavior. A major motivation for modeling human strategic behavior is to improve the design of practical mechanisms for real-life settings.
In the second part of the dissertation, we study an applied strategic setting (peer grading), beginning with an analysis of the question of how to optimally apply teaching assistant resources to incentivize students to grade each others' work accurately. We then report empirical results from using a variant of this system in a real-life undergraduate class.
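One building block of quantal models like QCH is the logit quantal response, in which agents choose better actions more often but not deterministically. A minimal sketch (illustrative only; the payoff numbers and function names are invented, and real QCH models add cognitive-hierarchy levels on top of this):

```python
import math

def logit_response(payoffs, opponent_mix, lam=1.0):
    """Logit quantal response: action probabilities proportional to
    exp(lam * expected utility).  Higher lam means more rational play;
    lam -> infinity recovers exact best response, lam = 0 is uniform."""
    eu = [sum(p * u for p, u in zip(opponent_mix, row)) for row in payoffs]
    weights = [math.exp(lam * u) for u in eu]
    total = sum(weights)
    return [w / total for w in weights]

# Row player's payoffs in a 2x2 game; the opponent mixes 50/50.
payoffs = [[3.0, 0.0],   # action 0: expected utility 1.5
           [2.0, 2.0]]   # action 1: expected utility 2.0
print(logit_response(payoffs, [0.5, 0.5], lam=1.0))
```

Action 1 has the higher expected utility, so it receives the larger (but not full) probability mass.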


Performance Modelling and Automated Algorithm Design for NP-hard Problems (2015)

In practical applications, some important classes of problems are NP-complete. Although no worst-case polynomial-time algorithm exists for solving them, state-of-the-art algorithms can solve very large problem instances quickly, and algorithm performance varies significantly across instances. In addition, such algorithms are rather complex and have largely resisted theoretical average-case analysis. Empirical studies are often the only practical means for understanding algorithms' behavior and for comparing their performance. My thesis focuses on two types of research questions. On the science side, the thesis seeks a better understanding of the relations among problem instances, algorithm performance, and algorithm design. I propose many instance features/characteristics based on instance formulation and instance graph representations, as well as progress statistics from running some solvers. With such informative features, I show that solvers' runtime can be predicted by predictive performance models with high accuracy. Perhaps more surprisingly, I demonstrate that the solution of NP-complete decision problems (e.g., whether a given propositional satisfiability problem instance is satisfiable) can also be predicted with high accuracy. On the engineering side, I propose three new automated techniques for achieving state-of-the-art performance in solving NP-complete problems. In particular, I construct portfolio-based algorithm selectors that outperform any single solver on heterogeneous benchmarks. By adopting automated algorithm configuration, our highly parameterized local search solver, SATenstein-LS, achieves state-of-the-art performance across many different types of SAT benchmarks. Finally, I show that portfolio-based algorithm selection and automated algorithm configuration can be combined into an automated portfolio construction procedure.
It requires significantly less domain knowledge, and achieves similar or better performance than portfolio-based selectors built from known high-performance candidate solvers. The experimental results on many solvers and benchmarks demonstrate that the proposed prediction methods achieve high predictive accuracy for predicting algorithm performance as well as predicting solutions, while our automatically constructed solvers are state of the art for solving the propositional satisfiability problem (SAT) and the mixed integer programming problem (MIP). Overall, my research has resulted in more than 8 publications, including the 2010 IJCAI/JAIR best paper award. The portfolio-based algorithm selector, SATzilla, won 17 medals in the international SAT solver competitions from 2007 to 2012.
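The core idea behind portfolio-based algorithm selection, as in SATzilla, can be sketched as: predict each solver's runtime from instance features, then run whichever solver is predicted to be fastest. This toy uses a 1-nearest-neighbour predictor over invented features and runtimes; real systems train regression models on large benchmark sets:

```python
# Invented training data: instance feature vectors paired with measured
# runtimes (in seconds) for each solver in a two-solver portfolio.
train = [
    ((0.1, 0.9), {"solver_a": 2.0, "solver_b": 30.0}),
    ((0.8, 0.2), {"solver_a": 45.0, "solver_b": 3.0}),
    ((0.2, 0.8), {"solver_a": 3.0, "solver_b": 25.0}),
    ((0.9, 0.1), {"solver_a": 50.0, "solver_b": 2.5}),
]

def predict_runtime(features, solver):
    # 1-nearest-neighbour prediction, standing in for the regression
    # models ("empirical performance models") used in real selectors.
    nearest = min(train, key=lambda ex: sum((a - b) ** 2
                                            for a, b in zip(ex[0], features)))
    return nearest[1][solver]

def select_solver(features, solvers=("solver_a", "solver_b")):
    # Run whichever solver has the lowest predicted runtime.
    return min(solvers, key=lambda s: predict_runtime(features, s))

print(select_solver((0.15, 0.85)))  # -> solver_a
```

Because the selector can pick a different solver per instance, it can outperform any single solver on a heterogeneous benchmark.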


The Positronic Economist: A Computational System for Analyzing Economic Mechanisms (2015)

A mechanism is a formal protocol for collective decision making among self-interested agents. Mechanisms model many important social processes, from auctions to elections. They are also widely studied in computer science: the participants in real-world mechanisms are often autonomous software systems (e.g., algorithmic bidding and trading agents), and algorithmic problems (e.g., job scheduling) give rise to mechanisms when users have competing interests. Strategic behavior (or "gaming") poses a major obstacle to understanding mechanisms. Although real-world mechanisms are often fairly simple functions (consider, e.g., plurality voting), a mechanism's outcome depends both on this function and on agents' strategic choices. Game theory provides a principled means for analyzing such choices. Unfortunately, game-theoretic analysis is a difficult process requiring either human effort or very large amounts of computation. My thesis is that mechanism analysis can be done computationally, thanks to recent advances in compact representations of games. Compact representation is possible when a game's description has a suitable independence structure. Exploiting this structure can exponentially decrease the space required to represent games and exponentially decrease the time required to analyze them. The contributions of my thesis revolve around showing that the relevant kinds of structure (specifically, the structure exploited by action-graph games) are present in important mechanisms of interest. Specifically, I studied two major classes of mechanisms: position auctions (as used for internet advertising) and voting rules. In both cases, I was able to provide exponential improvements in the space and time complexity of analysis, and to use those improvements to address important open questions in the literature. I also introduced a new algorithm for analyzing action-graph games, with faster empirical performance and additional benefits over the previous state of the art.
Finally, I created the Positronic Economist system, which consists of a Python-based descriptive language for mechanism games, with automatic discovery of computationally useful structure.


Representing and reasoning with large games (2012)

In the last decade, there has been much research at the interface of computer science and game theory. One important class of problems at this interface is the computation of solution concepts (such as Nash equilibrium or correlated equilibrium) of a finite game. In order to take advantage of the highly structured utility functions in games of practical interest, it is important to design compact representations of games as well as efficient algorithms for computing solution concepts on such representations. In this thesis I present several novel contributions in this direction:
  • The design and analysis of Action-Graph Games (AGGs), a fully expressive modeling language for representing simultaneous-move games. We propose a polynomial-time algorithm for computing expected utilities given arbitrary mixed-strategy profiles, and leverage the algorithm to achieve exponential speedups of existing algorithms for computing Nash equilibria.
  • Efficient algorithms for computing pure-strategy Nash equilibria in AGGs. For symmetric AGGs with bounded treewidth, our algorithm runs in polynomial time.
  • Extensions of the AGG framework beyond simultaneous-move games. We propose Temporal Action-Graph Games (TAGGs) for representing dynamic games and Bayesian Action-Graph Games (BAGGs) for representing Bayesian games. For certain subclasses of TAGGs and BAGGs we give efficient algorithms for computing equilibria that achieve exponential speedups over existing approaches.
  • Efficient computation of correlated equilibria. In a landmark paper, Papadimitriou and Roughgarden described a polynomial-time algorithm ("Ellipsoid Against Hope") for computing sample correlated equilibria of compactly represented games. Recently, Stein, Parrilo and Ozdaglar showed that this algorithm can fail to find an exact correlated equilibrium. We present a variant of the Ellipsoid Against Hope algorithm that guarantees the polynomial-time identification of an exact correlated equilibrium.
  • Efficient computation of optimal correlated equilibria. We show that the polynomial-time solvability of what we call the deviation-adjusted social welfare problem is a sufficient condition for the tractability of the optimal correlated equilibrium problem.
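For context, the expected-utility computation that AGGs accelerate is, in a general normal-form game, an enumeration over all pure action profiles, which grows exponentially with the number of players. A naive sketch of that baseline (illustrative only; the game and function names are invented):

```python
from itertools import product

def expected_utility(utilities, strategies, player):
    # Sum over every pure action profile, weighting each profile by the
    # product of the players' mixed-strategy probabilities.  This naive
    # enumeration is exponential in the number of players -- the cost
    # that compact representations such as AGGs avoid.
    eu = 0.0
    for profile in product(*(range(len(s)) for s in strategies)):
        prob = 1.0
        for i, s in enumerate(strategies):
            prob *= s[profile[i]]
        eu += prob * utilities(profile)[player]
    return eu

# Matching pennies: player 0 gains 1 on a match and loses 1 otherwise.
def u(profile):
    return (1.0, -1.0) if profile[0] == profile[1] else (-1.0, 1.0)

print(expected_utility(u, [[0.5, 0.5], [0.5, 0.5]], 0))  # -> 0.0
```

AGGs exploit independence structure in the utility function to make this computation polynomial-time rather than exponential.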


Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.

Pay to (not) play: monetizing impatience in mobile games (2024)

Mobile gaming is a rapidly growing and incredibly profitable sector; having grown seven-fold over the past 10 years, it now grosses over $100 billion annually. This growth was due in large part to a shift in monetization strategies: rather than charging players an upfront cost ("pay-to-play"), games often request optional microtransactions throughout gameplay ("free-to-play"). We focus on a common scenario in which games include wait times---gating either items or game progression---that players can pay to skip. Game designers typically say that they optimize for player happiness rather than revenue; however, prices for skips are typically set at levels that few players are willing to pay, leading to low purchase rates. Under a traditional analysis, it would seem that game designers fail at their stated goal if few players buy what they are selling. We argue that an alternate model can better explain this dynamic: players value tasks more highly as they are perceived to be more difficult. While skips can increase players' utilities by providing instant gratification, pricing skips too cheaply can lower players' utilities by decreasing the perceived amount of work needed to complete a task. We show that high revenue, high player utility, and low purchase rates can all coexist under this model, particularly under a realistic distribution of players having few buyers but a few big-spending "whales." We also investigate how a game designer should optimize prices under our model.


Allocation for social good: auditing mechanisms for utility maximization (2019)

We consider the problem of a nonprofit organization ("center") that must divide resources among subsidiaries ("agents"), based on agents' reported demand forecasts, with the aim of maximizing social good (agents' valuations for the allocation minus any payments that are imposed on them). We investigate the impact of a common feature of the nonprofit setting: the center's ability to audit agents who receive allocations, comparing their actual consumption with their reported forecasts. We show that auditing increases the power of mechanisms for utility maximization, both in unit-demand settings and beyond. In unit-demand settings, we consider both constraining ourselves to an allocation function studied in past work and allowing the allocation function to vary; beyond unit demand, we adopt the Vickrey-Clarke-Groves (VCG) allocation but modify the payment rule. Our ultimate goal is to show how to leverage auditing mechanisms to maximize utility in repeated allocation problems where payments are not possible; we show how any static auditing mechanism can be transformed to operate in such a setting, using the threat of reduced future allocations in place of monetary payments.
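For reference, the standard VCG mechanism that serves as this work's starting point charges each winner the externality it imposes on the other agents. A brute-force sketch for identical units and unit-demand agents (illustrative only; the thesis modifies the payment rule rather than using this textbook version):

```python
def vcg(valuations, num_units):
    """Brute-force VCG for allocating `num_units` identical units among
    unit-demand agents: allocate to the highest-value agents; each
    winner pays the welfare the others lose because of its presence."""
    n = len(valuations)

    def best_welfare(agents):
        vals = sorted((valuations[i] for i in agents), reverse=True)
        return sum(vals[:num_units])

    ranked = sorted(range(n), key=lambda i: -valuations[i])
    winners = ranked[:num_units]
    total = best_welfare(range(n))
    payments = {w: best_welfare([i for i in range(n) if i != w])
                   - (total - valuations[w])
                for w in winners}
    return winners, payments

print(vcg([10, 8, 5, 3], 2))   # winners [0, 1]; each pays 5
```

With unit demand and identical units, each winner's VCG payment works out to the highest losing valuation, here 5.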


Efficient feasibility checking in reverse clock auctions for radio spectrum (2017)

We investigate the problem of building a feasibility checker to repack stations in the reverse auction part of the FCC's ongoing, multi-billion-dollar "incentive auction". In this work, we describe the design of a feasibility checker, SATFC, that has been adopted by the FCC for use in the incentive auction. We also construct a reverse auction simulator, both in order to evaluate SATFC and to gain insight into how the performance of the feasibility checker impacts the overall cost and efficiency of the auction. Through running simulations that differ only in the feasibility checker used, we show that the feasibility checker has a significant impact on auction cost and efficiency.
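At its core, station repacking asks whether channels can be assigned to stations so that all pairwise interference constraints are satisfied. A brute-force sketch on an invented toy instance (the real FCC problem also forbids adjacent-channel assignments and is encoded as SAT, which SATFC solves with highly tuned solvers):

```python
from itertools import product

# Invented toy instance: each station may use certain channels, and
# interfering pairs of stations must not share a channel.
domains = {"s1": {1, 2}, "s2": {2, 3}, "s3": {1, 3}}
interference = [("s1", "s2"), ("s2", "s3"), ("s1", "s3")]

def repack_feasible(domains, interference):
    # Brute-force feasibility check: try every channel assignment and
    # return the first one satisfying all interference constraints,
    # or None if the instance is infeasible.
    stations = sorted(domains)
    for channels in product(*(sorted(domains[s]) for s in stations)):
        assign = dict(zip(stations, channels))
        if all(assign[a] != assign[b] for a, b in interference):
            return assign
    return None

print(repack_feasible(domains, interference))  # -> {'s1': 1, 's2': 2, 's3': 3}
```

Brute force is exponential in the number of stations, which is why the real auction required a specialized solver to answer feasibility queries within tight time limits.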


Deep learning for predicting human strategic behavior (2016)

Predicting the behavior of human participants in strategic settings is an important problem for applications that rely on game-theoretic reasoning to design mechanisms or allocate resources. Most existing work either assumes that participants are perfectly rational, or attempts to directly model each participant's cognitive processes based on insights from cognitive psychology and experimental economics. In this work, we present an alternative: a deep learning-based approach that automatically performs cognitive modeling without relying on such expert knowledge. We introduce a novel architecture that allows a single network to generalize across normal-form games with varying numbers of actions. We show that the architecture generalizes the most successful existing models and that its performance significantly improves upon that of the previous state of the art, which relies on expert-constructed features.


Auto-WEKA: Combined selection and hyperparameter optimization of supervised machine learning algorithms (2014)

Many different machine learning algorithms exist; taking into account each algorithm's set of hyperparameters, there is a staggeringly large number of possible choices. This project considers the problem of simultaneously selecting a learning algorithm and setting its hyperparameters. Previous work attacks these issues separately, but this problem can be addressed by a fully automated approach, in particular by leveraging recent innovations in Bayesian optimization. The WEKA software package provides an implementation for a number of feature selection and supervised machine learning algorithms, which we use inside our automated tool, Auto-WEKA. Specifically, we examined the 3 search and 8 evaluator methods for feature selection, as well as all of the classification and regression methods, spanning 2 ensemble methods, 10 meta-methods, 27 base algorithms, and their associated hyperparameters. On 34 popular datasets from the UCI repository, the Delve repository, the KDD Cup 09, variants of the MNIST dataset, and CIFAR-10, our method produces classification and regression performance often much better than that obtained using state-of-the-art algorithm selection and hyperparameter optimization methods from the literature. Using this integrated approach, users can more effectively identify not only the best machine learning algorithm, but also the corresponding hyperparameter settings and feature selection methods appropriate for that algorithm, and hence achieve improved performance for their specific classification or regression task.
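The combined-search idea can be sketched with a toy joint space of algorithms and hyperparameters. This illustration uses exhaustive grid search over two invented classifier families and synthetic data; Auto-WEKA instead applies Bayesian optimization (SMAC) to a vastly larger joint space:

```python
import random

random.seed(0)
# Toy 1-D dataset: the label is 1 when x > 0.6, flipped with 5% noise.
xs = [random.random() for _ in range(200)]
data = [(x, int(x > 0.6) ^ int(random.random() < 0.05)) for x in xs]
train, heldout = data[:100], data[100:]

def threshold_clf(t):
    # Family 1: predict 1 above a threshold t.
    return lambda x: int(x > t)

def knn_clf(k):
    # Family 2: k-nearest-neighbour majority vote over the training set.
    def predict(x):
        votes = sorted(train, key=lambda p: abs(p[0] - x))[:k]
        return int(2 * sum(y for _, y in votes) > k)
    return predict

# The joint search space: algorithm choice *and* its hyperparameters.
space = [("threshold", threshold_clf, [i / 10 for i in range(1, 10)]),
         ("knn", knn_clf, [1, 3, 5, 7])]

def accuracy(clf, split):
    return sum(clf(x) == y for x, y in split) / len(split)

best = max(((name, h, accuracy(make(h), heldout))
            for name, make, grid in space for h in grid),
           key=lambda t: t[2])
print(best)
```

Searching the joint space lets the procedure trade off between algorithm families rather than tuning each in isolation, which is the essence of the Auto-WEKA approach.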


Beyond equilibrium: predicting human behaviour in normal form games (2010)

It is standard in multiagent settings to assume that agents will adopt Nash equilibrium strategies. However, studies in experimental economics demonstrate that Nash equilibrium is a poor description of human players' actual behaviour. In this study, we consider a wide range of widely-studied models from behavioural game theory. For what we believe is the first time, we evaluate each of these models in a meta-analysis, taking as our data set large-scale and publicly-available experimental data from the literature. We then propose a modified model that we believe is more suitable for practical prediction of human behaviour.


Publications

 
 
