Michiel Van de Panne

Professor

Research Classification

Computer Science and Statistics
Computer Sciences and Mathematical Tools
Robotics and Automation

Research Interests

simulation of human movement
computer animation
robotics
deep reinforcement learning
motor control
computer graphics

Relevant Degree Programs

 

Recruitment

Complete these steps before you reach out to a faculty member!

Check requirements
  • Familiarize yourself with program requirements. You want to learn as much as possible from the information available to you before you reach out to a faculty member. Be sure to visit the graduate degree program listing and program-specific websites.
  • Check whether the program requires you to seek commitment from a supervisor prior to submitting an application. For some programs this is an essential step while others match successful applicants with faculty members within the first year of study. This is either indicated in the program profile under "Requirements" or on the program website.
Focus your search
  • Identify specific faculty members who are conducting research in your specific area of interest.
  • Establish that your research interests align with the faculty member’s research interests.
    • Read up on the faculty members in the program and the research being conducted in the department.
    • Familiarize yourself with their work: read their recent publications and the past theses/dissertations they have supervised. Be certain that their research is indeed what you are hoping to study.
Make a good impression
  • Compose an error-free and grammatically correct email addressed to the specific faculty member you are targeting, and remember to use their correct title.
    • Do not send non-specific, mass emails to everyone in the department hoping for a match.
    • Address the faculty members by name. Your contact should be genuine rather than generic.
  • Include a brief outline of your academic background, why you are interested in working with the faculty member, and what experience you could bring to the department. The supervision enquiry form guides you with targeted questions. Be sure to craft compelling answers to these questions.
  • Highlight your achievements and why you are a top student. Faculty members receive dozens of requests from prospective students and you may have less than 30 seconds to pique someone’s interest.
  • Demonstrate that you are familiar with their research:
    • Convey the specific ways you are a good fit for the program.
    • Convey the specific ways the program/lab/faculty member is a good fit for the research you are interested in/already conducting.
  • Be enthusiastic, but don’t overdo it.
Attend an information session

G+PS regularly provides virtual sessions that focus on admission requirements and procedures, as well as tips on how to improve your application.

 

Master's students
Doctoral students
Postdoctoral Fellows
Any time / year round

simulation of agile human motion, robotics, motion planning, sensorimotor control, dexterous manipulation, computer animation, machine learning, computer graphics, novel interfaces

I support experiential learning opportunities, such as internships and work placements, for my graduate students and postdocs.

Graduate Student Supervision

Doctoral Student Supervision (Jan 2008 - May 2019)
Topological modeling for vector graphics (2017)

In recent years, with the development of mobile phones, tablets, and web technologies, we have seen an ever-increasing need to generate vector graphics content, that is, resolution-independent images that support sharp rendering across all devices, as well as interactivity and animation. However, the tools and standards currently available to artists for authoring and distributing such vector graphics content have many limitations. Importantly, basic topological modeling, such as the ability to have several faces share a common edge, is largely absent from current vector graphics technologies. In this thesis, we address this issue with three major contributions.

First, we develop theoretical foundations of vector graphics topology, grounded in algebraic topology. More specifically, we introduce the concept of the Point-Curve-Surface complex (PCS complex) as a formal tool that allows us to interpret vector graphics illustrations as non-manifold, non-planar, non-orientable topological spaces immersed in R², unlike planar maps, which can only represent embeddings.

Second, based on this theoretical understanding, we introduce the vector graphics complex (VGC) as a simple data structure that supports fundamental topological modeling operations for vector graphics illustrations. It allows for the direct representation of incidence relationships between objects, while at the same time keeping the geometric flexibility of stacking-based systems, such as the ability to have edges and faces overlap each other.

Third and last, based on the VGC, we introduce the vector animation complex (VAC), a data structure for vector graphics animation, designed to support the modeling of time-continuous topological events, which are common in 2D hand-drawn animation. This allows features of a connected drawing to merge, split, appear, or disappear at desired times via keyframes that introduce the desired topological change. Because the resulting space-time complex directly captures the time-varying topological structure, features are readily edited in both space and time in a way that reflects the intent of the drawing.
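
To make the idea of shared topology concrete, here is a small illustrative sketch (not code from the thesis): a toy cell-complex-style structure in which two faces reference a common edge object, the basic capability that stacking-based vector formats lack. All class and field names are hypothetical.

```python
# Illustrative sketch only: two faces sharing one edge object.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Vertex:
    x: float
    y: float

@dataclass
class Edge:
    start: Vertex
    end: Vertex
    geometry: List[Tuple[float, float]] = field(default_factory=list)  # sampled curve points

@dataclass
class Face:
    boundary: List[Edge]   # cycle of edges; edges may be shared with other faces

# Two triangular faces sharing the edge (v1, v2):
v0, v1, v2, v3 = Vertex(0, 0), Vertex(1, 0), Vertex(1, 1), Vertex(0, 1)
shared = Edge(v1, v2)
f_left = Face([Edge(v0, v1), shared, Edge(v2, v0)])
f_right = Face([shared, Edge(v2, v3), Edge(v3, v1)])
assert shared in f_left.boundary and shared in f_right.boundary
```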

View record

Style exploration and generalization for character animation (2016)

Believable character animation arises from a well-orchestrated performance by a digital character. Various techniques have been developed to help drive this performance in an effort to create believable character animations. However, automatic style exploration and generalization from motion data are still open problems. We tackle several different aspects of the motion generation problem which aim to advance the state of the art in the areas of style exploration and generalization.

First, we describe a novel optimization framework that produces a diverse range of motions for physics-based characters for tasks such as jumps, flips, and walks. This stands in contrast to the more common use of optimization to produce a single optimal motion. The solutions can be optimized to achieve motion diversity or diversity in the proportions of the simulated characters. Exploration of style of task achievement for physics-based character animation can be performed automatically by exploiting "null spaces" defined by the task.

Second, we perform automatic style generalization by generalizing a controller for varying degrees of task achievement for a specified task. We describe an exploratory approach which explores trade-offs between competing objectives for a specified task. Pareto-optimality can be used to explore various degrees of task achievement for a given style of physics-based character animation. We describe our algorithms for computing a set of controllers that span the Pareto-optimal front for jumping motions, which explore the trade-off between effort and jump height. We also develop supernatural jump controllers through the optimized introduction of external forces.

Third, we develop a data-driven approach to model sub-steps, such as sliding foot pivots and foot shuffling. These sub-steps are often an integral component of the style observed in task-specific locomotion. We present a model for generating these sub-steps via a foot-step planning algorithm which is then used to generate full-body motion. The system is able to generalize the style observed in task-specific locomotion to novel scenarios.
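
As a small illustration of one ingredient mentioned above, the sketch below filters a set of candidate controllers down to the Pareto-optimal front over two competing objectives (for example, negated effort versus jump height). The scores are random placeholders, not data from the thesis.

```python
# Illustrative sketch: keep only non-dominated candidates (higher is better on every column).
import numpy as np

def pareto_front(scores):
    """Return indices of Pareto-optimal rows of a (N, K) score array."""
    keep = []
    for i, s in enumerate(scores):
        at_least_as_good = np.all(scores >= s, axis=1)
        strictly_better = np.any(scores > s, axis=1)
        dominated = np.any(at_least_as_good & strictly_better)
        if not dominated:
            keep.append(i)
    return keep

rng = np.random.default_rng(0)
candidates = rng.uniform(size=(50, 2))   # columns: e.g. -effort, jump height
front = pareto_front(candidates)
print(sorted(front))
```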

View record

Real-time planning and control for simulated bipedal locomotion (2011)

Understanding and reproducing the processes that give rise to purposeful human and animal motions has long been of interest in the fields of character animation, robotics, and biomechanics. However, despite the grace and agility with which many living creatures effortlessly perform skilled motions, modeling motor control has proven to be a difficult problem. Building on recent advances, this thesis presents several approaches to creating control policies that allow physically simulated characters to demonstrate skill and purpose as they interact with their virtual environments.

We begin by introducing a synthesis-analysis-synthesis framework that enables physically simulated characters to navigate environments with significant stepping constraints. First, an offline optimization method is used to compute control solutions for randomly generated example problems. Second, the example motions and their underlying control patterns are analyzed to build a low-dimensional step-to-step model of the dynamics. Third, the dynamics model is exploited by a planner to solve new instances of the task in real time. We then present a method for precomputing robust task-based control policies for physically simulated characters. This allows our characters to complete higher-level locomotion tasks, such as walking in a user-specified direction, while interacting with the environment in significant ways. As input, the method assumes an abstract action vocabulary consisting of balance-aware locomotion controllers. A constrained state exploration phase is first used to define a dynamics model as well as a finite volume of character states over which the control policy will be defined. An optimized control policy is then computed using reinforcement learning.

Lastly, we describe a control strategy for walking that generalizes well across gait parameters, motion styles, character proportions, and a variety of skills. The control requires no character-specific or motion-specific tuning, is robust to disturbances, and is simple to compute. The method integrates tracking using proportional-derivative control, foot-placement adjustments using an inverted pendulum model, and Jacobian-transpose control for gravity compensation and fine-level velocity tuning. We demonstrate a variety of walking-related skills, such as picking up objects placed at any height; lifting, pulling, pushing, and walking with heavy crates; stepping over and ducking under obstacles; and climbing stairs.
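
Two of the ingredients named above can be written down very compactly. The sketch below shows standard textbook forms of joint proportional-derivative tracking and an inverted-pendulum (capture-point-style) foot-placement estimate; the gains and dimensions are hypothetical and not taken from the thesis.

```python
# Illustrative sketch of two standard building blocks of walking control.
import numpy as np

def pd_torque(q, qdot, q_target, kp=300.0, kd=30.0):
    """Proportional-derivative tracking torque for a single joint (hypothetical gains)."""
    return kp * (q_target - q) - kd * qdot

def ip_foot_offset(com_vel, com_height=0.9, g=9.81):
    """Capture-point-style foot placement offset for a linear inverted pendulum:
    step further ahead the faster the centre of mass is moving."""
    return com_vel * np.sqrt(com_height / g)

# Example: a character moving forward at 0.5 m/s steps roughly 0.15 m ahead of its CoM.
print(ip_foot_offset(np.array([0.5, 0.0])))
print(pd_torque(q=0.2, qdot=0.1, q_target=0.35))
```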

View record

Master's Student Supervision (2010 - 2018)
Data driven auto-completion for keyframe animation (2018)

Keyframing is the main method used by animators to choreograph appealing motions, but the process is tedious and labor-intensive. In this thesis, we present a data-driven autocompletion method for synthesizing animated motions from input keyframes. Our model uses an autoregressive two-layer recurrent neural network that is conditioned on target keyframes. Given a set of desired keys, the trained model is capable of generating an interpolating motion sequence that follows the style of the examples observed in the training corpus.

We apply our approach to the task of animating a hopping lamp character and produce a rich and varied set of novel hopping motions, using a diverse set of hops from a physics-based model as training data. We discuss the strengths and weaknesses of this type of approach in some detail.
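
For readers curious what such a model can look like in code, here is a minimal sketch (not the thesis implementation) of an autoregressive two-layer recurrent network conditioned on a target keyframe, written in PyTorch with hypothetical pose dimensions.

```python
# Illustrative sketch: keyframe-conditioned autoregressive pose prediction.
import torch
import torch.nn as nn

class KeyConditionedRNN(nn.Module):
    def __init__(self, pose_dim=6, hidden_dim=128):
        super().__init__()
        # Input per step: previous pose, target keyframe pose, time remaining to the key.
        self.rnn = nn.LSTM(pose_dim * 2 + 1, hidden_dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, pose_dim)

    def forward(self, prev_pose, target_key, time_to_key, state=None):
        x = torch.cat([prev_pose, target_key, time_to_key], dim=-1)
        h, state = self.rnn(x, state)
        return self.out(h), state          # predicted next pose and recurrent state

# Autoregressive rollout between two keys (batch=1, one step at a time).
model = KeyConditionedRNN()
pose, key, state, frames = torch.zeros(1, 1, 6), torch.ones(1, 1, 6), None, []
for t in range(30, 0, -1):
    pose, state = model(pose, key, torch.full((1, 1, 1), float(t)), state)
    frames.append(pose)
```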

View record

Developing locomotion skills with deep reinforcement learning (2017)

While physics-based models for passive phenomena such as cloth and fluids have been widely adopted in computer animation, physics-based character simulation remains a challenging problem. One of the major hurdles for character simulation is that of control: the modeling of a character's behaviour in response to its goals and environment. This challenge is further compounded by the high-dimensional and complex dynamics that often arise from these systems. A popular approach to mitigating these challenges is to build reduced models that capture important properties for a particular task. These models often leverage significant human insight, and may nonetheless overlook important information. In this thesis, we explore the application of deep reinforcement learning (DeepRL) to develop control policies that operate directly on high-dimensional, low-level representations, thereby reducing the need for manual feature engineering and enabling characters to perform more challenging tasks in complex environments.

We start by presenting a DeepRL framework for developing policies that allow characters to agilely traverse irregular terrain. The policies are represented using a mixture-of-experts model, which selects from a small collection of parameterized controllers. Our method is demonstrated on planar characters of varying morphologies and different classes of terrain. Through the learning process, the networks develop the appropriate strategies for traveling across various irregular environments without requiring extensive feature engineering. Next, we explore the effects of different action parameterizations on the performance of RL policies. We compare policies trained using different action parameterizations, such as torques, target velocities, target angles, and muscle activations. Performance is evaluated using a motion imitation benchmark. For our particular task, the choice of higher-level actions that incorporate local feedback, such as target angles, leads to significant improvements in performance and learning speed. Finally, we describe a hierarchical reinforcement learning framework for controlling the motion of a simulated 3D biped. By training each level of the hierarchy to operate at different spatial and temporal scales, the character is able to perform a variety of locomotion tasks that require a balance between short-term and long-term planning. Some of the tasks include soccer dribbling, path following, and navigation across dynamic obstacles.
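
As a rough illustration of the mixture-of-experts idea described above, the sketch below gates over a small set of expert action outputs given character-state and terrain features. It is shown here as a soft blend for simplicity, whereas the work above selects among controllers; all dimensions are hypothetical.

```python
# Illustrative sketch: a mixture-of-experts policy head in PyTorch.
import torch
import torch.nn as nn

class MixtureOfExpertsPolicy(nn.Module):
    def __init__(self, state_dim=83, terrain_dim=200, n_experts=4, action_dim=29):
        super().__init__()
        feat = 256
        self.encoder = nn.Sequential(
            nn.Linear(state_dim + terrain_dim, feat), nn.ReLU(),
            nn.Linear(feat, feat), nn.ReLU())
        self.gate = nn.Linear(feat, n_experts)                    # how much to trust each expert
        self.experts = nn.Linear(feat, n_experts * action_dim)    # each expert's action proposal
        self.n_experts, self.action_dim = n_experts, action_dim

    def forward(self, state, terrain):
        h = self.encoder(torch.cat([state, terrain], dim=-1))
        w = torch.softmax(self.gate(h), dim=-1)                          # (B, E)
        a = self.experts(h).view(-1, self.n_experts, self.action_dim)    # (B, E, A)
        return (w.unsqueeze(-1) * a).sum(dim=1)                          # blended action (B, A)

state, terrain = torch.randn(1, 83), torch.randn(1, 200)
action = MixtureOfExpertsPolicy()(state, terrain)   # shape (1, 29)
```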

View record

Embodied perception during walking using Deep Recurrent Neural Networks (2017)

Movements such as walking require knowledge of the environment in order to be robust. This knowledge can be gleaned via embodied perception. While information about the upcoming terrain such as compliance, friction, or slope may be difficult to directly estimate, using the walking motion itself allows for these properties to be implicitly observed over time from the stream of movement data. However, the relationship between a parameter such as ground compliance and the movement data may be complex and difficult to discover. In this thesis, we demonstrate the use of a Deep LSTM Network to estimate slope and ground compliance of terrain by observing a stream of sensory information that includes the character state and foot pressure information.
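
A bare-bones version of such an estimator might look like the following sketch (hypothetical feature sizes, PyTorch, not the thesis model): an LSTM consumes the stream of character-state and foot-pressure samples and regresses the terrain parameters at every time step.

```python
# Illustrative sketch: sequence-to-parameter terrain estimation with an LSTM.
import torch
import torch.nn as nn

class TerrainEstimator(nn.Module):
    def __init__(self, sensor_dim=60, hidden_dim=128, n_params=2):
        super().__init__()
        self.lstm = nn.LSTM(sensor_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, n_params)     # e.g. slope, ground compliance

    def forward(self, sensor_stream):                   # (batch, time, sensor_dim)
        h, _ = self.lstm(sensor_stream)
        return self.head(h)                             # per-step estimates (batch, time, n_params)

estimates = TerrainEstimator()(torch.randn(1, 200, 60))
```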

View record

Design and integration of controllers for simulated characters (2015)

Developing motions for simulated humanoids remains a challenging problem. While there exists a multitude of approaches, few of these are reimplemented or reused by others. The predominant focus of papers in the area remains on algorithmic novelty, due to the difficulty of, and lack of incentive for, more fully exploring what can be accomplished within the scope of existing methodologies. We develop a language, based on common features found across physics-based character animation research, that facilitates the controller authoring process. By specifying motion primitives over a number of phases, our language has been used to design over 25 controllers for motions ranging from simple static balanced poses to highly dynamic stunts. Controller sequencing is supported in two ways. Naive integration of controllers is achieved by using highly stable pose controllers (such as standing or squatting) as intermediate transitions. More complex controller connections are automatically learned through an optimization process. The robustness of our system is demonstrated via random walkthroughs of our integrated set of controllers.

View record

Design and optimization of control primitives for simulated characters (2014)

Physics-based character motion has the potential to achieve realistic motions without laborious work from artists and without needing to use motion capture data. It has potential applications in film, games, and humanoid robotics. However, designing a controller for physics-based motions is a difficult task. It requires expertise in software engineering and an understanding of control methods. Researchers typically develop their own dedicated software frameworks and invent their own sets of control rules to control physics-based characters. This creates an impediment to the non-expert who wants to create interesting motions and to others who want to share and revise motions. In this thesis, we demonstrate that a set of motion primitives that have been developed in recent years constitute effective building blocks for authoring physics-based character motions. These motion primitives are made accessible using an expressive and flexible motion scripting language. The motion language allows a motion designer to create controllers in a text file that can be loaded at runtime. This is intended to simplify motion design, debugging, understanding, and sharing. We use this framework to create several interesting 2D planar motions. An optimization framework is integrated that allows the hand-designed motion controller to be optimized for more interesting behaviors, such as a fast prone-to-standing motion.

We also develop a state-action compatibility model for adapting controllers to new situations. The state-action compatibility model maintains a hypervolume of compatible states ("situations") and actions (controllers). It allows queries for compatible actions given a state.
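
To illustrate what a phase-based controller description can look like when expressed as data (a hypothetical schema, not the scripting language from the thesis), here is a small sketch of a jump controller with per-phase joint targets and an optional virtual force, plus a helper that looks up the active timed phase.

```python
# Illustrative sketch: a phase-based controller described as plain data.
jump_controller = {
    "name": "standing_jump",
    "phases": [
        {"name": "crouch", "duration": 0.3,
         "targets": {"hip": -0.8, "knee": 1.6, "ankle": -0.6}},
        {"name": "push", "duration": 0.15,
         "targets": {"hip": 0.2, "knee": 0.1, "ankle": 0.5},
         "virtual_force": {"body": "trunk", "force": [0.0, 600.0]}},
        {"name": "flight", "until": "contact",
         "targets": {"hip": -0.4, "knee": 0.9, "ankle": 0.0}},
        {"name": "land", "duration": 0.4,
         "targets": {"hip": -0.3, "knee": 0.6, "ankle": -0.2}},
    ],
}

def phase_at(controller, t):
    """Return the active timed phase at time t (event-driven phases handled elsewhere)."""
    elapsed = 0.0
    for phase in controller["phases"]:
        elapsed += phase.get("duration", float("inf"))
        if t < elapsed:
            return phase
    return controller["phases"][-1]

print(phase_at(jump_controller, 0.35)["name"])   # -> "push"
```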

View record

Exploring structured predictions from sensorimotor data during non-prehensile manipulation using both simulations and robots (2014)

Robots are equipped with an increasingly wide array of sensors in order to enable advanced sensorimotor capabilities. However, the efficient exploitation of the resulting data streams remains an open problem. We present a framework for learning when and where to attend in a sensorimotor stream in order to estimate specific task properties, such as the mass of an object. We also identify the qualitative similarity of this ability between simulation and a robotic system. The framework is evaluated for a non-prehensile "topple-and-slide" task, where the data from a set of sensorimotor streams are used to predict task properties such as the mass, friction coefficient, and compliance of the block being manipulated. Given the collected data streams for situations where the block properties are known, the method combines the use of variance-based feature selection and partial least-squares estimation in order to build a robust predictive model for the block properties. This model can then be used to make accurate predictions during a new manipulation. We demonstrate results for both simulation and a robotic system using up to 110 sensorimotor data streams, which include joint torques, wrist forces/torques, and tactile information. The results show that task properties such as object mass, friction coefficient, and compliance can be estimated with good accuracy from the sensorimotor streams observed during a manipulation.

View record

Real-time predictions from unlabeled high-dimensional sensory data during non-prehensile manipulation (2014)

Robots can be readily equipped with sensors that span a growing range of modalities and price-points. However, as sensors increase in number and variety, making the best use of the rich multi-modal sensory streams becomes increasingly challenging. In this thesis, we demonstrate the ability to make efficient and accurate task-relevant predictions from unlabeled streams of sensory data for a non-prehensile manipulation task. Specifically, we address the problem of making real-time predictions of the mass, friction coefficient, and compliance of a block during a topple-slide task, using an unlabeled mix of 1650 features composed of pose, velocity, force, torque, and tactile sensor data samples taken during the motion. Our framework employs a partial least squares (PLS) estimator as computed based on training data. Importantly, we show that the PLS predictions can be made significantly more accurate and robust to noise with the use of a feature selection heuristic, the task variance ratio, while using as few as 5% of the original sensory features. This aggressive feature selection further allows for reduced bandwidth when streaming sensory data and reduced computational costs of the predictions. We also demonstrate the ability to make online predictions based on the sensory information received to date. We compare PLS to other regression methods, such as principal components regression. Our methods are tested on a WAM manipulator equipped with either a spherical probe or a BarrettHand with arrays of tactile sensors.
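
The core of this kind of prediction pipeline can be sketched in a few lines. The example below uses synthetic data, and the exact definition of the task variance ratio is an assumed form rather than the one from the thesis: it scores each feature by the variance of its per-task mean relative to its within-task variance, keeps a small fraction of high-ratio features, and fits a partial least squares regressor with scikit-learn.

```python
# Illustrative sketch: variance-ratio feature selection + PLS regression (synthetic data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1650))               # trials x sensorimotor features
Y = rng.normal(size=(120, 3))                  # trials x (mass, friction, compliance)
labels = np.repeat(np.arange(8), 15)           # which block each trial used

# Assumed form of a task variance ratio: across-task variance of each feature's
# per-task mean, divided by its average within-task variance. High ratio = informative.
task_means = np.stack([X[labels == k].mean(axis=0) for k in range(8)])
within_var = np.stack([X[labels == k].var(axis=0) for k in range(8)]).mean(axis=0)
ratio = task_means.var(axis=0) / (within_var + 1e-9)

keep = np.argsort(ratio)[-int(0.05 * X.shape[1]):]     # keep top ~5% of features
pls = PLSRegression(n_components=10).fit(X[:, keep], Y)
predictions = pls.predict(X[:, keep])                  # (120, 3) estimates
```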

View record

Reinforcement learning using sensorimotor traces (2014)

The skilled motions of humans and animals are the result of learning good solutions to difficult sensorimotor control problems. This thesis explores new models for using reinforcement learning to acquire motion skills, with potential applications to computer animation and robotics. Reinforcement learning offers a principled methodology for tackling control problems. However, it is difficult to apply in high-dimensional settings, such as the ones that we wish to explore, where the body can have many degrees of freedom, the environment can have significant complexity, and there can be further redundancies that exist in the sensory representations that are available to perceive the state of the body and the environment. In this context, challenges to overcome include: a state space that cannot be fully explored; the need to model how the state of the body and the perceived state of the environment evolve together over time; and solutions that can work with only a small number of sensorimotor experiences.

Our contribution is a reinforcement learning method that implicitly represents the current state of the body and the environment using sensorimotor traces. A distance metric is defined between the ongoing sensorimotor trace and previously experienced sensorimotor traces, and this is used to model the current state as a weighted mixture of past experiences. Sensorimotor traces play multiple roles in our method: they provide an embodied representation of the state (and therefore also the value function and the optimal actions), and they provide an embodied model of the system dynamics.

In our implementation, we focus specifically on learning steering behaviors for a vehicle driving along straight roads, winding roads, and through intersections. The vehicle is equipped with a set of distance sensors. We apply value iteration using off-policy experiences in order to produce control policies capable of steering the vehicle in a wide range of circumstances. An experimental analysis is provided of the effect of various design choices.

In the future we expect that similar ideas can be applied to other high-dimensional systems, such as bipedal systems that are capable of walking over variable terrain, also driven by control policies based on sensorimotor traces.
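
One way to picture the core mechanism is the following sketch (an assumed form with synthetic data, not the thesis implementation): the current sensorimotor window is compared to stored traces, and a value estimate is read off as a distance-weighted mixture of the values attached to those past experiences.

```python
# Illustrative sketch: distance-weighted value estimate over stored sensorimotor traces.
import numpy as np

def trace_distance(trace_a, trace_b):
    """Euclidean distance between two fixed-length windows of sensor readings."""
    return np.linalg.norm(trace_a - trace_b)

def value_estimate(current_trace, stored_traces, stored_values, bandwidth=1.0):
    d = np.array([trace_distance(current_trace, t) for t in stored_traces])
    w = np.exp(-(d / bandwidth) ** 2)        # closer experiences get more weight
    w = w / (w.sum() + 1e-12)
    return float(w @ stored_values)          # value of the implicitly represented state

# Example with hypothetical data: 200 stored traces, each a 10-step window of
# 8 distance-sensor readings, flattened to length 80.
rng = np.random.default_rng(1)
stored = rng.normal(size=(200, 80))
values = rng.normal(size=200)
print(value_estimate(rng.normal(size=80), stored, values))
```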

View record

Modeling standing, walking and rolling skills for physics-based character animation (2013)

Physics-based character simulation is an important open problem, with potential applications in robotics, biomechanics, and computer animation for films and games. In this thesis we develop controllers for the real-time simulation of several motion skills, including standing balance, walking, forward rolling, and lateral rolling on the ground. These controllers are constructed from a common set of components. We demonstrate that the combination of a suitable vocabulary of components and optimization has the potential to model a variety of skills.

View record

Improvisational interfaces for visualization construction and scalar function sketching (2012)

Presentations are an important aspect of daily communication in most organizations. As sketch- and gesture-capable interfaces such as tablets and smart boards become increasingly common, they open up new possibilities for interacting with presentations. This thesis explores two new interface prototypes to improve upon otherwise tedious presentation tasks, such as demonstrating models based on scalar functions and visualizing data. We combine a spreadsheet-style interface with the sketching of scalar mathematical functions to develop and demonstrate intuitive mathematical models without the need for coding or complex equations. We also explore sketch- and gesture-based creation of data visualizations.

View record

The animation canvas: a sketch-based visual language for motion editing (2012)

We propose the Animation Canvas, a system for working with character animation. The canvas is an interactive two-dimensional environment similar to a sketch editor. Abstract interaction modes and controls are provided to support editing tasks. Consistent motion-as-curve and pose-as-point metaphors unify different features of the system. The metaphors and interactive elements of the system define a visual language allowing users to explore, manipulate, and create motions.

The canvas also serves as a framework for presenting interactive motion editing techniques. We have developed two techniques in order to explore possibilities for motion editing while demonstrating the flexibility of the system. The first technique is a method for interacting with motion graphs in order to explore motion connectivity and construct new blended motions from shorter clips. The second is a real-time spatial interpolation system that enables users to construct new motions or control an animated character.

View record

Learning reduced order linear feedback policies for motion skills (2011)

Skilled character motions need to adapt to their circumstances and this is typically accomplished with the use of feedback. However, good feedback strategies are difficult to author and this has been a major stumbling block in the development of physics-based animated characters. In this thesis we present a framework for the automated design of compact linear feedback strategies. We show that this can be an effective substitute for manually-designed abstract models such as the use of inverted pendulums for the control of simulated walking. Results are demonstrated for a variety of motion skills, including balancing, hopping, ball kicking, single-ball juggling, ball volleying, and bipedal walking. The framework uses policy search in the space of reduced-order linear feedback matrices as a means of developing an optimized linear feedback strategy. The generality of the method allows for the automated development of highly-effective unconventional feedback loops, such as the use of foot pressure feedback to achieve robust physics-based bipedal walking.
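
The structure being searched over is easy to show in miniature. The sketch below uses toy linear dynamics and plain random search as stand-ins for the actual physics simulation and policy search; it optimizes the entries of a small feedback matrix M that maps a few reduced features to action offsets.

```python
# Illustrative sketch: searching over a reduced-order linear feedback matrix.
import numpy as np

rng = np.random.default_rng(2)
n_features, n_actions = 4, 3
A = 0.95 * np.eye(n_features)                              # toy passive dynamics
B = rng.normal(scale=0.1, size=(n_features, n_actions))    # toy effect of actions on features

def rollout_return(M, steps=50):
    """Toy stand-in for a physics rollout: reward for driving the features toward zero."""
    s, total = rng.normal(size=n_features), 0.0
    for _ in range(steps):
        a = M @ s                                          # linear feedback on reduced features
        s = A @ s + B @ a + 0.01 * rng.normal(size=n_features)
        total -= float(s @ s)
    return total

# Simple random search over feedback matrices (a stand-in for a real policy search).
best_M, best_R = np.zeros((n_actions, n_features)), -np.inf
for _ in range(200):
    M = best_M + 0.1 * rng.normal(size=best_M.shape)
    R = np.mean([rollout_return(M) for _ in range(3)])
    if R > best_R:
        best_M, best_R = M, R
print(best_R)
```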

View record

Physics-based animation of primate locomotion (2011)

Quadrupedal animals commonly appear in films and video games, and their locomotion is of interest to several research and industrial communities. Because of the difficulty of handling and motion capturing these animals, physics-based animation is a promising method for synthesizing quadrupedal locomotion. In this thesis, we investigate control strategies for animating a gorilla model, as an example of primate quadrupeds.

We review the state of the art in quadrupedal animation and robotics, and in particular a control framework designed for a simulated dog. We investigate the essential control strategy modifications necessitated by the unique characteristics of gorilla morphology and locomotion style. We generate controllers for physically realistic walking and trotting gaits for a 3D gorilla model. We also rig a 3D mesh model of a gorilla in Maya, a commercial animation package. Gorilla gait motions are synthesized in our simulation using the rigged skeleton, and synthesized gaits are exported through a motion data pipeline back to Maya for rendering.

View record

Rising motion controllers for physically simulated characters (2011)

The control of physics-based simulated characters is an important open problem with potential applications in film, games, robotics, and biomechanics. While many methods have been developed for locomotion and quiescent stance, the problem of returning to a standing posture from a sitting or fallen posture has received much less attention. In this thesis, we develop controllers for biped sit-to-stand, quadruped getting-up, and biped prone-to-stand motions. These controllers are created from a shared set of simple components including pose tracking, root orientation correction, and virtual force based control. We also develop an optimization strategy that generates fast, dynamic rising motions from an initial statically stable motion. This strategy is also used to generalize controllers to sloped terrain and characters of varying size.

View record

Staged learning of agile motor skills (2011)

Motor learning lies at the heart of how humans and animals acquire their skills. Understanding this process offers many benefits in robotics, physics-based computer animation, and other areas of science and engineering. In this thesis, we develop a computational framework for the learning of agile, integrated motor skills. Our algorithm draws inspiration from the process by which humans and animals acquire their skills in nature. Specifically, all skills are learned through a process of staged, incremental learning, during which progressively more complex skills are acquired and subsequently integrated with prior abilities.

Accordingly, our learning algorithm comprises three phases. In the first phase, a few seed motions that accomplish the goals of a skill are acquired. In the second phase, additional motions are collected through active exploration. Finally, the third phase generalizes from observations made in the second phase to yield a dynamics model that is relevant to the goals of the skill. We apply our learning algorithm to a simple, planar character in a physical simulation and learn a variety of integrated skills such as hopping, flipping, rolling, stopping, getting up, and continuous acrobatic maneuvers. Aspects of each skill, such as the length, height, and speed of the motion, can be interactively controlled through a user interface. Furthermore, we show that the algorithm can be used without modification to learn all skills for a whole family of parameterized characters of similar structure. Finally, we demonstrate that our approach also scales to a more complex quadruped character.

View record

Template-based sketch recognition using Hidden Markov Models (2011)

Sketch recognition is the process by which the objects in a hand-drawn diagram can be recognized and identified. We provide a method to recognize objects in sketches by casting the problem in terms of searching for known 2D template shapes in the sketch. The template is defined as an ordered polyline, and the recognition requires searching for a similarly shaped sequential path through the line segments that comprise the sketch. The search for the best-matching path can be modeled using a Hidden Markov Model (HMM). We use an efficient dynamic programming method to evaluate the HMM, with further optimizations based on the use of hand-drawn sketches.

The technique we developed can cope with several issues that are common to sketches, such as small gaps and branching. We allow for objects with either open or closed boundaries by allowing backtracking over the templates. We demonstrate the algorithm for a variety of templates and scanned drawings. We show that a likelihood score produced by the results can provide a meaningful measure of similarity to a template. An example-based method is presented for setting a meaningful recognition threshold, which can allow further refinement of results when that template is used again. Limitations of the algorithm and directions for future work are discussed.
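
The dynamic program at the heart of HMM evaluation is the standard Viterbi recursion, sketched below with placeholder emission and transition scores; the sketch-specific optimizations from the thesis are not reproduced here.

```python
# Illustrative sketch: generic Viterbi decoding over template states.
import numpy as np

def viterbi(log_emit, log_trans, log_init):
    """log_emit: (T, S) per-step state scores; log_trans: (S, S); log_init: (S,)."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans        # (S_prev, S_next)
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + log_emit[t]
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1], float(score.max())        # best state path and its log-likelihood

# Example: 6 sketch segments matched against a 4-segment template (random scores).
rng = np.random.default_rng(3)
path, loglik = viterbi(rng.normal(size=(6, 4)),
                       np.log(np.full((4, 4), 0.25)),
                       np.log(np.full(4, 0.25)))
print(path, loglik)
```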

View record

 
 
