Robert Xiao

Assistant Professor

Research Interests

Human-computer interaction
Virtual/augmented reality

Graduate Student Supervision

Master's Student Supervision (2010 - 2021)
Designing familiar augmented and virtual reality environments and interactions through off-the-shelf real-world solutions (2021)

Immersive augmented reality systems allow users to superimpose digital content on their physical environments, while immersive virtual reality systems simulate virtual spaces that draw on users' knowledge of the physical world. However, in such novel interactive systems, user unfamiliarity can break the illusion of seamless immersion and hinder adoption. We propose to bridge this gap by introducing familiar interactions and environments into immersive augmented and virtual reality.

First, to explore familiar interactions, we propose using smartphones, which are familiar and readily available, as companion devices complementing existing interaction modalities for augmented reality head-mounted displays. Current-generation devices primarily support interaction modalities such as gesture, gaze and voice, which are less familiar, lack precision and tactility, and are fatiguing for extended interactions. We leverage user familiarity with smartphone interactions, coupled with their support for precise, tactile touch input, to unlock a broad range of interaction techniques. For instance, by combining the phone's spatial position and orientation with its high-resolution touchscreen within an augmented reality environment, we transform the ordinary smartphone into an interior design palette, a rapier or a bow. We describe a prototype implementation and report on the results of a positional accuracy study.

Second, we believe that bringing familiar real-world environments into virtual spaces can help create immersive and interactive virtual reality experiences. To explore this, we chose the real-world library as an expedient pilot and introduce the Library of Apollo, a virtual reality experience that aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries. An online deployment of our system, together with user surveys and collected session data, showed a strong preference for our system for book discovery, with 41 of 45 users saying they would be positively inclined to recommend it to a bookish friend.
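The phone-as-controller idea described above can be illustrated with a small sketch: mapping a 2D touch point to a 3D position using the phone's tracked pose. This is a hypothetical example, not the thesis implementation; the screen dimensions, function names, and the convention that the rotation matrix's columns are the screen's right, up, and outward axes are all assumptions.

```python
# Hypothetical sketch: projecting a touch point into world space
# using the phone's tracked pose. Constants are assumed values.
import numpy as np

PHONE_WIDTH_M = 0.07   # assumed physical screen width (metres)
PHONE_HEIGHT_M = 0.15  # assumed physical screen height (metres)

def touch_to_world(touch_uv, phone_position, phone_rotation):
    """Map a normalized touch coordinate (u, v) in [0, 1]^2 to a 3D
    point on the phone's screen plane in world coordinates.
    `phone_rotation` is a 3x3 matrix whose columns are the screen's
    right, up, and outward-normal axes; `phone_position` is the
    world-space screen centre."""
    u, v = touch_uv
    # Offset from the screen centre, in the phone's local frame.
    local = np.array([(u - 0.5) * PHONE_WIDTH_M,
                      (v - 0.5) * PHONE_HEIGHT_M,
                      0.0])
    return phone_position + phone_rotation @ local
```

Interaction techniques such as the bow or rapier could then be built by combining points like this across successive frames, though the thesis does not specify such details.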


Learned acoustic reconstruction using synthetic aperture focusing (2021)

Navigating and sensing the world through echolocation in air is an innate ability in many animals, for which analogous human technologies remain rudimentary. Many engineered approaches to acoustic reconstruction have been devised, but these typically require unwieldy equipment and a lengthy measurement process, and are largely not applicable in air or in everyday human environments. Recent learning-based approaches to single-emission in-air acoustic reconstruction use simplified hardware and an experimentally-acquired dataset of echoes and the geometry that produced them to train models to predict novel geometry from similar but previously-unheard echoes. However, these learned approaches use spatially-dense representations and attempt to predict an entire scene all at once. Doing so requires a tremendous abundance of training examples in order to learn a model that generalizes, which leaves these techniques vulnerable to over-fitting.

We introduce an implicit representation for learned in-air acoustic reconstruction inspired by synthetic aperture focusing techniques. Our method trains a neural network to relate the coherency of multiple spatially-separated echo signals, after accounting for the expected time-of-flight along a straight-line path, to the presence or absence of an acoustically reflective object at any sampling location. Additionally, we use signed distance fields to represent geometric predictions, which provide a better-behaved training signal and allow for efficient 3D rendering. Using acoustic wave simulation, we show that our method yields better generalization and behaves more intuitively than competing methods while requiring only a small fraction of the training data.
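The classical synthetic-aperture idea that inspires this work can be sketched as delay-and-sum focusing: for a candidate point, sum each echo at the sample index matching the expected round-trip time-of-flight, so that echoes from a real reflector align and reinforce while others cancel. This is a minimal illustration under assumed constants (speed of sound, sample rate, co-located emitter and receiver), not the learned method the thesis contributes.

```python
# Minimal delay-and-sum synthetic aperture focusing sketch.
# Assumed setup: each sensor is a co-located emitter/receiver.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s in air
SAMPLE_RATE = 192_000    # Hz, assumed recording rate

def focus_at(point, sensor_positions, echoes):
    """Sum each echo at the sample corresponding to the round-trip
    time-of-flight from its sensor to `point`. Coherent (aligned)
    echoes reinforce; incoherent ones tend to cancel."""
    total = 0.0
    for pos, echo in zip(sensor_positions, echoes):
        dist = 2.0 * np.linalg.norm(point - pos)        # round trip, metres
        idx = int(round(dist / SPEED_OF_SOUND * SAMPLE_RATE))
        if idx < len(echo):
            total += echo[idx]
    return total
```

The thesis replaces the fixed coherency rule in the summation with a learned one, and evaluates it at arbitrary sampling locations as an implicit representation.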


Rapid mold prototyping: creating complex 3D castables from 2D cuts (2020)

Designers, makers, and artists prototype physical products by iteratively ideating, modeling, and realizing them in a fast, exploratory manner. A popular method of bringing 3D designs to life is casting: pouring a material into a mold so that, once the material sets, the target object is created. Currently, the process of turning a digital design into a tangible product can be difficult. One reason for this is that building the mold - for example by 3D printing it - can take hours, slowing down the prototyping process. This can be particularly true when prototyping molds for casting interactive (sensate and actuated) or geometrically complex (curvy) objects.

To this end, we developed two mold-making techniques intended to address different, complementary needs in rapid prototyping. The first technique, Silicone I/O, is a making method based on Computer Numerical Control (CNC) that enables the molding of sensate, actuated silicone devices. This method uses stacked laser-cut slices of wood bound together with molten wax to create cheap, accessible, single-use molds that are quick and easy to assemble. Silicone I/O devices are pneumatically actuated through air channels created by lost-wax casting, and made sensate by mixing carbon fibre with silicone. The second technique, FoldMold, allows curvy molds to be rapidly built out of paper and wax. This approach is based on “unfolding” a 3D object, cutting the 2D layout, and using papercraft techniques to reassemble the mold.

Beyond the physical challenges of rapid mold-making, digitally designing mold patterns from 3D objects poses a bottleneck in the design process. We contribute the FoldMold Blender Add-on, a computational tool that turns 3D positives into CNC-ready papercraft mold patterns.

This thesis contributes two broad approaches to increasing speed in mold prototyping. The first is creating flat, laser-cuttable mold patterns, significantly speeding up mold fabrication. The second is automating mold design, off-loading much of the tedious design work to software that helps a maker design a mold quickly.
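The core geometric step of “unfolding” can be illustrated with a toy example: given one triangle already laid out flat, place the apex of an adjacent triangle in 2D so that its edge lengths are preserved. This is a hypothetical helper sketching the general technique, not the FoldMold Blender Add-on's code.

```python
# Toy mesh-unfolding step: place the apex of an adjacent triangle
# in 2D, preserving its distances d1, d2 to the shared edge's endpoints.
import numpy as np

def unfold_apex(q1, q2, q_opposite, d1, d2):
    """Return a 2D point at distance d1 from q1 and d2 from q2,
    chosen on the side of edge q1-q2 opposite q_opposite, so the
    newly unfolded triangle does not overlap the placed one."""
    e = q2 - q1
    L = np.linalg.norm(e)
    ex = e / L                      # unit vector along the shared edge
    ey = np.array([-ex[1], ex[0]])  # unit normal to the edge
    # Circle-circle intersection in edge-aligned coordinates.
    x = (d1**2 - d2**2 + L**2) / (2 * L)
    y = np.sqrt(max(d1**2 - x**2, 0.0))
    candidate = q1 + x * ex + y * ey
    # Flip across the edge if we landed on the same side as q_opposite.
    if np.dot(candidate - q1, ey) * np.dot(q_opposite - q1, ey) > 0:
        candidate = q1 + x * ex - y * ey
    return candidate
```

Repeating this across a mesh's face-adjacency graph flattens a 3D surface into a 2D cut pattern; the add-on additionally has to handle overlaps and papercraft joinery, which this sketch omits.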



Membership Status

Member of G+PS


