Robert Xiao

Assistant Professor

Research Interests

Human-computer interaction
Virtual/augmented reality

Graduate Student Supervision

Master's Student Supervision

Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.

Practical ad hoc tangible interactions in augmented reality (2024)

Augmented Reality (AR) has received significant attention in recent years for its ability to create enhanced immersive user experiences. However, the most common interaction methods for this technology remain largely limited to in-air gesture, voice, and gaze controls. Prior work has shown that adding haptic feedback through interaction with real objects can improve users' performance and reduce their workload, but these approaches are not yet mature enough for adoption in commercial products. This thesis addresses this challenge, offering novel solutions to make tangible interactions in AR more practical and effective.

We begin by addressing touch interactions on ad hoc surfaces. The vision is to transform everyday, un-instrumented surfaces into expansive touch screens, enhancing AR's interactive domain. Existing systems, however, grapple with issues of touch detection accuracy and latency. By harnessing dynamic fingertip information, we introduce a machine learning approach that improves touch accuracy and anticipates user interactions, significantly reducing response time.

We also analyze the possibility of transforming real-world objects into passive controllers. Precise tracking of objects in 3D space, represented by six degrees of freedom (DOF), amplifies AR's spatial control potential. Yet current methods either require extraneous tracking markers, imposing undue burdens on users, or are too resource-intensive for real-time execution on devices like head-mounted displays. Addressing this, we present TangiAR, a system that identifies and tracks everyday objects in AR without auxiliary trackers. Its efficacy is evidenced through comprehensive tests assessing its robustness under various challenges, such as occlusions. Moreover, we showcase its practical application, operating seamlessly on a standard HoloLens 2, underlining its immediate integration potential.

By bridging these tangible interaction gaps in AR, this thesis not only contributes technical advancements but also underscores broader implications for AR's future, emphasizing a more intuitive, responsive, and enriched user experience.
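The idea of anticipating a touch before contact can be illustrated with a minimal sketch. This is a hypothetical constant-velocity extrapolation, not the machine learning model described in the thesis: given recent fingertip heights above the surface, it predicts the time until touchdown.

```python
# Hypothetical sketch of touch anticipation from fingertip kinematics.
# Not the thesis implementation: a constant-velocity extrapolation that
# predicts when a descending fingertip will reach the surface.

def predict_time_to_contact(heights, dt):
    """heights: recent fingertip heights above the surface (metres),
    sampled every dt seconds. Returns predicted seconds until contact,
    or None if the fingertip is not descending."""
    if len(heights) < 2:
        return None
    velocity = (heights[-1] - heights[-2]) / dt  # m/s; negative = descending
    if velocity >= 0:
        return None
    return -heights[-1] / velocity  # time until height reaches zero

# Example: fingertip 10 mm up, descending 5 mm per 10 ms frame -> ~20 ms out
eta = predict_time_to_contact([0.020, 0.015, 0.010], dt=0.01)
```

A real system would feed richer dynamic fingertip features into a learned model, but even this toy version shows how anticipating contact lets the system begin responding before the finger lands, hiding sensing latency.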


How subtle design in video games impacts player experience: qualitative studies regarding two video game design features (2022)

Video games can grant engaging, interactive experiences to their players, evoking a wide range of emotional responses, from joy to sorrow, from elation to thoughtfulness. This experiential affordance of video games, coupled with their meteoric rise in popularity, has made it increasingly important to understand the ways in which video games can shape player emotions and feelings. In this thesis, we consider how games affect player experience by investigating the effects of two prevalent design features, performing qualitative studies to understand their basis for inclusion and their consequential effects. Ultimately, we highlight how such features can be used to mould the game towards specific experiences.

First, we consider Random Reward Mechanisms (RRMs): video game reward systems that provide rewards probabilistically upon trigger conditions, such as completing gameplay tasks or exceeding a playtime quota. We investigate the relationship between RRM implementations and user experience by performing a video observation of existing RRM systems and conducting interviews with players. Our methods reveal insights into how factors such as the affordances of non-optimal rewards and the trade-off between luck and skill impact player perception of and interaction with RRMs. We additionally investigate the relationship between audiovisual design decisions and player expectations for reward presentations. Finally, we apply our findings to propose design methodologies for creating engaging RRM systems.

Second, we consider the use of choice in narrative-rich video games. Such games often provide players with opportunities to make choices at key points, generating malleability within the game world and its characters. We explore the types of choices that exist, how choices affect player experience, and how players make decisions when presented with choice. We first conduct interviews with game developers and perform a video observation of existing choices to develop an initial classification system. We then perform a series of interviews with players to understand how different choices impact the overall experience. Our findings reveal that choices influence player experience at several levels of meta-gameplay, affecting the game itself, the player-game relationship, and the player outside the game. Finally, we discuss the potential of choice in developing impactful virtual experiences.


Designing familiar augmented and virtual reality environments and interactions through off-the-shelf real-world solutions (2021)

Immersive augmented reality systems allow users to superimpose digital content on their physical environments, while immersive virtual reality systems simulate virtual spaces that draw on users' knowledge of the physical world. In such novel interactive systems, however, user unfamiliarity can break the illusion of seamless immersion and hinder adoption. We propose to bridge this gap by introducing familiar interactions and environments into immersive augmented and virtual reality.

First, to explore familiar interactions, we propose using smartphones, which are familiar and readily available, as companion devices complementing existing interaction modalities for augmented reality head-mounted displays. Current-generation devices primarily support gesture, gaze, and voice, which are less familiar as interaction modalities and lack precision and tactility, rendering them fatiguing for extended interactions. We leverage user familiarity with smartphone interactions, coupled with their support for precise, tactile touch input, to unlock a broad range of interaction techniques. For instance, by combining the phone's spatial position and orientation with its high-resolution touchscreen within an augmented reality environment, we transform the ordinary smartphone into an interior design palette, a rapier, or a bow. We describe a prototype implementation and report the results of a positional accuracy study.

Second, we believe that bringing familiar real-world environments into virtual spaces can help create immersive and interactive virtual reality experiences. To explore this, we chose the real-world library as an expedient pilot and introduce the Library of Apollo, a virtual reality experience that aims to bring together the connectedness and navigability of digital collections with the familiar browsing experience of physical libraries. An online deployment of our system, together with user surveys and collected session data, showed a strong preference for our system for book discovery, with 41 of 45 users saying they would recommend it to a bookish friend.
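The basic building block behind such phone-as-controller techniques is converting the phone's tracked 6-DOF pose into a world-space pointing ray. The sketch below is a hypothetical illustration (not the thesis's prototype), assuming the pose arrives as a position and a unit quaternion and that the phone points along its local -Z axis, a common graphics convention.

```python
# Hypothetical sketch: turn a tracked phone's 6-DOF pose into a world-space
# pointing ray, the basic building block for phone-as-controller techniques.
# Assumes the phone "points" along its local -Z axis (a common convention).

import numpy as np

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    u = np.array([x, y, z])
    # Rodrigues-style identity: v' = v + 2 u x (u x v + w v)
    return v + 2.0 * np.cross(u, np.cross(u, v) + w * v)

def pointing_ray(position, orientation):
    """Return (origin, unit direction) of the ray the phone points along."""
    forward = quat_rotate(orientation, np.array([0.0, 0.0, -1.0]))
    return np.asarray(position, dtype=float), forward / np.linalg.norm(forward)

# Identity orientation: the phone points straight down the world -Z axis.
origin, direction = pointing_ray([0.0, 1.2, 0.0], (1.0, 0.0, 0.0, 0.0))
```

Touchscreen input can then be layered on top of this ray, e.g. a swipe to draw a bow or a tap to commit a placement in an interior design tool.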


Learned acoustic reconstruction using synthetic aperture focusing (2021)

Navigating and sensing the world through echolocation in air is an innate ability of many animals, yet analogous human technologies remain rudimentary. Many engineered approaches to acoustic reconstruction have been devised, but they typically require unwieldy equipment and a lengthy measurement process, and are largely not applicable in air or in everyday human environments. Recent learning-based approaches to single-emission in-air acoustic reconstruction use simplified hardware and an experimentally acquired dataset of echoes, paired with the geometry that produced them, to train models that predict novel geometry from similar but previously unheard echoes. However, these learned approaches use spatially dense representations and attempt to predict an entire scene all at once. Doing so requires a tremendous abundance of training examples in order to learn a model that generalizes, leaving these techniques vulnerable to over-fitting.

We introduce an implicit representation for learned in-air acoustic reconstruction inspired by synthetic aperture focusing techniques. Our method trains a neural network to relate the coherency of multiple spatially separated echo signals, after accounting for the expected time-of-flight along a straight-line path, to the presence or absence of an acoustically reflective object at any sampling location. Additionally, we use signed distance fields to represent geometric predictions, which provide a better-behaved training signal and allow for efficient 3D rendering. Using acoustic wave simulation, we show that our method yields better generalization and behaves more intuitively than competing methods while requiring only a small fraction of the training data.
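The time-of-flight coherency idea resembles classical delay-and-sum synthetic aperture focusing, which the learned method builds on. The sketch below is that classical baseline, not the thesis's neural model: for a candidate point, compute the round-trip path to each receiver, sample each echo at the corresponding delay, and sum; a reflector at that point produces a coherent (large) sum, empty space does not.

```python
# Illustrative delay-and-sum synthetic aperture focusing (the classical
# baseline that inspires the learned method; not the thesis's neural model).
# Energy focused at a candidate point is the coherent sum of each receiver's
# signal sampled at the expected round-trip time-of-flight.

import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def focus(echoes, emitter, receivers, point, sample_rate):
    """echoes: (n_receivers, n_samples) recorded signals.
    Returns the coherent sum at `point` (large if a reflector is there)."""
    total = 0.0
    for echo, rx in zip(echoes, receivers):
        # Straight-line path: emitter -> point -> receiver
        path = np.linalg.norm(point - emitter) + np.linalg.norm(point - rx)
        delay = int(round(path / SPEED_OF_SOUND * sample_rate))
        if delay < len(echo):
            total += echo[delay]
    return total

# Toy scene: one reflector 1 m away; each echo is an impulse at its delay.
fs = 48000
emitter = np.zeros(3)
receivers = [np.array([x, 0.0, 0.0]) for x in (0.0, 0.05, 0.10)]
target = np.array([0.0, 0.0, 1.0])
echoes = np.zeros((3, 1024))
for i, rx in enumerate(receivers):
    d = np.linalg.norm(target - emitter) + np.linalg.norm(target - rx)
    echoes[i, int(round(d / SPEED_OF_SOUND * fs))] = 1.0
```

Here `focus` is large at the reflector's location and near zero elsewhere. The thesis replaces the naive sum with a network that learns how echo coherency relates to object presence, and predicts a signed distance rather than a raw energy value.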


Rapid mold prototyping: creating complex 3D castables from 2D cuts (2020)

Designers, makers, and artists prototype physical products by iteratively ideating, modeling, and realizing them in a fast, exploratory manner. A popular method of bringing 3D designs to life is casting: pouring a material into a mold such that, once the material sets, the target object is created. Currently, the process of turning a digital design into a tangible product can be difficult. One reason is that building the mold, for example by 3D printing it, can take hours, slowing down the prototyping process. This is particularly true when prototyping molds for casting interactive (sensate and actuated) or geometrically complex (curvy) objects.

To this end, we developed two mold-making techniques intended to address different, complementary needs in rapid prototyping. The first, Silicone I/O, is a making method based on Computer Numerical Control (CNC) that enables the molding of sensate, actuated silicone devices. This method uses stacked laser-cut slices of wood bound together with molten wax to create cheap, accessible, one-time-use molds that are quick and easy to assemble. Silicone I/O devices are pneumatically actuated using air channels created through lost-wax casting, and made sensate by mixing carbon fibre with silicone. The second technique, FoldMold, allows curvy molds to be rapidly built out of paper and wax. This approach is based on "unfolding" a 3D object, cutting the 2D layout, and using papercraft techniques to reassemble the mold.

Beyond the physical challenges of rapid mold-making, digitally designing mold patterns from 3D objects poses a bottleneck in the design process. We contribute the FoldMold Blender Add-on, a computational tool that turns 3D positives into CNC-ready papercraft mold patterns.

This thesis thus contributes two complementary approaches to increasing speed in mold prototyping. The first is creating flat, laser-cuttable mold patterns, significantly speeding up the actual mold creation process. The second is automating mold design, off-loading much of the tedious design work to software that helps a maker design a mold very quickly.


Current Students & Alumni

This is a small sample of students and/or alumni that have been supervised by this researcher. It is not meant as a comprehensive list.
 

Membership Status

Member of G+PS

