Graduate Student Supervision
Master's Student Supervision
Theses completed in 2010 or later are listed below. Please note that there is a 6-12 month delay to add the latest theses.
Video games have the ability to grant engaging, interactive experiences to their players, evoking a wide range of emotional responses: from joy to sorrow, from elation to thoughtfulness. This experiential affordance of video games, coupled with their meteoric rise in popularity, has made it increasingly important to understand the ways in which video games can shape player emotions and feelings. In this thesis, we consider how games affect player experience by investigating the effects of two prevalent design features, performing qualitative studies to understand their basis for inclusion and their consequential effects. Ultimately, we highlight how such features can be used to mould the game towards specific experiences.

First, we consider Random Reward Mechanisms (RRMs): video game reward systems that provide rewards probabilistically upon trigger conditions, such as completing gameplay tasks or exceeding a playtime quota. We investigate the relationship between RRM implementations and user experience by performing a video observation of existing RRM systems and conducting interviews with players. Our methods reveal insights into how factors such as the affordances of non-optimal rewards and the trade-off between luck and skill shape how players perceive and interact with RRMs. We additionally investigate the relationship between audiovisual design decisions and player expectations for reward presentations. Finally, we apply our findings to propose design methodologies for creating engaging RRM systems.

Second, we consider the use of choice in narrative-rich video games. Such games often provide players with opportunities to make choices at key points, generating malleability within the game world and its characters. We explore the types of choices that exist, how choices affect player experience, and how players make decisions when presented with choice.
We first conduct interviews with game developers and perform a video observation of existing choices to develop an initial classification system. We then perform a series of interviews with players to understand how different choices impact the overall experience. Our findings reveal that choices influence player experience at several levels of meta-gameplay, with impacts on the game itself, the player-game relationship, and the player outside the game. Finally, we discuss the potential of choice in developing impactful virtual experiences.
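The core of the Random Reward Mechanism concept above is simple to illustrate: a trigger condition (such as completing a gameplay task) fires a probabilistic draw over a reward table. The table, drop rates, and function names below are illustrative placeholders, not examples taken from the thesis:

```python
import random

# A minimal sketch of a Random Reward Mechanism (RRM): a trigger condition
# (e.g. completing a gameplay task) grants a reward drawn probabilistically.
# The reward table and drop rates here are invented for illustration.
REWARD_TABLE = [
    ("common trinket", 0.70),
    ("rare weapon",    0.25),
    ("legendary skin", 0.05),
]

def roll_reward(rng=random):
    """Return one reward, chosen according to its drop probability."""
    r = rng.random()
    cumulative = 0.0
    for reward, probability in REWARD_TABLE:
        cumulative += probability
        if r < cumulative:
            return reward
    return REWARD_TABLE[-1][0]  # guard against floating-point rounding

def on_task_completed(rng=random):
    # Trigger condition: a gameplay task finished -> one probabilistic draw
    return roll_reward(rng)
```

The luck-versus-skill trade-off the abstract mentions corresponds to how much the trigger condition (skill) versus the draw (luck) determines what the player receives.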
Immersive augmented reality systems allow users to superimpose digital content on their physical environments, while immersive virtual reality systems simulate virtual spaces that make use of users' knowledge of the physical world. However, in such novel interactive systems, user unfamiliarity can break the illusion of seamless immersion and hinder adoption. We propose to bridge this gap by introducing familiar interactions and environments into immersive augmented and virtual reality.

First, to explore familiar interactions, we propose using smartphones, which are familiar and readily available, as companion devices complementing existing interaction modalities for augmented reality head-mounted displays. Current-generation devices primarily support modalities such as gesture, gaze, and voice, which are less familiar and lack precision and tactility, rendering them fatiguing for extended interactions. We leverage user familiarity with smartphone interactions, coupled with smartphones' support for precise, tactile touch input, to unlock a broad range of interaction techniques. For instance, by combining the phone's spatial position and orientation with its high-resolution touchscreen within an augmented reality environment, we transform the ordinary smartphone into an interior design palette, a rapier, or a bow. We describe a prototype implementation and report on the results of a positional accuracy study.

Second, we believe that bringing familiar real-world environments into virtual spaces can help create immersive and interactive virtual reality experiences. To explore this, we chose the real-world library as an expedient pilot and introduce the Library of Apollo, a virtual-reality experience that aims to bring together the connectedness and navigability of digital collections and the familiar browsing experience of physical libraries.
An online deployment of our system, together with user surveys and collected session data, showed that users strongly preferred our system for book discovery, with 41 of 45 users saying they would be positively inclined to recommend it to a bookish friend.
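Combining a tracked phone's pose with its touchscreen, as described in the abstract above, amounts to mapping a 2D touch point through the phone's world-space pose into a 3D interaction ray. The screen dimensions, coordinate conventions, and function below are assumptions for illustration, not the thesis's implementation:

```python
import numpy as np

# A sketch of turning a tracked smartphone's pose plus a 2D touch point into
# a world-space pointing ray, in the spirit of the companion-device
# techniques above. Screen size and conventions are illustrative assumptions.

SCREEN_W_M = 0.07   # assumed physical screen width in metres
SCREEN_H_M = 0.15   # assumed physical screen height in metres

def touch_to_ray(phone_position, phone_rotation, touch_uv):
    """Map a normalized touch point (u, v in [0, 1]) to a world-space ray.

    phone_position: (3,) world-space position of the screen centre
    phone_rotation: (3, 3) rotation matrix whose columns are the phone's
                    local right, up, and screen-normal axes in world space
    """
    right, up, normal = phone_rotation[:, 0], phone_rotation[:, 1], phone_rotation[:, 2]
    u, v = touch_uv
    # Offset from the screen centre to the touched point, in world space
    offset = (u - 0.5) * SCREEN_W_M * right + (v - 0.5) * SCREEN_H_M * up
    origin = phone_position + offset
    # Cast perpendicular to the screen, out of its front face
    direction = normal / np.linalg.norm(normal)
    return origin, direction
```

A tool metaphor (palette, rapier, bow) can then be built by interpreting the resulting ray and touch gestures differently per tool.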
Navigating and sensing the world through echolocation in air is an innate ability in many animals, yet analogous human technologies remain rudimentary. Many engineered approaches to acoustic reconstruction have been devised, but they typically require unwieldy equipment and a lengthy measurement process and are largely not applicable in air or in everyday human environments. Recent learning-based approaches to single-emission in-air acoustic reconstruction use simplified hardware and an experimentally acquired dataset of echoes and the geometry that produced them to train models to predict novel geometry from similar but previously unheard echoes. However, these learned approaches use spatially dense representations and attempt to predict an entire scene all at once. Doing so requires a tremendous abundance of training examples in order to learn a model that generalizes, which leaves these techniques vulnerable to over-fitting.

We introduce an implicit representation for learned in-air acoustic reconstruction inspired by synthetic aperture focusing techniques. Our method trains a neural network to relate the coherency of multiple spatially separated echo signals, after accounting for the expected time of flight along a straight-line path, to the presence or absence of an acoustically reflective object at any sampling location. Additionally, we use signed distance fields to represent geometric predictions, which provide a better-behaved training signal and allow for efficient 3D rendering. Using acoustic wave simulation, we show that our method yields better generalization and behaves more intuitively than competing methods while requiring only a small fraction of the amount of training data.
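The classical delay-and-sum focusing idea behind the synthetic aperture inspiration can be sketched as follows: for a candidate 3D point, compute the straight-line round-trip time of flight from the emitter to the point and back to each receiver, sample every echo at that delay, and measure how coherent the gathered samples are. The hardware layout, sample rate, and coherency measure below are illustrative assumptions; the thesis's contribution is to replace the fixed coherency relation with a learned network:

```python
import numpy as np

# A sketch of delay-and-sum focusing over spatially separated echoes.
# Geometry, sample rate, and signals here are illustrative placeholders.

SPEED_OF_SOUND = 343.0  # m/s in air
SAMPLE_RATE = 48_000    # Hz

def time_of_flight(emitter, receiver, point):
    """Round-trip travel time emitter -> point -> receiver (seconds)."""
    out = np.linalg.norm(point - emitter)
    back = np.linalg.norm(receiver - point)
    return (out + back) / SPEED_OF_SOUND

def gather_samples(echoes, emitter, receivers, point):
    """Sample each receiver's echo at the expected round-trip delay."""
    samples = []
    for echo, receiver in zip(echoes, receivers):
        t = time_of_flight(emitter, receiver, point)
        idx = int(round(t * SAMPLE_RATE))
        samples.append(echo[idx] if idx < len(echo) else 0.0)
    return np.array(samples)

def coherency(samples):
    """Coherent-to-total energy ratio: near 1 when samples align."""
    total = np.sum(samples ** 2)
    if total == 0.0:
        return 0.0
    coherent = np.sum(samples) ** 2 / len(samples)
    return coherent / total
```

A reflective object at the candidate point produces aligned samples (high coherency); an empty point produces misaligned or zero samples (low coherency).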
Designers, makers, and artists prototype physical products by iteratively ideating, modeling, and realizing them in a fast, exploratory manner. A popular method of bringing 3D designs to life is casting: pouring a material into a mold so that, once the material sets, the target object is created. Currently, the process of turning a digital design into a tangible product can be difficult. One reason for this is that building the mold, for example by 3D printing it, can take hours, slowing down the prototyping process. This is particularly true when prototyping molds for casting interactive (sensate and actuated) or geometrically complex (curvy) objects.

To this end, we developed two mold-making techniques intended to address different, complementary needs for rapid prototyping. The first technique we introduce is Silicone I/O, a making method based on Computer Numerical Control (CNC) that enables the molding of sensate, actuated silicone devices. This method uses stacked laser-cut slices of wood bound together with molten wax to create cheap, accessible, one-time-use molds that are quick and easy to assemble. Silicone I/O devices are pneumatically actuated using air channels created through lost-wax casting, and made sensate by mixing carbon fibre with silicone. The second technique we describe is FoldMold, which allows curvy molds to be rapidly built out of paper and wax. This approach is based on “unfolding” a 3D object, cutting the 2D layout, and using papercraft techniques to reassemble the mold.

Beyond the physical challenges of rapid mold-making, digitally designing mold patterns from 3D objects poses a bottleneck in the design process. We contribute the FoldMold Blender Add-on, a computational tool that turns 3D positives into CNC-ready papercraft mold patterns. This thesis thus contributes two broad approaches to increasing speed in mold prototyping.
The first is creating flat, laser-cuttable mold patterns, which significantly speeds up mold fabrication. The second is automating mold design, off-loading much of the tedious design work to software that helps a maker design a mold very quickly.