S Sidney Fels
Relevant Degree Programs
Graduate Student Supervision
Doctoral Student Supervision (2008-2018)
The thesis presents novel computational methods for the simulation of oropharyngeal swallowing. The anatomy and motion of the human upper airway were extracted from dynamic Computed Tomography (CT) data using a novel tool and workflow. A state-of-the-art Smoothed Particle Hydrodynamics (SPH) method is extended to accommodate non-Newtonian materials in the extracted geometries. A preliminary numerical experiment of six human oropharyngeal swallows using SPH demonstrates that the methods are robust and useful for simulation of oropharyngeal swallowing.

The presence of saliva is well known to be important for mastication, swallowing, and overall oral health. However, clinical studies of patients with hyposalivation are unable to isolate the effect of saliva from other confounding factors. The simulation presented in this thesis examines fluid boluses under lubricated and non-lubricated boundary conditions. Upon comparison with medical image data, the experiments suggest that saliva does not provide a significant lubricative effect on bolus transit times, but it may serve to reduce residue and therefore improve overall swallowing efficacy. Our findings, while preliminary, corroborate existing clinical research finding that groups with hyposalivation do not have significantly different transit times from control groups, but that residue may be increased in the hyposalivation group.

Previous studies using computer simulation of fluid flow in the oropharynx typically make use of simplified geometries. Our work uses dynamic 320-row Area Detector Computed Tomography (ADCT) images as the basis for the simulations, and therefore does not require simplifying geometric assumptions. Since the data are dynamic, motion trajectories are all supplied by the ADCT data, and extrapolation from 2D sources such as bi-plane videofluoroscopy is not required. Processing the image data required the development of a novel workflow based on a new tool, which we call BlendSeg. We utilize and extend Unified Semi-Analytic Wall (USAW) SPH methods so that oropharyngeal swallowing simulations may be performed. These extensions include the simulation of non-Newtonian boluses and moving 3D boundaries. Partial validation of the extended USAW SPH method is performed using canonical flows.
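The non-Newtonian extension mentioned above can be illustrated with a minimal sketch. The function below computes a power-law (generalized Newtonian) effective viscosity from the local shear rate, one common way to model a shear-thinning bolus in an SPH solver; the function name and parameter values are illustrative assumptions, not taken from the thesis.

```python
# Hypothetical sketch: power-law (generalized Newtonian) effective viscosity,
# one ingredient of a non-Newtonian SPH bolus model. Names and parameter
# values are illustrative, not taken from the thesis.

def effective_viscosity(shear_rate, k=1.0, n=0.6, mu_min=1e-3, mu_max=10.0):
    """Power-law fluid: mu_eff = k * shear_rate**(n - 1), clamped for stability.
    An exponent n < 1 gives the shear-thinning behaviour typical of
    thickened boluses."""
    if shear_rate <= 0.0:
        return mu_max  # avoid the singularity at zero shear
    mu = k * shear_rate ** (n - 1.0)
    return max(mu_min, min(mu, mu_max))
```

In an SPH step, each particle's viscosity would be re-evaluated from its locally estimated shear rate before computing viscous forces.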
The oropharynx is involved in a number of complex neurological functions, such as chewing, swallowing, and speech. Disorders associated with these functions, if not treated properly, can dramatically reduce the sufferer's quality of life. When tailored to individual patients, biomechanical models can augment imaging data to enable computer-assisted diagnosis and treatment planning. The present dissertation develops a framework for 3D, subject-specific biomechanical modeling and simulation of the oropharynx. The underlying data consist of magnetic resonance (MR) images, as well as audio signals, recorded while healthy speakers repeated specific phonetic utterances in time with a metronome. Based on these data, we perform simulations that demonstrate motor control commonalities and variations of the /s/ sound across speakers, in front and back vowel contexts. Results compare well with theories of speech motor control in predicting the primary muscles responsible for tongue protrusion/retraction, jaw advancement, and hyoid positioning, and in suggesting independent activation units along the genioglossus muscle.

We augment the simulations with real-time acoustic synthesis to generate sound. Spectral analysis of the resultant sounds vis-à-vis the recorded audio signals reveals discrepancies in the formant frequencies of the two. Experiments using 1D and 3D acoustical models demonstrate that these discrepancies arise from the low resolution of the MR images, generic parameter tuning in the acoustical models, and ambiguity in the 1D vocal tract representation. Our models prove beneficial for vowel synthesis based on biomechanics derived from image data.

Our modeling approach is designed for time-efficient creation of subject-specific models. We develop methods that streamline the delineation of articulators from MR images and significantly reduce expert interaction time (≈ 5 minutes per image volume for the tongue). Our approach also exploits muscular and joint information embedded in state-of-the-art generic models, while providing consistent mesh quality and affordances for adjusting mesh resolution and muscle definitions.
In contemporary western cities, socialization often occurs in locations with a mix of public and private characteristics. Oldenburg defined these settings as “Third Places” because they provide a space of conviviality in between the privacy of home and the rigidity of work. Coffee shops and pubs are some of the prototypical Third Places, providing the welcoming and neutral atmosphere for conversation that is essential to community development. Consumer computing and telecommunications have impacted how we socialize with each other and use Third Places. This raises the question of how technology can support Third Places, and whether technology has a role at all in these settings.

We propose an alternative paradigm called “Third-placeness”, defined as a state of socialization of which a Third Place is a physical embodiment. Third-placeness arises when information is uncensored, which minimizes inequalities and differences, and is characterized by low barriers to information access, regularity, lightheartedness, and comfort. We identify aspects of Third-placeness and study how a particular type of technology, interactive public displays, could affect these aspects. Through our observations and lessons learned, we identify social, public, and physical characteristics of interactive public displays that could support aspects of Third-placeness. Our research contributes a framework, the Sociality, Publicity and Physicality Framework, that organizes the aspects and requirements of designing interactive public displays for Third-placeness. It also describes a way to communicate about these designs and a way such designs can be approached.
Watching and creating videos have become predominant parts of our daily lives. Video is becoming the norm for a wide range of purposes, from entertainment to training, education, marketing, and communication. Users go beyond just watching videos: they want to experience and interact with content across the different types of videos. As they do so, significant digital traces accumulate on the viewed videos, which provide an important source of information for designing and developing tools for video viewing interfaces.

This dissertation proposes the next generation of video management interfaces, which create video experiences that go beyond just pushing the play button. It uses how people view and interact with contemporary video to design strategies for future video interfaces. This has allowed the development of new tools for navigating and managing videos that can be easily integrated into existing systems.

To help define design guidelines for the video interface, a behavioural analysis of users' video viewing actions (n = 19) was performed. The results demonstrate that participants actively watch videos; most participants tend to skip parts of videos and re-watch specific portions of a video multiple times. Based on these findings, new fast navigation and management strategies are developed and validated in search tasks using a single-video history (n = 12), a video viewing summary (n = 10), and a multiple-video history (n = 10). Evaluation of the proposed tools shows significant performance improvements over state-of-the-practice methods, which indicates the value of users' video viewing actions.

Navigating other forms of video, such as interactive videos, introduces another issue: the selection of interactive objects within videos to direct users to different portions of the video. Due to the time-based nature of video, these interactive objects are only visible for a certain duration, which makes their activation difficult. To alleviate this problem, a novel acquisition technique (Hold) is created, which temporarily pauses the objects while the user interacts with the target. This technique has been integrated into a rich media interface (MediaDiver), which makes such interaction possible for users.
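The Hold technique described above can be sketched as a small state machine: while the pointer is held down, moving targets stop advancing so they can be acquired. The class and method names below are hypothetical, not MediaDiver's actual API.

```python
# Illustrative sketch of a "Hold"-style acquisition: while the user holds
# the pointer down, moving targets stop advancing so they can be selected.
# Class and method names are hypothetical.

class MovingTarget:
    def __init__(self, x, vx):
        self.x, self.vx = x, vx  # 1D position and velocity

class HoldScene:
    def __init__(self, targets):
        self.targets = targets
        self.held = False

    def pointer_down(self):
        self.held = True   # freeze motion while the user acquires a target

    def pointer_up(self):
        self.held = False  # resume normal playback

    def step(self, dt):
        if not self.held:
            for t in self.targets:
                t.x += t.vx * dt
```

A selection hit-test would then run against the frozen positions, making small or fast-moving interactive objects much easier to activate.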
Obstructive Sleep Apnea (OSA) is a syndrome in which the human Upper Airway (UA) collapses during sleep, leading to frequent sleep disruption and inadequate air supply to the lungs. OSA involves Fluid-Structure Interaction (FSI) between a complex airflow regime and the intricate mechanics of soft and hard tissue, causing large deformation of the complicated UA geometry. Numerical simulations provide a means for understanding this complex system; therefore, we develop a validated FSI simulation, composed of a 1D fluid model coupled with a 3D FEM solid solver (ArtiSynth), that is applied to a parameterized airway model, providing a fast and versatile system for researching FSI in the UA.

The 1D fluid model implements the limited pressure recovery model of Cancelli and Pedley using a dynamic pressure recovery term, area function corrections allowing complete closure and reopening of fluid geometries, and discretization schemes providing robust behavior in highly uneven geometries. The fluid model is validated against 3D fluid simulations in static geometries and simple dynamic geometries, and proves reliable for predicting bulk flow pressure. Validation of the simulation methods in ArtiSynth is demonstrated by simulating the buckling, complete collapse, and reopening of elastic tubes under static pressure, which compare well with experimental results.

The FSI simulation is validated against experiments performed for a collapsible channel (a "2D" Starling resistor) designed to have geometry and characteristics similar to the UA. The observed FSI behaviors are described and compared for both experiment and simulation, providing a quantitative validation of the FSI simulation. The simulations and experiments agree quite well, exhibiting the same major FSI behaviors, similar progression from one behavior to another, and a similar dynamic range.

A parameterized UA model is designed for fast and consistent creation of geometries.
Uniform pressure and dynamic flow FSI simulations are performed with this model for numerous parameters associated with OSA. Uniform pressure simulations compare well to clinical data. Dynamic flow results demonstrate airflow limitation and snoring oscillations. The simulations are fast, simulating 1 s of FSI in 30 minutes. This model is a powerful tool for understanding the complex mechanics of OSA.
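The kind of closure a 1D collapsible-airway model relies on can be illustrated with a minimal sketch. The "tube law" below relates transmural pressure to cross-sectional area in a generic textbook form; it is not the thesis's exact formulation (which builds on the Cancelli and Pedley limited pressure recovery model), and the exponents and stiffness value are illustrative.

```python
# Illustrative "tube law" relating transmural pressure to cross-sectional
# area in a collapsible tube, a common closure for 1D airway flow models.
# Generic textbook form; not the thesis's exact formulation.

def transmural_pressure(area, area0, stiffness=1.0, m=10.0, n=1.5):
    """p - p_ext = K * ((A/A0)**m - (A/A0)**(-n)).
    Positive (distending) above the rest area A0, and strongly negative
    as the tube collapses (A -> 0)."""
    ratio = area / area0
    return stiffness * (ratio ** m - ratio ** (-n))
```

Coupled with 1D mass and momentum balances, a relation of this shape is what lets a reduced-order model reproduce collapse, flow limitation, and reopening at a fraction of the cost of a 3D solver.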
Eating nutritious foods and being more physically active prevents significant illnesses such as cardiac disease, stroke, and diabetes. However, leading a healthy lifestyle remains elusive, and obesity continues to increase in North America. We investigate how online social networks (OSNs) can change health behaviour by blending theories of health behaviour and of participation in OSNs, which allows us to design and evaluate an OSN through a user-centred design (UCD) process.

We begin this research by reviewing existing theoretical models to obtain the determining factors for participation in OSNs and for changing personal health behaviour. Through this review, we develop a conceptual framework, the Appeal Belonging Commitment (ABC) Framework, which provides individual determinants (Appeal), social determinants (Belonging), and temporal considerations (Commitment) for participation in OSNs for health behaviour change.

The ABC Framework is used in a UCD process to develop an OSN called VivoSpace. The framework is then utilized to evaluate each design to determine whether VivoSpace is able to change the determinants for health behaviour change. The UCD process begins with an initial user inquiry using questionnaires to validate the determinants from the framework (n=104). These results are used to develop a paper prototype of VivoSpace, which is evaluated through interviews (n=11). These results in turn inform the design of a medium-fidelity prototype of VivoSpace, which is tested in a laboratory through both direct and indirect methods (n=36). The final iteration of VivoSpace is a high-fidelity prototype, which is evaluated in a field experiment with clinical and non-clinical participants from Canada and the USA (n=32). The results reveal positive changes for the participants associated with a clinic in self-efficacy for eating healthy food and leading an active lifestyle, in attitudes towards healthy behaviour, and in the stages of change for health behaviour.
These results are further validated by evaluating changes in health behaviour, which reveal a positive change for the clinical group in physical activity and an increase in patient activation. The evaluation of the high-fidelity prototype allows for a final iteration of the ABC Framework and the development of design principles for an OSN for positive health behaviour change.
Biomechanical models provide a means to analyze movement and forces in highly complex anatomical systems. Models can be used to explain cause and effect in normal body function, as well as in abnormal cases where the underlying causes of dysfunction can be clarified. In addition, computer models can be used to simulate surgical changes to bone and muscle structure, allowing for prediction of functional and aesthetic outcomes. This dissertation proposes a state-of-the-art model of coupled jaw-tongue-hyoid biomechanics for simulating combined jaw and tongue motor tasks, such as chewing, swallowing, and speaking. Simulation results demonstrate that the mechanical coupling of tongue muscles acting on the jaw and jaw muscles acting on the tongue is significant and should be considered in orofacial modeling studies. Towards validation of the model, simulated tongue velocity and tongue-palate pressure are consistent with published measurements.

Inverse simulation methods are also discussed, along with the implementation of a technique to automatically compute muscle activations for tracking a target kinematic trajectory with coupled skeletal and soft-tissue models. Additional target parameters, such as dynamic constraint forces and stiffness, are included in the inverse formulation to control muscle activation predictions in redundant models. Simulation results for moving and deforming muscular-hydrostat models are consistent with published theoretical proposals. Also, muscle activations predicted for lateral jaw movement are consistent with the published literature on jaw physiology.

As an illustrative case study, models of segmental jaw surgery with and without reconstruction are developed. The models are used to simulate clinically observed functional deficits in movement and bite force production. The inverse simulation tools are used to predict muscle forces that could theoretically be used by a patient to compensate for functional deficits following jaw surgery.
The modeling tools developed and demonstrated in this dissertation provide a foundation for future studies of orofacial function and biomedical applications in oral and maxillofacial surgery and treatment.
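The trajectory-tracking inverse simulation described above can be sketched as a regularized nonnegative least-squares problem: given a matrix mapping muscle activations to generalized forces, find activations that reproduce a target force, with regularization resolving muscle redundancy. The projected-gradient solver and all names below are illustrative assumptions, not the dissertation's actual implementation.

```python
import numpy as np

# Hedged sketch of trajectory-tracking inverse dynamics: M maps muscle
# activations to generalized forces; we seek nonnegative activations in
# [0, 1] reproducing a target force. Tikhonov regularization (lam) picks
# one solution among the redundant possibilities. Projected gradient
# descent is used here purely for simplicity.

def solve_activations(M, f_target, lam=1e-2, steps=2000, lr=1e-2):
    n = M.shape[1]
    a = np.zeros(n)
    for _ in range(steps):
        grad = M.T @ (M @ a - f_target) + lam * a
        a = np.clip(a - lr * grad, 0.0, 1.0)  # activations live in [0, 1]
    return a
```

In a full simulator this solve would run at every time step, with M assembled from the current model state, so the predicted activations track the target kinematic trajectory.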
Vision researchers have created an incredible range of algorithms and systems to detect, track, recognize, and contextualize objects in a scene, using a myriad of internal models to represent their problems and solutions. However, in order to make effective use of these algorithms, sophisticated expert knowledge is required to understand and properly utilize the internal models involved. Researchers must understand the vision task and the conditions surrounding their problem, and select an appropriate algorithm which will solve the problem most effectively under these constraints.

Within this thesis we present a new taxonomy for the computer vision problem of image registration which organizes the field based on the conditions surrounding the problem. From this taxonomy we derive a model which can be used to describe both the conditions surrounding the problem and the range of acceptable solutions. We then use this model to create testbenches which can directly compare image registration algorithms under specific conditions. A direct evaluation of the problem space allows us to interpret models, automatically selecting appropriate algorithms based on how well they perform on similar problems. This selection of an algorithm based on the conditions of the problem mimics the expert knowledge of vision researchers without requiring any knowledge of image registration algorithms. Further, the model identifies the dimensions of the problem space, allowing us to automatically detect different conditions.

Extending beyond image registration, we propose a general framework of vision designed to make all vision tasks more accessible by providing a model of vision which allows for the description of what to do without requiring the specification of how the problem is solved.
The description of the vision problem itself is represented in such a way that even non-vision experts can understand it, making the algorithms much more accessible and usable outside of the vision research community.
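The condition-based algorithm selection idea can be sketched simply: describe each registration problem by a vector of conditions (for example, noise level and expected deformation), and choose the algorithm that performed best on the most similar benchmarked problem. The condition dimensions and algorithm names below are hypothetical.

```python
# Illustrative sketch of condition-based algorithm selection: each
# benchmarked registration problem is a (condition_vector, best_algorithm)
# pair, and a new problem is matched to its nearest neighbour. The
# condition dimensions and algorithm names are hypothetical.

def select_algorithm(problem, benchmarks):
    """benchmarks: list of (condition_vector, best_algorithm_name) pairs."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(benchmarks, key=lambda entry: dist(entry[0], problem))[1]
```

For example, a problem described as low-noise and nearly rigid would be routed to whichever algorithm dominated nearby benchmarked conditions, mimicking the expert's choice without the user knowing any registration algorithms.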
Interactive large displays offer exciting new opportunities for collaboration and work. Yet their size will fundamentally change how users expect to use and engage with computer applications: a likely reality is that such displays will be used by multiple users for multiple simultaneous tasks. These expectations demand a new approach to application design beyond the conventional desktop application model, where applications are single-user and intended to support a subset of user tasks. In this research, we develop such a framework based on the premise that large display applications should support transitions: users' desires to shift between multiple tasks and activities. We build this framework from models of how traditional large surfaces such as whiteboards are used to facilitate multiple tasks, often simultaneously. Based on studies of users' whiteboard use, we construct a classification scheme of users' activities with whiteboards and of the role of whiteboards in supporting the transitions between these activities. From a study of meeting room activity, we then develop a classification for collocated activity around traditional surfaces. We further develop models of how users' needs change during their use of large display applications, exploring two contexts: a digital tabletop application for focused collaboration, and a public large display. These studies reveal how users engage and disengage with one another during collaborative work, and the dynamic needs of bystanders. Next, we design and evaluate a prototype that supports transitions between tasks in a scheduling activity using viewing changes. The results demonstrate that users transition between related tasks during such activities, and that viewing changes can support these transitions. Finally, we describe a design space for supporting transitions in large display applications.
Taken together, the findings of this research illustrate the fundamental need for a new framework for designing large display applications. This work provides a step in this direction by providing rationale and empirical evidence for supporting transitions in such a framework. In so doing, it suggests that we realign designers' efforts away from the predominant desktop-centric model of application development and toward a model that engenders smooth transitions between multiple, related activities.
Master's Student Supervision (2010-2017)
The upper-airway complex is involved in a number of life-sustaining functions, such as swallowing, speech, breathing, and chewing. Disorders associated with these functions can dramatically reduce the quality of life of sufferers. Biomechanical modelling is a useful tool that can bridge the gap between human knowledge and medical data. When tailored to individual patients, biomechanical models can augment imaging data to enable computer-assisted diagnosis and treatment planning. This thesis introduces a model-registration framework for creating subject-specific models of the upper-airway complex based on 3D medical images.

Our framework adapts a state-of-the-art comprehensive biomechanical model of the head and neck, which represents generic upper-airway anatomy and function. By morphing this functional template to subject-specific data, we create upper-airway models for particular individuals. In order to preserve the functionality of the comprehensive model, we introduce a multi-structure registration technique which maintains the spatial relationships between the template components and preserves the regularity of the underlying mesh structures. The functional information, such as muscle attachment positions, joint positions, and biomechanical properties, is updated to stay relevant to the subject-specific model geometry. We demonstrate the functionality of our subject-specific models in biomechanical simulations.

Two illustrative case studies are presented. First, we apply our modelling methods to simulating the normal swallowing motion of a particular subject based on the kinematics (of the airway boundary, jaw, and hyoid) extracted from dynamic 3D CT images. The results suggest that our model tracks the oropharyngeal motion well, but has limited ability to reproduce the hyolaryngeal movements of normal swallowing. Second, we create two speaker-specific models based on 3D MR images, and perform personalized speech simulations of the utterance ageese. The models reproduce the speech motion of the tongue and jaw recorded in tagged and cine MRI data with sub-voxel tracking error, and predict the muscular coordination patterns of the speech motion.

This study demonstrates the feasibility of using template-based subject-specific modelling methods to facilitate personalized analysis of upper-airway functions. The proposed model-registration framework provides a foundation for developing a systematic and advanced subject-specific modelling platform.
We investigate the potential and the challenges of integrating eye gaze tracking support into the interface of ultrasound machines used for routine diagnostic scans by sonographers. In this thesis, we follow a user-centred approach, first conducting a field study to understand the context of the end user. As a starting point for a gaze-supported interface, we focus on the often-used zoom functions of ultrasound machines and present two gaze-supported alternatives, One-step Zoom (OZ) and Multi-step Zoom (MZ). A state-based analysis of the zoom functions in ultrasound machines is presented, followed by a state-based representation of the gaze-supported alternatives. The gaze-supported state representation extends the manual interaction by implicitly integrating gaze input into OZ and offering a gaze-supported alternative to moving the zoom box in MZ. Evaluations of the proposed interactions through a series of user studies (seventeen non-sonographers and ten sonographers) suggest an increased cognitive demand and time on task compared to the conventional manual interaction. However, participants also reported an increased focus on the main task when using the gaze-supported alternatives, which could benefit novice users, and lowered physical interaction, as the gaze input replaces some functions of the manual input.
Speech is unique to human beings as a means of communication, and many efforts have been made towards understanding and characterizing speech. In particular, articulatory speech synthesis is a critical field of study, as it works towards simulating the fundamental physical phenomena that underlie speech. Of the various components that constitute an articulatory speech synthesizer, vocal fold models play an important role as the source of the acoustic simulation. A balance between the simplicity and speed of lumped-element vocal fold models and the completeness and complexity of continuum models is required to achieve time-efficient, high-quality speech synthesis. In addition, most models of the vocal folds are studied in a vacuum, without any coupling to a vocal tract model. This thesis aims to fill these lacunae in the field through two major contributions.

First, we develop and implement a novel self-oscillating vocal fold model, composed of a 1D unsteady fluid model loosely coupled with a 2D finite-element structural model. The flow model is capable of handling irregular geometries, different boundary conditions, closure of the glottis, and unsteady flow states. A method for a fast decoupled solution of the flow equations that does not require the computation of the Jacobian matrix is provided. The simulation results are shown to agree with existing data in the literature, and give realistic glottal pressure-velocity distributions, glottal widths, and glottal flow values. In addition, the model is more than an order of magnitude faster than comparable 2D Navier-Stokes fluid solvers, while better capturing transitional flow than simple Bernoulli-based flow models.

Secondly, as an illustrative case study, we implement a complete articulatory speech synthesizer using our vocal fold model. This includes both lumped-element and continuum vocal fold models, a 2D finite-difference time-domain solver of the vocal tract, and a 1D tracheal model. A clear workflow is established to derive model components from experimental data or user-specified meshes, and to run fully coupled acoustic simulations. This leads to one of the few complete articulatory speech synthesizers in the literature, and a valuable tool for speech researchers to run time-efficient speech simulations and thoroughly study the acoustic outcomes of model formulations.
Smartphones today store large amounts of data that can be confidential, private, or sensitive. To protect such data, all mobile OSs provide a phone lock mechanism that requires user authentication before applications or data on the phone can be accessed, while also keeping data-at-rest encrypted with an encryption key dependent on the authentication secret. Recently, Apple introduced the Touch ID feature, which allows an iPhone to be unlocked with fingerprint-based authentication. The intuition behind this technology was that its usability would motivate users to adopt stronger passwords for locking their devices without substantially sacrificing usability. To date, however, it is not clear whether users take advantage of the Touch ID technology and whether they indeed employ stronger authentication secrets. The main objective and contribution of this work is to fill this knowledge gap. To answer this question we conducted three user studies: (a) an in-person survey with 90 subjects, (b) an interview study with 21 participants, and (c) an online survey with 374 subjects. Overall, we found that users do not take advantage of Touch ID and use weak authentication secrets, mainly PIN codes, similarly to users who do not have a Touch ID sensor on their devices. To our surprise, we found that more than 30% of subjects in each group did not know that they could use alphanumeric passwords instead of four-digit PIN codes. Others stated that they adopted PIN codes due to their better usability in comparison to passwords. Most of the subjects agreed that Touch ID does offer usability benefits such as convenience, speed, and ease of use. Finally, we found that there is a disconnect between the security users desire from their passcodes and the reality: only 12% of participants correctly estimated the security that PIN codes provide, while the rest had unjustified expectations.
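The misestimation the study points to can be made concrete with a simple keyspace count: a four-digit PIN admits only 10^4 guesses, while even a short alphanumeric passcode admits orders of magnitude more. The character-set sizes below are the standard ones, not figures from the study.

```python
# Keyspace comparison: a 4-digit PIN versus a short alphanumeric passcode.
# Character-set sizes are standard assumptions, not data from the study.

def keyspace(charset_size, length):
    """Number of possible secrets of the given length over the charset."""
    return charset_size ** length

pin_4 = keyspace(10, 4)    # four decimal digits: 10,000 possibilities
alnum_6 = keyspace(36, 6)  # six chars, lowercase letters + digits
```

Even this modest six-character passcode offers over 200,000 times the guessing space of a four-digit PIN, which is the kind of gap most surveyed users did not appreciate.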
We propose the use of a personal video navigation history, which records a user's viewing behaviour, as a basis for casual video editing and sharing. Our novel interaction supports users' navigation of previously-viewed intervals to construct new videos via simple playlists. The intervals in the history can be individually previewed and searched, filtered to identify frequently-viewed sections, and added to a playlist from which they can be refined and re-ordered to create new videos. Interval selection and playlist creation using a history-based interaction is compared to a more conventional filmstrip-based technique. We performed several user studies to evaluate the usability and performance of this method and found significant results indicating improvement in video interval search and selection.
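The history-based interaction described above can be sketched as a small data structure: each viewed interval is recorded, and frequently-viewed seconds can then be filtered out as playlist candidates. The class and method names are illustrative, not the system's actual implementation.

```python
from collections import Counter

# Hedged sketch of a personal viewing history: record each viewed interval
# and count views per second so frequently-viewed sections can be filtered.
# Names are illustrative, not the system's actual implementation.

class ViewingHistory:
    def __init__(self):
        self.intervals = []      # (start_s, end_s) pairs as viewed
        self.counts = Counter()  # number of views per whole second

    def record(self, start, end):
        self.intervals.append((start, end))
        for s in range(int(start), int(end)):
            self.counts[s] += 1

    def frequent_seconds(self, min_views=2):
        """Seconds viewed at least min_views times: playlist candidates."""
        return sorted(s for s, c in self.counts.items() if c >= min_views)
```

A playlist-building UI could then preview each stored interval and surface the frequently re-watched seconds first, matching the filtering behaviour described in the abstract.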
Video annotation is a process of describing or elaborating on objects or events represented in video. Part of this process involves time-consuming manual interactions to define spatio-temporal entities, such as a region of interest within the video.

This dissertation proposes a pursuit method for video annotation to quickly define a particular type of spatio-temporal entity known as a point-based path. A pursuit method is particularly suited to annotation contexts where a precise bounding region is not needed, such as when annotators draw attention to objects in consumer video.

We demonstrate the validity of the pursuit method with measurements of both accuracy and annotation time when annotators create point-based paths. Annotation tool designers can now choose a pursuit method for suitable annotation contexts.
In the process of working with a real-time, gesture-controlled speech and singing synthesizer used for musical performance, we documented performer-related issues and provide suggestions to improve future work in the field from an engineering and technician's perspective. One significant detrimental factor in the existing system is the sound quality, caused by the limitations of the one-to-one kinematic mapping between the gesture input and the output. To address this, a force-activated biomechanical mapping layer was implemented to drive an articulatory synthesizer, and the results were compared with the existing mapping system on the same task from both the performer and listener perspectives. The results show that adding the complex, dynamic biomechanical mapping layer introduces more difficulty but allows the performer a greater degree of expression, consistent with existing work in the literature. However, to the novice listener, there is no significant difference in the intelligibility of the sound or its perceived quality. The results suggest that, for browsing through a vowel space, force and position input are comparable when considering output intelligibility alone, but for expressivity a complex input may be more suitable.
The ability to swallow is crucial in maintaining adequate nutrition. However, there is a high prevalence of dysphagia among the elderly and a high associated mortality rate. To study the various causes of the associated physiological changes, one must first understand the biomechanics of normal swallowing. However, functional studies of the anatomically complex head and neck region can prove difficult for both technical and ethical reasons.

To overcome the limitations of clinical studies, this thesis proposes the use of a 3D computer model for performing dynamic simulations. A state-of-the-art model of the hyolaryngeal complex was created for simulating swallowing-related motor tasks, with a special focus on hyoid excursion, since reduced hyoid motion is a major indicator of oropharyngeal dysphagia. The model was constructed using anatomical data for a male cadaver from the Visible Human Project and an open-source dynamic simulation platform, ArtiSynth.

Hyoid motion data obtained from videofluoroscopy of subjects performing normal swallowing were applied to the model to inversely simulate the potential muscle activities of the extrinsic laryngeal muscles during hyoid excursion. Within a specific range, the model demonstrated the ability to reproduce realistic hyoid motion for swallowing. Selective usage of the suprahyoid muscles was also examined and was found to be capable of achieving adequate hyoid excursion for successful swallows.

Finally, this study investigated the relationship between muscle weakening and hyoid range of motion using the hyolaryngeal model. Loss of muscle strength is characteristic of the aging process. Simulation of the maximum hyoid displacement under various muscle conditions confirmed a nonlinear reduction in the hyoid motion range under a linear decline in muscle strength. With an assumed rate of muscle weakening, the proportion of hyoid range reduction was estimated for a person at various ages.
The results suggest that severe muscle weakening might be required to reduce hyoid excursion sufficiently to impair swallowing to a significant degree.
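One way to see how a linear decline in strength can yield a nonlinear loss of excursion is a simple force-balance argument. The sketch below is illustrative only and is not the thesis's biomechanical model: it assumes a hypothetical tissue resistance that stiffens quadratically with displacement, so excursion falls more slowly than strength at first.

```python
# Illustrative sketch (not the thesis model): if passive resistance grows
# nonlinearly with displacement, a linear decline in muscle strength
# produces a nonlinear reduction in maximum excursion.

import math

def max_excursion(strength, k=1.0):
    """Displacement where muscle force balances a hypothetical
    stiffening resistance F_resist = k * x**2."""
    return math.sqrt(strength / k)

full = max_excursion(1.0)
for frac in (1.0, 0.8, 0.6, 0.4, 0.2):
    x = max_excursion(frac)
    print(f"strength {frac:.0%} -> excursion {x / full:.0%} of normal")
```

Under this assumed resistance law, a drop to 50% strength still leaves roughly 71% of the excursion range, consistent with the abstract's conclusion that severe weakening is needed before hyoid motion is impaired significantly.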
With the rapid rise in popularity of online social networks (OSNs) in recent years, we have seen tremendous growth in the number of available OSNs. With newer OSNs attempting to draw users in by focusing on specific services or themes, it is becoming clearer that OSNs do not compete on the quality of their technology but rather on the number of active users. This leads to vendor lock-in, which creates problems for users managing multiple OSNs or wanting to switch OSNs. Third-party applications are often written to alleviate these problems but often find it difficult to deal with the differences between OSNs. These problems are made worse because, as we argue, a user will inevitably switch between many OSNs in his or her lifetime, OSNs being fashion-driven services whose lifespans depend on social trends. Thus, these applications often support only a limited number of OSNs. This thesis examines how it is possible to help developers write apps that run against multiple OSNs. It describes the need for, and presents, a novel set of abstractions for apps to use to interface with OSNs. These abstractions are highly expressive and future-proof, and they remove the need for an app to know which OSNs it is running against. Two evaluations were conducted to determine the strength of these abstractions: the first analyzed their expressiveness, while the second analyzed their feasibility. The contributions of this thesis are a first step toward better understanding how OSNs can be described at a high level.
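The kind of abstraction layer argued for here can be sketched as a uniform interface with per-OSN adapters behind it. All class and method names below are illustrative inventions, not the thesis's actual API; the point is only that the app-facing code never learns which network it is talking to.

```python
# Hedged sketch of an OSN abstraction layer: apps code against one
# interface, and per-network adapters hide the differences.

from abc import ABC, abstractmethod

class SocialNetwork(ABC):
    """Uniform interface an app uses, regardless of the underlying OSN."""

    @abstractmethod
    def post(self, text: str) -> str: ...

    @abstractmethod
    def friends(self) -> list[str]: ...

class FakeBook(SocialNetwork):
    def __init__(self):
        self._wall, self._friends = [], ["alice", "bob"]
    def post(self, text):
        self._wall.append(text)
        return f"fakebook:{len(self._wall)}"
    def friends(self):
        return list(self._friends)

class ChirpNet(SocialNetwork):
    def __init__(self):
        self._chirps, self._followers = [], ["carol"]
    def post(self, text):
        self._chirps.append(text[:140])  # per-network quirk hidden here
        return f"chirp:{len(self._chirps)}"
    def friends(self):
        return list(self._followers)

def crosspost(networks, text):
    """An app need not know which OSNs it is running against."""
    return [n.post(text) for n in networks]

ids = crosspost([FakeBook(), ChirpNet()], "hello")
print(ids)  # ['fakebook:1', 'chirp:1']
```

Future-proofing in this style comes from adding adapters, not changing app code: when an OSN dies or a new one becomes fashionable, only the adapter set changes.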
No abstract available.
In this thesis, we present the results of a user study that compares three different selection methods for moving targets in 1D and 2D space. The standard Chase-and-Click method involves pursuing an onscreen target with the mouse pointer and clicking on it once directly over it. The novel Click-to-Pause method involves first depressing the mouse button to pause all onscreen action, moving the cursor over the target, and releasing the mouse button to select it. The Hybrid method combines the initial pursuit with the ability to pause the action by depressing the mouse button, affording an optimization of the point of interception. Our results show that the Click-to-Pause and Hybrid methods result in lower selection times than the Chase-and-Click method for small or fast targets, while the Click-to-Pause technique is fastest overall for small, fast targets. We integrate the more practical Hybrid method into a multi-view video browser to enable the selection of hockey players in a pre-recorded hockey game. We demonstrate that the majority of correct player selections were performed while the video was paused and that our method of displaying extraneous information has no effect on selection task performance. We develop a kinematic model based on movement speed and direction in 1D as an adjustment to the effective width and distance of a target. Our studies show that target speed assists users when a target is approaching, up to a critical velocity at which direction becomes irrelevant and speed is entirely responsible for the index of difficulty. In addition, we suggest that existing linear and discrete models of human motor control are inadequate for modeling the selection of a moving target and recommend the minimum jerk law as a guide for measuring human motor acceleration.
By combining our empirical results from moving target selection tasks in 1D with our theoretical model for motor control, we propose an extension to Fitts’ Law for moving targets in 2D polar space.
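As a rough illustration of how such an extension might adjust Fitts' index of difficulty, the sketch below modifies the effective distance by target speed and direction. The adjustment form and the constant `c` are assumptions made for illustration; the thesis's actual 2D polar formulation is not reproduced here.

```python
# Illustrative sketch: adjusting Fitts' index of difficulty (ID) for a
# moving target. The speed/direction adjustment is a hypothetical form,
# not the thesis's model.

import math

def fitts_id(distance, width):
    """Shannon formulation of Fitts' index of difficulty."""
    return math.log2(distance / width + 1)

def moving_target_id(distance, width, speed, approaching, c=0.5):
    """An approaching target effectively shrinks the distance to cover;
    a receding one grows it. c scales the speed contribution."""
    if approaching:
        d_eff = max(width, distance - c * speed)
    else:
        d_eff = distance + c * speed
    return fitts_id(d_eff, width)

static = fitts_id(200, 20)
toward = moving_target_id(200, 20, speed=100, approaching=True)
away = moving_target_id(200, 20, speed=100, approaching=False)
print(f"static={static:.2f} approaching={toward:.2f} receding={away:.2f}")
```

Under this assumed form, an approaching target has a lower ID than a static one and a receding target a higher ID, matching the empirical observation that target speed assists users when the target is approaching.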