Elizabeth Croft

Professor

Relevant Degree Programs


Graduate Student Supervision

Doctoral Student Supervision (Jan 2008 - May 2019)
Towards enabling human-robot handovers : exploring nonverbal cues for fluent human-robot handovers (2018)

Fundamental human-to-human interactions - sharing spaces and tools, handing over objects, carrying objects together - are part of the everyday experience; for most people, handing an object to another person is a natural and seemingly effortless task. In the context of human-robot interaction, however, smooth and seamless handover remains an open problem of fundamental interest to robotics designers, integrators and users alike. This thesis explores how nonverbal cues exhibited during robot giving and receiving behaviours change how users perceive the robot and affect the handover task. The work also investigates how robots can recognize and interpret expressions conveyed by a human giver to infer handover intent.

Over the course of several user studies examining human-human and human-robot handovers, the role of nonverbal cues such as gaze and object orientation in establishing the fluency and efficiency of robot-to-human handovers is investigated. These studies provide insights into how robots can be trained through observation of human-to-human handovers. Furthermore, this thesis examines the role of nonverbal cues in the less-studied human-to-robot handover interaction. In this exploration, kinematic features from motion-captured skeleton models of a giver are used to establish the intent to hand over, enabling a robot to react appropriately and receive the object. Additionally, changes in user perception and in the geometry and dynamics of human-to-robot handovers are explored by varying the robot receiver's initial pose, grasp type during handover, and retraction speed after handover. Findings from this thesis demonstrate that nonverbal cues such as gaze and object orientation in robot-to-human handovers, and kinodynamics in human-to-robot handovers, can significantly affect multiple aspects of the interaction, including user perception, fluency, legibility, efficiency, geometry and fluidity of the handover. Using a machine learning approach, handover intent could also be recognized effectively from the nonverbal kinematics of the giver's pose. The work presented in this thesis thus indicates that nonverbal cues can serve as a powerful medium by which details of a handover can be subtly communicated to the human partner, resulting in a more natural experience in this ubiquitous, collaborative activity.
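
The intent-recognition step described above can be illustrated with a minimal, hypothetical sketch of classifying handover intent from motion-capture kinematics. The feature set (wrist pose and velocity relative to the torso), the scikit-learn SVM classifier, and the synthetic placeholder data below are assumptions for illustration only, not the features or classifier used in the thesis.

```python
# Hypothetical sketch: classifying handover intent from motion-capture kinematics.
# Feature set, classifier choice, and placeholder data are illustrative assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def kinematic_features(window):
    """window: (T, 2, 3) array of wrist and torso positions over T frames."""
    wrist, torso = window[:, 0, :], window[:, 1, :]
    rel = wrist - torso                          # wrist position relative to the torso
    vel = np.gradient(rel, axis=0)               # frame-to-frame wrist velocity
    speed = np.linalg.norm(vel, axis=1)
    return np.hstack([rel[-1], vel[-1], [speed.mean()]])

rng = np.random.default_rng(0)
# Placeholder data standing in for labelled motion-capture windows (30 frames each);
# real features would come from the recorded skeleton models.
windows = rng.normal(size=(40, 30, 2, 3))
labels = np.array([0, 1] * 20)                   # 1 = "intends to hand over"

X = np.array([kinematic_features(w) for w in windows])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X, labels)
# At run time, the robot would score a sliding window of live skeleton data and begin
# its receiving motion once the predicted intent probability crosses a threshold.
print(clf.predict_proba(X[:1]))
```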

View record

Path-invariant and time-optimal motion control for industrial robots (2017)

This thesis presents practical methods for planning and control to improve the motion performance of industrial robots. Particular attention is given to the commercial six-degree-of-freedom articulated robot with a low-cost generic controller. A comparative study of motion control methods demonstrated that both smooth trajectory planning and filtering techniques, when combined with traditional proportional-derivative control, are limited in achievable performance due to reduced accelerations (smooth trajectory) or large path distortions (filtering technique). Instead, faster and more accurate motion is achieved with a low-order trajectory, namely a trapezoidal velocity profile, combined with a feedforward control design based on an elastic model. The key component that makes the latter approach more appealing is the delay-free dynamic input shaper embedded in the feedforward control. Following the results of the comparative study, two innovations are proposed to achieve path-invariant, time-optimal motion. First, an online time-optimal trapezoidal velocity profile planned along multiple path segments is presented. The trajectory can be planned for arbitrary boundary conditions and path curvatures with only four system-dynamics computations per path segment. Next, a novel control method based on the flexible-joint dynamic model is proposed to achieve high tracking performance for the proposed trajectory. The proposed nonlinear multivariable control can place the closed-loop poles arbitrarily with only position and velocity feedback. Real-world experiments with commercial industrial robots are carried out to validate the presented methods.
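
As a concrete illustration of the profile type referred to above, the following is a minimal sketch of a trapezoidal velocity profile for a single path segment, assuming zero boundary velocities and symmetric velocity and acceleration limits. The thesis's online planner additionally handles arbitrary boundary conditions and path curvature, which this sketch does not attempt.

```python
# Minimal sketch of a trapezoidal velocity profile for one path segment, assuming
# zero boundary velocities. This shows only the basic profile shape, not the
# thesis's online multi-segment planner.
import numpy as np

def trapezoidal_profile(distance, v_max, a_max, dt=0.001):
    t_acc = v_max / a_max
    d_acc = 0.5 * a_max * t_acc ** 2
    if 2 * d_acc > distance:                     # triangular case: v_max never reached
        t_acc = np.sqrt(distance / a_max)
        v_peak, t_cruise = a_max * t_acc, 0.0
    else:
        v_peak = v_max
        t_cruise = (distance - 2 * d_acc) / v_max
    t_total = 2 * t_acc + t_cruise
    t = np.arange(0.0, t_total, dt)
    v = np.where(t < t_acc, a_max * t,
        np.where(t < t_acc + t_cruise, v_peak, a_max * (t_total - t)))
    return t, v

t, v = trapezoidal_profile(distance=0.5, v_max=1.0, a_max=4.0)
```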

View record

What Should a Robot Do?: Design and Implementation of Human-like Hesitation Gestures as a Response Mechanism for Human-Robot Resource Conflicts (2017)

No abstract available.

Where did it go? : regaining a lost target for robot visual servoing (2016)

When a robotic visual servoing/tracking system loses sight of its target, the servo fails due to loss of input. To resolve this problem, a search method is required to generate efficient actions and bring the target back into the camera field of view (FoV) as soon as possible. For high-dimensional platforms such as a camera-mounted manipulator (an eye-in-hand system), such a search must address the difficult challenge of generating efficient actions in an online manner while respecting visibility and kinematic constraints.

This work considers two common scenarios of visual servoing/tracking failure: when the target leaves the camera FoV, and when visual occlusions (occlusions, for brevity) disrupt the process. To handle the first scenario, a novel algorithm called lost target search (LTS) is introduced to plan efficient sensor actions online. To handle the second scenario, an improved algorithm called the lost target recovery algorithm (LTRA) allows a robot to look behind an occluder during active visual search and re-acquire its target in an online manner. The overall algorithm is then implemented on a telepresence platform to evaluate the necessity and efficacy of autonomous occlusion handling for remote users. Occlusions can occur when users in remote locations are engaged in physical collaborative tasks, leading to frustration and inefficient collaboration. Therefore, two human-subjects experiments are conducted (N=20 and N=36, respectively) to investigate the following interlinked research questions: a) what are the impacts of occlusion on telepresence collaborations, and b) can autonomous handling of occlusions improve the telepresence collaboration experience for remote users? Results from the first experiment demonstrate that occlusions introduce a significant social interference that forces collaborators to reorient or reposition themselves. Results from the second experiment indicate that the use of an autonomous controller yields a remote-user experience that is more comparable (in terms of vocal non-verbal behaviors, task performance and perceived workload) to collaborations performed by two co-located parties. These contributions represent a step forward in making robots more autonomous and user friendly while interacting with human co-workers, a necessary next step for the successful adoption of robots in human environments.
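
A minimal sketch of the trigger that such a recovery behaviour relies on is shown below: projecting the last known target position into the image and switching from servoing to a search mode once it leaves the field of view. The camera intrinsics and margins are assumptions, and the LTS/LTRA planners themselves are not reproduced here.

```python
# Minimal sketch: detecting loss of the visual target and switching from servoing
# to a recovery/search mode. Intrinsics and thresholds are assumed values; the
# LTS/LTRA search planners from the thesis are not reproduced.
import numpy as np

IMG_W, IMG_H = 640, 480
K = np.array([[525.0, 0.0, IMG_W / 2],          # assumed pinhole camera intrinsics
              [0.0, 525.0, IMG_H / 2],
              [0.0, 0.0, 1.0]])

def project(point_cam):
    """Project a 3-D point expressed in the camera frame onto the image plane."""
    u, v, w = K @ point_cam
    return np.array([u / w, v / w]) if w > 0 else None

def target_visible(point_cam, margin=10):
    px = project(point_cam)
    return (px is not None and margin <= px[0] <= IMG_W - margin
            and margin <= px[1] <= IMG_H - margin)

mode = "servo"
target_estimate = np.array([0.1, 0.05, 0.8])    # last known target position (m)
if not target_visible(target_estimate):
    mode = "search"    # hand control to a lost-target search planner (e.g. LTS)
print(mode)
```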

View record

Master's Student Supervision (2010 - 2018)
Haptic cues in bimanual cooperative transport of large objects (2018)

As the interest in, and demand for, personal and service robots increase, research studies in human-robot interaction, especially those involving physical contact between robots and humans, have become increasingly important. As an example of physical human-robot interaction (pHRI) research, cooperative object transport has been investigated extensively. However, there is a gap in studies on bimanual cooperative object transport, a carrying mode essential for the transport of large objects.

This research investigated human-human haptic interaction during cooperative bimanual transport of a large object. Eight pairs of human subjects (leader-follower dyads) were instructed to carry a large object with both hands and move cooperatively in the anteroposterior direction. The study focused on two haptic cues, the average rate of change of force (ARCF) and the interaction duration ($t_{interaction}$), employed by leaders and followers during the initiation phase of object transport. A custom-designed frame mounted with a load cell and an accelerometer was built to measure haptic interaction and transport movement in the anteroposterior direction. The experimental data showed that the leaders employed a repetitive ARCF during the initiation phase across trials. The interaction duration $t_{interaction}$, the time taken by followers to respond to the leader's initiation, was also found to be repetitive. These findings were then modeled: the impedance model of a human arm during the initiation phase of bimanual cooperative transport was computed from the experimental data. Next, the expected interaction duration ($\hat{t}_{interaction}$), computed by inputting the average rate of change of force to the impedance model, was compared to the average value of the actual interaction duration ($\bar{t}_{interaction}$) collected through the human-human study. The comparison showed that $\hat{t}_{interaction}$ was larger than $\bar{t}_{interaction}$ but of comparable magnitude. The findings of the human-human bimanual cooperative object transport study and the subsequent modeling provide a basis for the future development of a controller for human-robot cooperative transport of a large object.
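
The kind of computation described above can be illustrated with a small sketch: an assumed second-order (mass-damper-spring) arm impedance driven by a ramp force whose slope is the ARCF, with the expected interaction duration read off when the displacement crosses a threshold. The parameter values, the threshold criterion, and the second-order form are illustrative assumptions, not the impedance model identified in the thesis.

```python
# Illustrative sketch (not the thesis model): a mass-damper-spring arm impedance
# driven by a ramp force with slope equal to the average rate of change of force.
import numpy as np
from scipy.integrate import solve_ivp

m, b, k = 2.0, 20.0, 150.0         # assumed arm impedance: mass (kg), damping (N·s/m), stiffness (N/m)
arcf = 15.0                        # assumed average rate of change of force (N/s)
x_threshold = 0.01                 # assumed displacement marking the follower's response (m)

def arm_dynamics(t, state):
    x, v = state
    force = arcf * t               # ramp force applied by the leader during initiation
    return [v, (force - b * v - k * x) / m]

sol = solve_ivp(arm_dynamics, (0.0, 2.0), [0.0, 0.0], max_step=0.001)
crossings = np.nonzero(sol.y[0] >= x_threshold)[0]
if crossings.size:
    print(f"expected interaction duration: {sol.t[crossings[0]]:.3f} s")
else:
    print("threshold not crossed within the simulated window")
```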

View record

Data-driven design of expressive robot hands and hand gestures : applications for collaborative human-robot interaction (2017)

Fast and reliable communication between human workers and robotic assistants (RAs) is essential for successful collaboration between these agents. This is especially true in typically noisy manufacturing environments that render verbal communication less effective. This thesis investigates the efficacy of the nonverbal communication capabilities of robotic manipulators that have poseable, three-fingered end-effectors (hands). This work explores the extent to which different poses of a typical robotic gripper can effectively communicate instructional messages during human-robot collaboration. Within the context of a collaborative car door assembly task, a series of three studies was conducted. Study 1 empirically explored the types of hand configurations that humans use to nonverbally instruct another person (N=17). Based on the findings from Study 1, Study 2 examined how well human gestures with frequently used hand configurations were understood by recipients of the message (N=140). Finally, Study 3 implemented the most readily recognized human hand configurations on a 7-degree-of-freedom (DOF) robotic manipulator to investigate the efficacy of human-inspired hand poses on a robotic hand compared to an unposed hand (N=100).

Contributions of this work include a set of hand configurations humans commonly use to instruct another person in a collaborative assembly scenario, as well as Recognition Rate and Recognition Confidence measures for the gestures that humans and robots expressed using different hand configurations. The experimental results indicate that most gestures are better recognized, and with a higher level of confidence, when displayed with a posed robot hand. Guidelines and principles for the mechanical design of robotic hands are provided based on these results.
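
The two measures named above are simple to compute once study responses are tabulated; the following is an illustrative sketch in which the column names, rating scale, and data are placeholders rather than the thesis's analysis code.

```python
# Illustrative sketch of Recognition Rate and Recognition Confidence computed from
# a table of study responses; the data format and values are placeholders.
import pandas as pd

# Each row: one participant's response to one displayed gesture.
responses = pd.DataFrame({
    "gesture":    ["point", "point", "stop", "stop", "grasp", "grasp"],
    "correct":    [True, True, False, True, True, False],   # was the message understood?
    "confidence": [5, 4, 2, 3, 4, 3],                        # assumed 1-5 self-report scale
})

summary = responses.groupby("gesture").agg(
    recognition_rate=("correct", "mean"),
    recognition_confidence=("confidence", "mean"),
)
print(summary)
```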

View record

Two-handed coordination in robots : by combining two one-handed trajectories based on probabilistic models of taskspace effects (2017)

Human environments and tools are commonly designed to be used by two-handed agents. For a robot to make use of human tools or to navigate in a human environment, it must be able to use two arms. Planning motion for two arms is a difficult task, as it requires taking into account a large number of joints and links and involves both temporal and spatial coordination. The work in this thesis addresses these problems by providing a framework for combining two single-arm trajectories to perform a two-armed task. Inspired by results indicating that humans perform better on motor tasks when focusing on the outcome of their movements rather than on their joint motions, I propose a solution that considers each trajectory's effect on the taskspace. I develop a novel framework for modifying and combining one-armed trajectories to complete two-armed tasks. The framework is designed to be as general as possible and is agnostic to how the one-armed trajectories were generated and to the robot(s) being used. Physical roll-outs of the individual arm trajectories are used to create probabilistic models of their performance in taskspace using Gaussian Mixture Models; this approach allows for error compensation. Trajectories are combined in taskspace so as to achieve the highest probability of success and task-performance quality. The framework was tested using two Barrett WAM robots performing the difficult two-armed task of serving a ping-pong ball. For this demonstration, the trajectories were created using quintic interpolations of joint coordinates. The trajectory combinations are tested for collisions in the robot simulation tool Gazebo. I demonstrated that the system can successfully choose and execute the highest-probability collision-free trajectory combination to achieve a given taskspace goal. The framework timed the two single-arm trajectories to within 0.0389 seconds of optimal, approximately the time between frames of the 30 Hz camera. The implemented algorithm successfully ranked the likelihood of success for four out of five serving motions. Finally, the framework's ability to perform a higher-level task was demonstrated by performing a legal ping-pong serve. These results were achieved despite significant noise in the data.
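
The probabilistic ranking step described above can be sketched as follows: fit a Gaussian Mixture Model to the taskspace outcomes of repeated roll-outs of each candidate, then rank candidates by the modelled likelihood of reaching a goal point. The data, goal definition, and number of mixture components are placeholders, not the thesis's experimental values.

```python
# Illustrative sketch: ranking candidate trajectory combinations by the likelihood
# of their modelled taskspace outcomes reaching a goal. Data are placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
goal = np.array([0.4, 0.0])          # desired taskspace outcome (e.g. ball landing point)

def fit_outcome_model(rollouts):
    """rollouts: (N, 2) array of observed taskspace outcomes for one candidate."""
    return GaussianMixture(n_components=2, random_state=0).fit(rollouts)

def goal_log_likelihood(model, goal_point):
    return model.score_samples(goal_point.reshape(1, -1))[0]

# Placeholder roll-out data for three candidate trajectory combinations.
candidates = {name: rng.normal(loc=goal + offset, scale=0.05, size=(30, 2))
              for name, offset in [("A", [0.00, 0.01]), ("B", [0.05, 0.02]), ("C", [0.10, 0.05])]}

ranking = sorted(candidates,
                 key=lambda n: goal_log_likelihood(fit_outcome_model(candidates[n]), goal),
                 reverse=True)
print("candidates ranked by modelled likelihood of reaching the goal:", ranking)
```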

View record

Adaptation of inter-limb control during robot-simulated human standing balance (2016)

Re-learning to maintain standing balance in the presence of a paretic lower limb is important for many stroke survivors. Models of the inter-limb adaptations of the central nervous system in its role as the balance controller can aid the development of post-stroke balance therapies. This thesis quantifies such inter-limb adaptations in healthy participants. Two studies examine whether asymmetrically manipulating the limbs' contributions to simulated standing balance (i.e., ankle torque gains) using a robotic balance platform can shift balance control toward a targeted limb.

In the first study, virtually weakening a limb in the medial-lateral direction (decreasing its contribution, or input gain, to the simulation) significantly shifted weight distribution, but not anterior-posterior torque variance, towards the virtually weakened limb. Asymmetrically manipulated anterior-posterior limb contributions also did not produce observable changes in torque, despite the expectation that the balance controller would adapt and prefer the virtually strengthened (gain-increased) limb.

The second study further investigates manipulating anterior-posterior limb contributions and whether the balance controller is optimally adaptive. The protocol's torque gain values, unlike those of the previous study, required the balance controller to adopt a new strategy to remain upright. The targeted limb was virtually strengthened by a factor of two (gain of two) while the other limb was virtually reversed (gain of negative one). Two measures of balance contribution were calculated using (1) root-mean-square torque during quiet stance and (2) the balance controller's frequency response functions identified during perturbed stance. Over a two-day protocol with gains alternating between normal and manipulated values on each day, significant shifts of balance contributions were observed within and between days. The results demonstrate that the central nervous system can adapt inter-limb balance coordination in the absence of sensory feedback that explicitly communicates the asymmetrical manipulation of the balance dynamics. Anterior-posterior torque gain manipulations show promise as a therapy for reducing balance asymmetries, which is crucial for restoring the mobility and independence of stroke survivors. As an additional mode of balance therapy, this novel method may enhance the effectiveness of existing stroke rehabilitation programs. Future work will address the applicability of this protocol to patient populations.
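
The first balance-contribution measure above is straightforward to illustrate: root-mean-square ankle torque per limb during quiet stance, expressed as each limb's share of the total. The signals, sampling rate, and duration in this sketch are synthetic placeholders, not data from the studies.

```python
# Illustrative sketch: per-limb RMS ankle torque and the resulting balance
# contribution share. Signals here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
fs, duration = 100, 60                                     # assumed 100 Hz, 60 s of quiet stance
t = np.arange(0, duration, 1 / fs)
torque_left = 5.0 + 1.5 * rng.standard_normal(t.size)      # N·m, placeholder signals
torque_right = 5.0 + 0.8 * rng.standard_normal(t.size)

def rms(x):
    return np.sqrt(np.mean(np.square(x)))

rms_l, rms_r = rms(torque_left), rms(torque_right)
contribution_left = rms_l / (rms_l + rms_r)
print(f"left-limb balance contribution: {contribution_left:.2f}")
```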

View record

The evolution from Rydberg gas to plasma in an atomic beam of Xe : with comparative simulations to a strongly blockaded Rydberg gas of Rb (2015)

We study a supersonic beam of cold, dense, xenon Rydberg atoms as it evolves to an ultracold plasma. At early times, while the free electron density is low, d-series Rydberg atoms undergo long-range ℓ-mixing collisions producing states of high orbital angular momentum. These high-ℓ states drive dipole-dipole interactions where Penning ionization provides a seed of electrons in a cloud of Rydberg atoms excited into the 51d state. The electron density increases and reaches the threshold for avalanche into plasma at 25 μs. After 90 μs the plasma becomes fully formed, developing rigidity to a 432 V/cm ionizing field as well as sensitivity to a weak 500 mV/cm field. A shell model was developed to understand the dynamics behind this process.

In addition, in collaboration with the Weidemüller group, a model was developed using Penning ionization to seed the spontaneous avalanche of a cloud of strongly blockaded Rydberg atoms in a MOT.

View record

The perception and control of weight distribution during sit-to-stand in hemiparetic individuals : can asymmetry be attributed to a sense of effort? (2015)

Hemiparetic stroke survivors often produce asymmetric forces when performing bilateral tasks, despite their perception that the forces are equal. It has been hypothesized that this asymmetry is due to the use of effort, as opposed to force magnitude, as the controlled parameter in bilateral force-matching; that is, human perception of force and weight appears to be based more on the intensity of the outgoing motor command than on afferent feedback. This thesis is focused on an experiment that investigated whether this sense of effort (SOE) plays a dominant role in the control and perception of weight distribution during a functional task, sit-to-stand (STS). Eight chronic stroke survivors and eight healthy controls performed a series of STS trials using a robotic assist device, which employed a rate-controlled, 1-degree-of-freedom rotating seat to allow users to perform the STS movement without having to support their entire body weight. The amount of assistance provided by the device was varied across trials in order to measure STS weight distribution under large, medium, and small load magnitudes. The influence of SOE on the control strategy was assessed by evaluating whether the proportion in which the load was distributed between limbs was constant across all load magnitudes. Two types of linear models were fit to each group's data to quantify the relationship between weight distribution and load: one treating the slope as a fixed parameter, and one incorporating an interaction term. Results suggest that while SOE does influence the employed sensory-motor strategy, afferent feedback is a factor as well. Furthermore, the relative contributions of centrally generated versus peripherally generated signals vary among individuals; specifically, SOE has a larger influence on the control strategy of individuals who are more symmetric than of those who are more asymmetric. Based on these results, we suggest that improving stroke survivors' awareness of their movement asymmetries and targeting their perceptual inaccuracies in therapy may facilitate and expedite the rehabilitation process.
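
A sketch of the kind of model comparison described above follows: weight distribution regressed on load magnitude, once with a common slope and once with an interaction term. The data, column names, and the particular interaction (group by load) are assumed placeholders, not the thesis's dataset or exact model specification.

```python
# Illustrative sketch: fitting a fixed-slope model and an interaction model of
# weight distribution versus load. Data and model specification are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 48
df = pd.DataFrame({
    "load": np.tile([0.3, 0.6, 0.9], n // 3),                # assumed load levels (fraction of body weight)
    "group": np.repeat(["stroke", "control"], n // 2),
})
df["weight_dist"] = (0.5 - 0.15 * (df["group"] == "stroke") + 0.05 * df["load"]
                     + 0.02 * rng.standard_normal(n))        # placeholder limb-share data

fixed_slope = smf.ols("weight_dist ~ load + group", data=df).fit()
interaction = smf.ols("weight_dist ~ load * group", data=df).fit()
print(fixed_slope.params, interaction.params, sep="\n")
# Comparing the two fits (e.g. by an F-test or AIC) indicates whether the
# load-dependence of weight distribution differs between groups.
```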

View record

Understanding human balance through applied robotics : exploring the roles of ankle motion and the vestibular system in maintaining standing balance (2014)

This thesis details the implementation and application of a robotic system for investigating the roles of somatosensory feedback and the human vestibular apparatus in maintaining standing balance.

A 6-degree-of-freedom Stewart platform is employed to explore the human balance system in ways not possible under normal standing conditions. This robotic system, RISER (Robot for Interactive Sensory Engagement and Rehabilitation), uses a physics-based model to simulate a variety of balance conditions for participants while they are secured to the system, making it possible to modify or isolate aspects of the balance control system for study.

The first study explores the role of somatosensory feedback using a robotic "ankle-tilt" platform, which was designed and implemented on the RISER system. The new platform enables independent manipulation of the ankles during balance simulations. Results demonstrate that providing accurate somatosensory feedback plays a significant role in improving balance control during standing simulations, through reduction of sway amplitude and smoother motion during deliberate sway. The addition and validation of this platform open new avenues for research involving incorrect, delayed, or partial somatosensory feedback, to study the effects of varying these parameters on balance performance.

In the second set of studies, a new technique is developed for investigating the gains and delays of the vestibular organs, employing the ankle-tilt platform. These studies utilized sinusoidal Galvanic Vestibular Stimulation (GVS) to generate an isolated vestibular error signal, producing sensations of motion. The RISER system is used to relate the response to GVS to the response to physical motions. The author investigates the perception and reflex responses to GVS using the RISER system, and demonstrates that sinusoidal GVS and rotation can be combined to produce superimposed perceptions or reflex responses. The author also compares the frequency-dependent phase relationships between GVS and rotation, and finds that they do not conform to prior model expectations. Possible reasons for these discrepancies are examined, and the repercussions for the existing understanding of the human balance model are considered.
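
One common way to estimate such a frequency-dependent phase relationship is from the cross-spectral density between a sinusoidal stimulus and a measured response; the sketch below uses synthetic placeholder signals and is not the analysis performed in the thesis.

```python
# Illustrative sketch: estimating the phase of a response relative to a sinusoidal
# stimulus (e.g. GVS current vs. measured response) from the cross-spectral density.
# Signals, stimulus frequency, and delay are synthetic placeholders.
import numpy as np
from scipy.signal import csd

fs, f_stim, delay = 200.0, 0.5, 0.3                 # sampling rate (Hz), stimulus (Hz), lag (s)
t = np.arange(0, 120, 1 / fs)
stimulus = np.sin(2 * np.pi * f_stim * t)
response = (0.8 * np.sin(2 * np.pi * f_stim * (t - delay))
            + 0.1 * np.random.default_rng(4).standard_normal(t.size))

freqs, Pxy = csd(stimulus, response, fs=fs, nperseg=4096)
idx = np.argmin(np.abs(freqs - f_stim))
phase_deg = np.degrees(np.angle(Pxy[idx]))
print(f"response phase relative to stimulus at {f_stim} Hz: {phase_deg:.1f} degrees")
```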

View record

A human-inspired controller for robot-human object handovers (2012)

No abstract available.

A new platform for studying human balance control : design, validation, and experiments (2012)

This thesis provides insight and novel tools for investigation into the neuromotor control of human standing balance. To maintain upright standing, the human body integrates sensory inputs and activates lower-limb muscles in a coordinated manner. Balance control mechanisms are not well understood, largely due to the lack of experimental tools. Existing devices modify sensory information or mechanically disturb the body; however, both approaches can induce unnatural corrective responses. One approach that does not perturb normal control mechanisms is to allow humans to balance in an immersive physics simulator, but no appropriate tool has been available. Furthermore, control models that describe unperturbed (quiet) standing are typically evaluated in computer simulations and rarely tested experimentally by activating human muscles. This thesis presents two studies that seek to answer the questions: a) Can we engage humans in an immersive balancing task decoupled from their body mechanics? b) Which control models accurately characterize standing balance?

The first study validates the design of a novel robotic system that enables subjects to safely balance according to a programmable physical model. When the system is programmed with a subject's own body mechanics, results show that the torque-angle relationship (load stiffness) is similar to that of normal standing, and that load stiffness increases, as expected, with increasing sway frequency. By providing decoupled control over balance physics, this system enables novel investigations into the neural mechanisms of human standing.

The second study evaluates proposed control models for quiet standing within a control loop that stimulates human muscle actuation. Two factors differentiate the models: activation type and delay-reducing prediction. All evaluated models successfully balance in the absence of natural muscle activation but increase corrective activity and mechanical effort relative to natural standing. Intermittent activation reduces stimulation energy but increases sway. Prediction reduces sway for the intermittent case only. To develop more accurate control models, future work is recommended to reduce sway during intermittent activation, reduce feedback gains, increase predictor compensation, and vary the setpoint angle.

This work contributes to the understanding of balance neurophysiology and may lead to improved control models for body movement in healthy and balance-impaired individuals.
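
To make the comparison of activation types concrete, the following is a minimal sketch, under assumed parameters, of a delayed proportional-derivative controller balancing a single-link inverted-pendulum approximation of the body, with a switch between continuous and intermittent (dead-zone) activation. It is not one of the thesis's evaluated models.

```python
# Illustrative sketch (not a thesis model): delayed PD control of an inverted
# pendulum with continuous or intermittent (dead-zone) activation. All parameters
# are assumptions.
import numpy as np

g, L, m = 9.81, 1.0, 70.0                 # assumed single-link approximation of the body
I = m * L ** 2
Kp, Kd = 1200.0, 300.0                    # assumed feedback gains (N·m/rad, N·m·s/rad)
dt, delay_steps = 0.001, 150              # 1 kHz simulation, 150 ms feedback delay
theta, omega, torque = 0.02, 0.0, 0.0     # small initial lean (rad)
history = [(theta, omega)] * delay_steps
intermittent, dead_zone = True, 0.005     # activation type and trigger threshold (rad)

for _ in range(20000):                    # 20 s of simulated standing
    th_d, om_d = history[0]               # delayed state available to the controller
    if not intermittent or abs(th_d) > dead_zone:
        torque = -(Kp * th_d + Kd * om_d) # corrective ankle torque
    else:
        torque = 0.0                      # intermittent control is quiet inside the dead zone
    alpha = (m * g * L * np.sin(theta) + torque) / I
    theta += omega * dt
    omega += alpha * dt
    history = history[1:] + [(theta, omega)]

print(f"final lean angle: {np.degrees(theta):.2f} degrees")
```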

View record

An exploration of a haptic affect loop through use cases (2012)

This work investigates a novel interaction paradigm of implicit, low-attention user control, accomplished by monitoring a user's physiological state; feedback about system changes made in response to this implicit control is provided to the user through touch. We explored this implicit interaction concept, termed the 'Haptic-Affect Loop' (HALO), in the context of two use cases.

In the first, we developed a HALO interaction to bookmark and then resume listening to an audio stream when interrupted. A user's galvanic skin response is monitored for orienting responses (ORs) to external interruptions; our prototype automatically bookmarks the media, allowing the user to resume listening from the point at which he or she was interrupted. In a controlled environment, we found an OR detection accuracy of 84%. We further investigated the usefulness of two forms of haptic feedback for bookmarking: notification of bookmark placement and display of bookmarks during navigation. Results show that, when paired with visual-spatial indications of where bookmarks have been placed, haptic notification provides a significant navigation-speed benefit over conditions with no notification; performance with a haptic display of bookmarks was also no worse than with a visual display. Participants tended to prefer haptic notification at interruption time, and both haptic and visual display of bookmarks at resumption.

We used a second use case, music listening, as a framework for another HALO implementation that estimates user preference for music. In a pilot experiment, we collected physiological and music-rating data while participants listened to music. Combining these with structural feature data extracted from the music, we obtained state-space models that were used by a Kalman filter to estimate user ratings of music selections. Results currently show poor performance in estimating ratings of music both known and unknown to users, possibly due to a false assumption of system linearity.

The outcome of this effort was a first-pass implementation of HALO that provided insight into its strengths and weaknesses. This work enables further exploration of how HALO can benefit interactions in other contexts by validating the technological feasibility, utility and behavior of the Haptic-Affect Loop.
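
The state-space estimation idea above can be sketched with a scalar latent "rating" state tracked by a Kalman filter from two noisy observed features. The model matrices, noise levels, and observation data are assumptions chosen for illustration, not the models identified in the pilot experiment.

```python
# Illustrative sketch: a scalar-state Kalman filter tracking a latent music rating
# from two noisy observations. All model parameters and data are assumptions.
import numpy as np

A = 0.98                                   # assumed rating-state transition
C = np.array([[0.6], [0.3]])               # assumed map from rating to two observed features
Q = 0.01                                   # assumed process noise variance
R = np.diag([0.2, 0.3])                    # assumed measurement noise covariance
x_hat, P = 0.0, 1.0                        # initial rating estimate and its variance

rng = np.random.default_rng(5)
true_rating = 0.5
for _ in range(200):
    z = C.flatten() * true_rating + rng.multivariate_normal([0.0, 0.0], R)
    # Predict
    x_pred = A * x_hat
    P_pred = A * P * A + Q
    # Update
    S = P_pred * (C @ C.T) + R                       # innovation covariance (2x2)
    K = (P_pred * C.flatten()) @ np.linalg.inv(S)    # Kalman gain as a length-2 vector
    x_hat = x_pred + K @ (z - C.flatten() * x_pred)
    P = (1.0 - K @ C.flatten()) * P_pred

print(f"estimated rating: {x_hat:.2f} (true value {true_rating})")
```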

View record

Affecting affect effectively : investigating a haptic-affect platform for guiding physiological responses (2011)

This thesis describes the development of a platform for touch-guided anxiety management via engagement with a robot pet. An existing physiological sensor suite and "Haptic Creature" robot pet are modified to influence user physiological responses through real-time interaction guided by physiological data. Participant reaction to and perception of the platform are then investigated in several experiments, and the results from these experiments are used to refine the platform design. Finally, an experiment is conducted with elementary school children to investigate the ability of the platform to serve as a comforting presence during a stressful task.

It is found that participants were not able to recognize the Creature mimicking their breathing and heart rates. However, once informed of their physiological link to the Creature, they were able to use the motion of this device to gain a better awareness of their own physiological state. In addition, the presence of the Creature and its activities are correlated with changes in heart rate, breathing rate, skin conductance, and heart rate variability. These changes are suggestive of a reduction in anxiety. Overall, participant response to the platform was positive, with many participants reporting that they found the Creature comforting and calming. Children in particular were receptive to the Creature, and eager to use it in the stressful environment of school testing. It is found, however, that care must be taken to ensure the platform is presented in an age-appropriate manner, as sudden changes in Creature state can be alarming to the user.

The combination of physiological assessment of user affect with a small, physically comforting robot results in a unique system with the potential to serve as a companion or training aid for children or adults with anxiety disorders, especially in clinical and educational settings.

View record

Biomechanical analysis of assisted sit to stand (2011)

A significant number of non-institutionalized older adults have difficulty rising from a chair. Although several assistive devices exist to aid with sit to stand, there is a lack of research comparing and analyzing various modes of assisted sit to stand to characterize their relative effectiveness in terms of biomechanical metrics. In addition, few existing assistive devices have been designed specifically to share the force required to rise between the user and the device, an approach that has the benefit of maintaining both the mobility and the muscular strength of the user. This thesis advances our understanding of different modes of load-sharing sit to stand through empirical quantification. A specially designed sit-to-stand test bed with load-sharing capabilities was fabricated for human-subjects experiments. In addition to an unassisted rise and a static assist using a grab bar, three mechatronic modes of assist, at the seat, waist and arms, were implemented. The test bed employs a closed-loop load-sharing control scheme that requires a user to provide a portion of the effort needed for a successful rise motion. Experiments were performed with 17 healthy older adults using the five aforementioned modes of rise. Force and kinematic sensor measurements obtained during the rise were used as inputs to a biomechanical model of each subject, and each mode of rise was evaluated on key biomechanical metrics extracted from this model relating to stability, knee effort reduction, and rise trajectory. In addition, a questionnaire was administered to determine subjective response to, and preference for, each rise type. Results show that the seat and waist assists provide statistically significant improvements in stability and knee effort reduction, while the arm and bar assists do not provide any biomechanical improvement over the unassisted rise. The assists most preferred by the subjects were the seat and bar assists. Because of subject preference and biomechanical improvements, the seat assist was determined to be the best of the tested modes for providing assistance with sit to stand.
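
The load-sharing idea can be sketched simply: the device estimates the support force required for the rise and supplies only a commanded fraction of it, leaving the remainder to the user. The share fraction, gain, and force profile below are assumptions, not the test bed's actual control scheme.

```python
# Illustrative sketch of a load-sharing assist law: the device supplies a fixed
# fraction of the estimated support force, leaving the rest to the user.
# Share fraction, gain, and the force profile are assumed values.
import numpy as np

share_fraction = 0.4          # fraction of the required force supplied by the device
kp = 50.0                     # gain of a simple first-order force-tracking loop (1/s)
dt = 0.01

def required_support_force(t, body_weight=700.0):
    """Placeholder profile of the vertical support force needed during the rise (N)."""
    return body_weight * np.clip(t / 2.0, 0.0, 1.0) * 0.6

assist_force = 0.0
for step in range(300):                                     # 3 s rise
    t = step * dt
    target = share_fraction * required_support_force(t)
    assist_force += kp * (target - assist_force) * dt       # first-order tracking of the target
user_force = required_support_force(3.0) - assist_force
print(f"device: {assist_force:.0f} N, user: {user_force:.0f} N")
```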

View record

