Research

Z. Erickson, H. M. Clever, G. Turk, C. K. Liu, and C. C. Kemp, "Deep Haptic Model Predictive Control for Robot-Assisted Dressing", 2017. [arXiv] [GitHub] [abstract]
Abstract – Robot-assisted dressing offers an opportunity to benefit the lives of many people with disabilities, such as some older adults. However, robots currently lack common sense about the physical implications of their actions on people. The physical implications of dressing are complicated by non-rigid garments, which can result in a robot indirectly applying high forces to a person's body. We present a deep recurrent model that, when given a proposed action by the robot, predicts the forces a garment will apply to a person's body. We also show that a robot can provide better dressing assistance by using this model with model predictive control. The predictions made by our model only use haptic and kinematic observations from the robot's end effector, which are readily attainable. Collecting training data from real world physical human-robot interaction can be time consuming, costly, and put people at risk. Instead, we train our predictive model using data collected in an entirely self-supervised fashion from a physics-based simulation. We evaluated our approach with a PR2 robot that attempted to pull a hospital gown onto the arms of 10 human participants. With a 0.2s prediction horizon, our controller succeeded at high rates and lowered applied force while navigating the garment around a person's fist and elbow without getting caught. Shorter prediction horizons resulted in significantly reduced performance with the sleeve catching on the participants' fists and elbows, demonstrating the value of our model's predictions. These behaviors of mitigating catches emerged from our deep predictive model and the controller objective function, which primarily penalizes high forces.
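As a rough illustration of the control scheme in this abstract, the sketch below chooses an end-effector action by querying a learned force predictor over a short horizon and penalizing high predicted forces. The `predict_forces` interface, the candidate action set, and the cost weights are hypothetical stand-ins for illustration, not the implementation used in the paper.

```python
import numpy as np

def select_action(predict_forces, obs_history, candidate_actions,
                  w_force=1.0, w_progress=0.1):
    """Pick the candidate end-effector action with the lowest predicted-force cost.

    predict_forces: learned recurrent model (hypothetical interface) mapping
        (obs_history, action) -> predicted force magnitudes over a ~0.2 s horizon.
    candidate_actions: (K, 3) array of candidate end-effector displacements.
    """
    best_action, best_cost = None, np.inf
    for action in candidate_actions:
        predicted = predict_forces(obs_history, action)
        # Primarily penalize high predicted forces; weakly reward forward progress.
        cost = w_force * np.max(predicted) - w_progress * action[0]
        if cost < best_cost:
            best_cost, best_action = cost, action
    return best_action
```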
Z. Erickson, M. Collier, A. Kapusta, and C. C. Kemp, "Tracking Human Pose During Robot-Assisted Dressing using Single-Axis Capacitive Proximity Sensing", 2017. [arXiv] [GitHub] [abstract]
Abstract – Dressing is a fundamental task of everyday living and robots offer an opportunity to assist people with motor impairments. While several robotic systems have explored robot-assisted dressing, few have considered how a robot can manage errors in human pose estimation, or adapt to human motion in real time during dressing assistance. In addition, estimating pose changes due to human motion can be challenging with vision-based techniques since dressing is often intended to visually occlude the body with clothing. We present a method to track a person's pose in real time using capacitive proximity sensing. This sensing approach gives direct estimates of distance with low latency, has a high signal-to-noise ratio, and has low computational requirements. Using our method, a robot can adjust for errors in the estimated pose of a person and physically follow the contours and movements of the person while providing dressing assistance. As part of an evaluation of our method, the robot successfully pulled the sleeve of a hospital gown and a cardigan onto the right arms of 10 human participants, despite arm motions and large errors in the initially estimated pose of the person's arm. We also show that a capacitive sensor is unaffected by visual occlusion of the body and can sense a person's body through fabric clothing.
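A minimal sketch of how such distance estimates could drive the robot, assuming a simple calibrated mapping from capacitance to distance and a proportional height correction; the sensor model, calibration, and controller here are illustrative assumptions and may differ from those in the paper.

```python
def estimate_distance(capacitance, a=1.0, b=0.0):
    """Estimate the distance to the arm from a raw capacitance reading.

    Assumes a simple inverse fit, distance ~= a / capacitance + b, with a and b
    obtained from calibration data (illustrative only).
    """
    return a / capacitance + b

def height_correction(capacitance, desired_distance=0.05, gain=0.5):
    """Proportional adjustment of end-effector height to follow the arm's contour."""
    error = estimate_distance(capacitance) - desired_distance
    # Positive error -> sensor is farther from the arm than desired -> move closer,
    # subject to the robot's velocity limits.
    return gain * error
```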
Z. Erickson, S. Chernova, and C. C. Kemp, "Semi-Supervised Haptic Material Recognition for Robots using Generative Adversarial Networks", 1st Annual Conference on Robot Learning (CoRL 2017), 2017. [pdf] [GitHub] [Poster] [abstract]
Abstract – Material recognition enables robots to incorporate knowledge of material properties into their interactions with everyday objects. For instance, material recognition opens up opportunities for clearer communication with a robot, such as "bring me the metal coffee mug", and having the ability to recognize plastic versus metal is crucial when using a microwave or oven. However, collecting labeled training data can be difficult with a robot, whereas many forms of unlabeled data could be collected relatively easily during a robot's interactions. We present a semi-supervised learning approach for material recognition that uses generative adversarial networks (GANs) with haptic features such as force, temperature, and vibration. Our approach achieves state-of-the-art results and enables a robot to estimate the material class of household objects with ~90% accuracy when 92% of the training data are unlabeled. We explore how well this generative approach can recognize the material of new objects and we discuss challenges facing this generalization. In addition, we have released the dataset used for this work which consists of time-series haptic measurements from a robot that conducted thousands of interactions with 72 household objects.
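The discriminator objective for this style of semi-supervised GAN classifier can be sketched in PyTorch using the common logsumexp formulation with an implicit "generated" class. The exact losses and network architecture in the paper may differ, so treat this as an assumption-laden illustration rather than the method itself.

```python
import torch
import torch.nn.functional as F

def semi_supervised_d_loss(logits_labeled, labels, logits_unlabeled, logits_fake):
    """Discriminator loss for semi-supervised GAN classification.

    The discriminator outputs K class logits (one per material class); an implicit
    (K+1)-th "generated" class is handled via logsumexp. Haptic features such as
    force, temperature, and vibration would be the inputs producing these logits.
    """
    # Supervised term: standard cross-entropy on the small labeled set.
    supervised = F.cross_entropy(logits_labeled, labels)

    # Unsupervised terms: real unlabeled samples should score as "real",
    # generator samples as "generated".
    log_z_unl = torch.logsumexp(logits_unlabeled, dim=1)
    log_z_fake = torch.logsumexp(logits_fake, dim=1)
    loss_real = -(log_z_unl - F.softplus(log_z_unl)).mean()
    loss_fake = F.softplus(log_z_fake).mean()

    return supervised + loss_real + loss_fake
```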
Z. Erickson, A. Clegg, W. Yu, G. Turk, C. K. Liu, and C. C. Kemp, "What Does the Person Feel? Learning to Infer Applied Forces During Robot-Assisted Dressing", 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017. [pdf] [GitHub] [abstract]
Abstract – During robot-assisted dressing, a robot manipulates a garment in contact with a person's body. Inferring the forces applied to the person's body by the garment might enable a robot to provide more effective assistance and give the robot insight into what the person feels. However, complex mechanics govern the relationship between the robot's end effector and these forces. Using a physics-based simulation and data-driven methods, we demonstrate the feasibility of inferring forces across a person's body using only end effector measurements. Specifically, we present a long short-term memory (LSTM) network that at each time step takes a 9-dimensional input vector of force, torque, and velocity measurements from the robot's end effector and outputs a force map consisting of hundreds of inferred force magnitudes across the person's body. We trained and evaluated LSTMs on two tasks: pulling a hospital gown onto an arm and pulling shorts onto a leg. For both tasks, the LSTMs produced force maps that were similar to ground truth when visualized as heat maps across the limbs. We also evaluated their performance in terms of root-mean-square error. Their performance degraded when the end effector velocity was increased outside the training range, but generalized well to limb rotations. Overall, our results suggest that robots could learn to infer the forces people feel during robot-assisted dressing, although the extent to which this will generalize to the real world remains an open question.
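A minimal PyTorch sketch of the recurrent model described here: 9 end-effector measurements in per time step, one force magnitude per body point out. The hidden size and number of output points are illustrative placeholders rather than the paper's exact architecture.

```python
import torch.nn as nn

class ForceMapLSTM(nn.Module):
    """Map end-effector measurements to a force map across a person's limb.

    Input per time step: 9 values (end-effector force, torque, and velocity).
    Output per time step: one inferred force magnitude per point on the limb
    (n_points is a placeholder; the paper reports hundreds of points).
    """
    def __init__(self, n_points=400, hidden_size=50):
        super().__init__()
        self.lstm = nn.LSTM(input_size=9, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, n_points)

    def forward(self, x):
        # x: (batch, time, 9) sequence of end-effector measurements
        h, _ = self.lstm(x)
        return self.head(h)  # (batch, time, n_points) inferred force magnitudes
```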
D. Park, Z. Erickson, T. Bhattacharjee, and C. C. Kemp, "Multimodal Execution Monitoring for Anomaly Detection During Robot Manipulation", in 2016 IEEE International Conference on Robotics and Automation (ICRA), 2016. [pdf] [GitHub] [abstract]
Abstract – Online detection of anomalous execution can be valuable for robot manipulation, enabling robots to operate more safely, determine when a behavior is inappropriate, and otherwise exhibit more common sense. By using multiple complementary sensory modalities, robots could potentially detect a wider variety of anomalies, such as anomalous contact or a loud utterance by a human. However, task variability and the potential for false positives make online anomaly detection challenging, especially for long-duration manipulation behaviors. In this paper, we provide evidence for the value of multimodal execution monitoring and the use of a detection threshold that varies based on the progress of execution. Using a data-driven approach, we train an execution monitor that runs in parallel to a manipulation behavior. Like previous methods for anomaly detection, our method trains a hidden Markov model (HMM) using multimodal observations from non-anomalous executions. In contrast to prior work, our system also uses a detection threshold that changes based on the execution progress. We evaluated our approach with haptic, visual, auditory, and kinematic sensing during a variety of manipulation tasks performed by a PR2 robot. The tasks included pushing doors closed, operating switches, and assisting able-bodied participants with eating yogurt. In our evaluations, our anomaly detection method performed substantially better with multimodal monitoring than single modality monitoring. It also resulted in more desirable ROC curves when compared with other detection threshold methods from the literature, obtaining higher true positive rates for comparable false positive rates.
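Conceptually, the execution monitor can be sketched as follows: an HMM trained on non-anomalous executions scores the multimodal observations seen so far, and an anomaly is flagged when that score drops below a threshold that varies with execution progress. The `hmm_loglik` and `threshold_fn` interfaces are placeholders, and progress is approximated here by normalized time for simplicity; the paper's monitor differs in these details.

```python
def detect_anomaly(hmm_loglik, observations, threshold_fn):
    """Flag an anomaly when the HMM log-likelihood falls below a varying threshold.

    hmm_loglik: returns the log-likelihood of a partial multimodal observation
        sequence under an HMM trained on non-anomalous executions (placeholder).
    threshold_fn: maps execution progress in [0, 1] to a log-likelihood threshold,
        e.g. fit from the spread of likelihoods in the training executions.
    """
    T = len(observations)
    for t in range(1, T + 1):
        progress = t / T  # simplification: progress approximated by normalized time
        if hmm_loglik(observations[:t]) < threshold_fn(progress):
            return True, t  # anomaly detected at step t
    return False, T
```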
D. Park, Y. Kim, Z. M. Erickson, and C. C. Kemp, "Towards Assistive Feeding with a General-Purpose Mobile Manipulator", in ICRA 2016 workshop on Human-Robot Interfaces for Enhanced Physical Interactions, 2016. [pdf] [GitHub] [abstract]
Abstract – General-purpose mobile manipulators have the potential to serve as a versatile form of assistive technology. However, their complexity creates challenges, including the risk of being too difficult to use. We present a proof-of-concept robotic system for assistive feeding that consists of a Willow Garage PR2, a high-level web-based interface, and specialized autonomous behaviors for scooping and feeding yogurt. As a step towards use by people with disabilities, we evaluated our system with 5 able-bodied participants. All 5 successfully ate yogurt using the system and reported high rates of success for the system's autonomous behaviors. Also, Henry Evans, a person with severe quadriplegia, operated the system remotely to feed an able-bodied person. In general, people who operated the system reported that it was easy to use, including Henry. The feeding system also incorporates corrective actions designed to be triggered either autonomously or by the user. In an offline evaluation using data collected with the feeding system, a new version of our multimodal anomaly detection system outperformed prior versions.
Z. Erickson and S. Foley, "On Ramp to Parallel Computing", in Midwest Instruction and Computing Symposium (MICS), 2014. [pdf] [GitHub] [abstract]
Abstract – Today, parallel computers are used in almost every field of study. However, they are difficult to learn to use. Parallel computers are built for performance and must be used via the command line. Our project aims to increase the ease of use of parallel computers. Through a web interface, users will be able to interactively launch jobs on a parallel computer hosted at their university. Users will gradually learn the tools needed to use parallel systems directly, while still being able to do meaningful computation right from the beginning. As users feel more comfortable with parallel computing, they will be given more control of the detailed build, configuration, and launch settings. Throughout this paper we discuss the high-level design and goals of this project, as well as the implementation details of the prototype system we have developed. We also invite users to evaluate the system.
Socially interactive robots are becoming increasingly prevalent in our daily lives. As such, there is interest in expanding exposure to human-robot interaction throughout academia. However, robots can be costly to purchase and build for an entire class of students. We designed Pypr Robots, a set of affordable social robots that draw upon low-cost prototyping techniques from human-computer interaction. Pypr Robots cost only ~$55, yet they are highly customizable and can be built in just a few hours without prior robotics experience, expensive tools, or 3D printers. Find out more about this project on GitHub.
Wearable devices, such as Google Glass or Microsoft HoloLens, raise security concerns due to their integrated first-person cameras. Malware on such a device can quietly access the camera and publicize everything a user does. As part of the TRUST REU program at UC Berkeley, I investigated ways to improve security for wearable devices with Professor Dawn Song. While at Berkeley, I designed a solution that uses convolutional neural networks to visually detect sensitive or private information within a camera's view and heighten security on the camera to prevent malicious activity.