3-D Fingertip Touch Force Prediction Using Active Appearance Model
Navid Fallahinia, PhD Candidate
This research will enable a co-robot to detect the individual finger forces of a human partner using a technique that does not interfere with the human’s haptic sense. This ability could be used in a wide range of applications. It would allow a co-worker robot to mimic a grasp demonstrated by a human partner. For rehabilitative purposes, a robot could be used to detect whether the human partner is effectively performing the required exercises and adjust the workout as needed. A security robot might be trained to recognize hostile grasps and respond accordingly.
Force estimation via fingernail imaging consists of two phases: data collection and image registration. Data is collected using an automated calibration platform that applies forces to the finger while the subject remains relatively passive. Once the images have been collected, they must be registered. In this research, a novel method has been developed that uses an Active Appearance Model (AAM) to register all of an individual’s data. This method consists of the following steps: (1) selecting the training images, (2) choosing landmark points within those images, (3) forming the Shape, Texture, Appearance, and Search Models, (4) registering all the other images using the Search Model, and finally (5) refining the training set and the subsequent models, if needed.
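The model-building portion of step (3) can be illustrated with a minimal sketch. The code below is not the implementation used in this research; it is a generic PCA-based shape model of the kind that underlies an AAM, assuming the landmark sets have already been aligned (e.g., by Procrustes analysis). The function names and the 95% variance threshold are illustrative choices.

```python
import numpy as np

def build_shape_model(landmarks, var_keep=0.95):
    """Build a PCA shape model from aligned training landmarks.

    landmarks: (n_images, n_points * 2) array of interleaved (x, y)
    coordinates, assumed pre-aligned across images.
    Returns the mean shape, the retained eigen-modes, and their variances.
    """
    mean_shape = landmarks.mean(axis=0)
    centered = landmarks - mean_shape
    # SVD of the centered data gives the principal shape modes directly.
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    var = s ** 2 / max(len(landmarks) - 1, 1)
    cum = np.cumsum(var) / var.sum()
    # Keep enough modes to explain var_keep of the shape variance.
    k = int(np.searchsorted(cum, var_keep)) + 1
    return mean_shape, vt[:k], var[:k]

def fit_shape(mean_shape, modes, shape):
    """Project a new landmark set onto the model and reconstruct it."""
    b = modes @ (shape - mean_shape)   # model parameters
    return mean_shape + modes.T @ b    # model-constrained reconstruction
```

In a full AAM, an analogous PCA is built over shape-normalized pixel intensities (the Texture Model), the two are combined into the Appearance Model, and the Search Model then iteratively adjusts the parameters `b` to register a new image, as in step (4).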
3D Force Estimation in Grasping Studies Using Fingernail Imaging via Autonomous Robots
Sonoma Harris, PhD Candidate
Fingernail imaging differs from other methods of sensing finger pad force in that it measures the contact force without restricting the haptic senses or requiring that force sensors be precisely placed in pre-specified contact locations. Thus, using fingernail imaging to measure precision grasping force would simplify the detection of human grasp force. This would facilitate interaction between robots and human partners when the measurement of such grasp forces is required, as in machine learning situations or rehabilitative environments.
The objective of this research is three-dimensional force prediction during four-fingered precision grasping, without restricting the haptic sense of the subject or constraining the contact locations of the fingers. Current grasping experiments require either specified contact locations or the use of gloves that restrict the senses of slip, texture, temperature and vibration.