Development of Visual Imitation-based Emotional Expression Learner
In 2014, we extended the work done with CAESAR to prototype a humanoid robot composed of 3D-printed and commercial components. CAESAR's head consists of 12 DOF, namely: 1 DOF to lift/lower its eyes, 2 DOF to independently control the horizontal panning of its eyes, 4 DOF to rotate and lift/lower its eyebrows, 2 DOF to move the corners of its lips, 1 DOF to open/close its mouth by controlling the lower jaw, and 2 DOF to enable head pan/tilt. The head's outer shell is composed of five separate pieces: two in the back that close the head and can be opened for maintenance, while three in the front independently support the eyebrows, eyes, and mouth to promote modularity. The arms are 30.5 cm long and have 6 DOF each: abduction/adduction and elevation of the shoulder, flexion/extension of the elbow, pronation/supination of the forearm, and flexion/extension and rotation of the wrist. Motors that drive the arm joints are held together by anodized aluminum brackets. Each arm has a four-finger gripper with rubber padding that can open up to 95 mm. Two motor controllers drive the smart servos in the arms, eyes, and head; one servo controller drives all RC servos in the face (eyebrows, lips, and jaw); one microcontroller runs low-level algorithms (e.g., PID controllers to track faces); and two single-board computers implement high-level planning and task management, provide Wi-Fi communication, and stream images from two 5 MP cameras embedded in CAESAR's 3D-printed eyes.
Enabling robots to express themselves non-verbally has been shown to influence their perception as approachable social agents. Eliciting such responses from users is critical to establishing comfort and trust. Thus, we developed code that allows CAESAR to capture video of users' faces and extract details about their facial expressions. Using an active shape model (ASM) approach, a set of key points located at the user's eyebrows, mouth, etc., is used to determine important movements of the user's facial features, or action units, according to the Facial Action Coding System (FACS). After detecting these action units at the user's eyebrows and mouth, CAESAR determines how it must move its own eyebrows and mouth to mimic the user's expressions.
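A minimal sketch of the AU-to-servo pipeline described above, assuming hypothetical landmark names, action-unit heuristics, and servo ranges (these are illustrative placeholders, not CAESAR's actual calibration values):

```python
# Illustrative mapping from ASM landmarks to FACS-style action-unit
# intensities, then to mirrored servo commands. All names/ranges are
# assumptions for the sketch, not the deployed system's values.

def detect_action_units(landmarks):
    """Estimate action-unit intensities (0..1) from normalized key points."""
    clamp = lambda v: max(0.0, min(1.0, v))
    return {
        # Brow raiser (AU1/AU2): vertical gap between brow and eye
        "brow_raise": clamp((landmarks["eye_y"] - landmarks["brow_y"]) / 0.1),
        # Lip-corner puller (AU12): horizontal spread of the mouth corners
        "smile": clamp((landmarks["mouth_right_x"] - landmarks["mouth_left_x"] - 0.3) / 0.2),
    }

def mirror_on_robot(aus):
    """Map AU intensities to servo angle commands (degrees, neutral = 90)."""
    return {
        "brow_servo": 90 + 30 * aus["brow_raise"],
        "lip_corner_servo": 90 + 25 * aus["smile"],
    }

# Example frame: raised brows and a slight smile
landmarks = {"eye_y": 0.42, "brow_y": 0.35,
             "mouth_left_x": 0.35, "mouth_right_x": 0.70}
commands = mirror_on_robot(detect_action_units(landmarks))
```

In a real loop, the landmark dictionary would be refreshed from the ASM fit on each camera frame before recomputing the servo targets.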
Design of Humanoid Facial and Arm Expressions for Conveying Basic Emotions
Design and Control of Humanoid Robot with 3D-Printed Expressive Face
Below are some of the projects and resulting publications of my work in employing mounted smartphones as measurement and control platforms for motor-based laboratory test-beds.
Jared Alan Frank, Ph.D.
Researcher / Roboticist
Design and Control of Humanoid Robotic Head for Object Tracking
A novel behavior modulation technique was formulated with the use of emotions as affective states, allowing CAESAR to learn to produce varied emotional expressions through its facially expressive components, head, and arms. CAESAR was modeled with a personality and a fixed set of abilities that contribute to its expressive behaviors. We present a mechanism to control the expressive subsystem and a strategy for learning expressive behaviors using CAESAR's prior experiences. A user study evaluated the perceived expressiveness of CAESAR for different emotional expressions. In addition, we assessed CAESAR's ability to produce an array of emotions, the time required to produce those emotions, and its ability to interact with users to influence their emotional state.
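A minimal sketch of the behavior-modulation idea: an affective state, biased by a fixed personality, scales the intensity of the expressive subsystems (face, head, arms). The blending weights and gain names are illustrative assumptions, not the published model:

```python
# Hypothetical behavior-modulation sketch: emotion intensity is blended
# with a fixed personality bias, and the result scales each expressive
# subsystem. Weightings (0.7/0.3, 0.8, 0.6) are assumed for illustration.

def modulate(emotion, intensity, personality_bias):
    """Return per-subsystem gains (0..1) for the current affective state."""
    level = 0.7 * intensity + 0.3 * personality_bias.get(emotion, 0.5)
    level = max(0.0, min(1.0, level))
    return {
        "face_gain": level,        # scales facial servo excursions
        "head_gain": 0.8 * level,  # head motions slightly subdued
        "arm_gain": 0.6 * level,   # arm gestures most conservative
    }

# A robot with a cheerful personality expressing strong happiness
gains = modulate("happiness", 0.9, {"happiness": 0.8})
```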
CAESAR's facial and arm DOF were used to heuristically design expressions for Ekman's six basic emotions: happiness, sadness, anger, surprise, fear, and disgust. These six emotional expressions provided the foundation for social human-robot interaction research conducted with CAESAR from 2015-2017.
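One way to picture the heuristic design is as a lookup table of target poses per emotion. The DOF names and normalized values (-1..1) below are hypothetical placeholders, not CAESAR's calibrated poses:

```python
# Illustrative pose table for Ekman's six basic emotions.
# All target values are assumed for the sketch.

EXPRESSIONS = {
    "happiness": {"brows": 0.3,  "lip_corners": 0.8,  "jaw": 0.2, "head_tilt": 0.0},
    "sadness":   {"brows": -0.6, "lip_corners": -0.7, "jaw": 0.0, "head_tilt": -0.3},
    "anger":     {"brows": -0.8, "lip_corners": -0.4, "jaw": 0.1, "head_tilt": 0.1},
    "surprise":  {"brows": 0.9,  "lip_corners": 0.1,  "jaw": 0.7, "head_tilt": 0.2},
    "fear":      {"brows": 0.7,  "lip_corners": -0.3, "jaw": 0.4, "head_tilt": -0.2},
    "disgust":   {"brows": -0.4, "lip_corners": -0.5, "jaw": 0.3, "head_tilt": 0.1},
}

def pose_for(emotion):
    """Look up the target DOF pose for a named basic emotion."""
    return EXPRESSIONS[emotion]
```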
CAESAR: A Socially Expressive Humanoid Robot that Learns Emotional Expressions using Behavior Modulation
In early 2013, work began on a robotic head mounted on a pan-tilt platform to enable 2 DOF motion. Using small webcams in its eyes and computer vision algorithms, the robot was programmed to detect objects and people's faces in front of it and to track them around the room (as shown in the videos below). This work enabled research to begin on human-robot interaction topics such as establishing joint attention. Named CAESAR (Cellular-Accessible, Expressive, Semi-Autonomous Robot), the robot could be monitored and controlled from a user's mobile device.
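The tracking loop can be sketched as a proportional controller that steers the pan/tilt servos so the detected face stays centered in the camera frame. Gains, image resolution, and function names below are assumptions for illustration:

```python
# Illustrative face-tracking control step: drive the head so the face
# center approaches the image center. Gains and resolution are assumed.

IMG_W, IMG_H = 640, 480
KP_PAN, KP_TILT = 0.05, 0.05   # proportional gains (degrees per pixel)

def track_step(face_cx, face_cy, pan_deg, tilt_deg):
    """One control update toward centering the face in the frame."""
    err_x = face_cx - IMG_W / 2   # positive -> face is right of center
    err_y = face_cy - IMG_H / 2   # positive -> face is below center
    pan_deg += KP_PAN * err_x
    tilt_deg -= KP_TILT * err_y   # tilt up when the face is high in frame
    return pan_deg, tilt_deg

# Simulated update: a face detected at (420, 180) from a neutral pose
pan, tilt = track_step(420, 180, 90.0, 90.0)
```

In practice this update would run once per camera frame, with the face center supplied by the vision pipeline, and an integral/derivative term could be added to smooth the motion.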