Mobile Mixed-Reality Interfaces that Enhance Human-Robot Interaction in Shared Spaces

Featured Work

Realizing Mixed-Reality Environments with Tablets for Intuitive Human-Robot Collaboration for Object Manipulation Tasks 

Although gesture-based input and augmented reality (AR) facilitate intuitive human-robot interaction (HRI), prior implementations have relied on research-grade hardware and software. This project explored the use of tablets to render mixed-reality visual environments that support human-robot collaboration for object manipulation. A mobile interface was created on a tablet by integrating real-time vision, 3D graphics, touchscreen interaction, and wireless communication. The interface augmented live video of physical objects in the robot's workspace with corresponding virtual objects that users could manipulate to intuitively command the robot to act on their physical counterparts. By generating the mixed-reality environment on the exocentric view provided by the tablet camera, the interface established a common frame of reference in which the user and the robot could effectively communicate spatial information. After challenges stemming from the limited sensing and computational capabilities of mobile devices were addressed, the interface was evaluated with participants to examine task performance and user experience with the proposed approach.

Read the paper
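
To make the "common frame of reference" idea concrete, the sketch below shows one plausible realization of the underlying geometry: fiducial markers on the robot base and on a target object are detected in a single camera frame, the pose of each marker is estimated relative to the camera, and the object's pose is then re-expressed in the robot's frame before a goal is transmitted wirelessly. This is a minimal illustration rather than the interface's actual implementation; the OpenCV ArUco pipeline, marker IDs, camera intrinsics, and JSON-over-UDP command format are all assumptions made for the example.

```python
import json
import socket

import cv2          # requires OpenCV >= 4.7 for the ArucoDetector API
import numpy as np

# --- Placeholder configuration (assumptions for illustration, not from the paper) ---
K = np.array([[900.0,   0.0, 640.0],      # camera intrinsics from a prior calibration
              [  0.0, 900.0, 360.0],
              [  0.0,   0.0,   1.0]])
DIST = np.zeros(5)                        # assume negligible lens distortion
MARKER_LEN = 0.04                         # marker side length in meters (assumed)
ROBOT_ID, OBJECT_ID = 0, 7                # hypothetical marker IDs (robot base, object)
ROBOT_ADDR = ("192.168.1.50", 9000)       # hypothetical robot command endpoint

# 3D marker corners in the marker's own frame, ordered to match ArUco's
# detection order (top-left, top-right, bottom-right, bottom-left).
h = MARKER_LEN / 2.0
CORNERS_3D = np.array([[-h,  h, 0], [h,  h, 0], [h, -h, 0], [-h, -h, 0]],
                      dtype=np.float32)

DETECTOR = cv2.aruco.ArucoDetector(
    cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50),
    cv2.aruco.DetectorParameters())

def marker_poses(frame):
    """Detect markers and return {marker_id: 4x4 camera-to-marker transform}."""
    corners, ids, _ = DETECTOR.detectMarkers(frame)
    poses = {}
    if ids is None:
        return poses
    for quad, marker_id in zip(corners, ids.flatten()):
        ok, rvec, tvec = cv2.solvePnP(CORNERS_3D, quad.reshape(4, 2), K, DIST,
                                      flags=cv2.SOLVEPNP_IPPE_SQUARE)
        if not ok:
            continue
        T = np.eye(4)
        T[:3, :3], _ = cv2.Rodrigues(rvec)   # rotation vector -> rotation matrix
        T[:3, 3] = tvec.ravel()
        poses[int(marker_id)] = T
    return poses

def object_in_robot_frame(poses):
    """Re-express the object's pose in the robot-base frame via the shared camera view."""
    return np.linalg.inv(poses[ROBOT_ID]) @ poses[OBJECT_ID]

def send_pick_command(T_robot_obj):
    """Send the goal position to the robot as a small JSON datagram (assumed protocol)."""
    x, y, z = T_robot_obj[:3, 3]
    msg = json.dumps({"cmd": "pick", "xyz": [float(x), float(y), float(z)]})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(msg.encode(), ROBOT_ADDR)

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)                # stand-in for the tablet's rear camera
    ok, frame = cap.read()
    cap.release()
    if ok:
        poses = marker_poses(frame)
        if ROBOT_ID in poses and OBJECT_ID in poses:
            send_pick_command(object_in_robot_frame(poses))
```

Because both poses are measured in the same camera frame, inv(T_cam_robot) @ T_cam_obj yields the object's location in robot coordinates no matter where the user stands, which is what allows a handheld, moving camera to serve as the shared spatial reference between the user and the robot.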

Below are some of the projects, and the resulting publications, from my work employing mobile devices as measurement, control, and interaction platforms for robotic manipulators. These projects required significant work in robotic manipulation and grasping, as well as in vision-based object detection, pose estimation, and control.

Robotic Manipulation

An evaluation with participants was conducted to examine the task performance and user experience associated with the proposed mobile mixed-reality strategy in comparison to conventional approaches that rely on egocentric or exocentric views from cameras mounted on the robot or in the environment, respectively. Results indicated that, although the conventional approaches are well suited to remote applications, the proposed mobile interface can provide comparable task performance and user experience in shared spaces without the need to install operator stations or vision systems on or around the robot. Moreover, the mobility of the proposed approach gives users the flexibility to direct robots from their most natural visual perspective (at the expense of some physical workload) and to leverage the sensing capabilities of the tablet to expand the robot's perceptual range.
