Jared Alan Frank, Ph.D.

Researcher / Roboticist


An Interactive AR Approach for Autonomous Path Navigation of Mobile Robots

Mobile Robotics


Enabling natural and intuitive communication with robots calls for the design of intelligent user interfaces. As robots are introduced into applications with novice users, the information obtained from such users may not always be reliable. This paper describes a user interface approach that processes and corrects intended paths for robot navigation as sketched by users on a touchscreen. Our approach demonstrates that, by processing video frames from an overhead camera and by using composite Bézier curves to interpolate smooth paths from a small set of significant points, low-resolution occupancy grid maps (OGMs) with numeric potential fields can be continuously updated to correct unsafe user-drawn paths at interactive speeds. The approach generates sufficiently complex paths that bend around both static and dynamic obstacles. The results of an evaluation study show that our approach captures user intent while relieving users from concern about their path-drawing abilities. This work was published and presented at the International Conference on Intelligent User Interfaces (IUI) in Atlanta, GA in April 2015.
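The curve-fitting step can be sketched briefly. The snippet below interpolates a smooth planar path through a handful of significant points using composite cubic Bézier segments; the Catmull-Rom-style placement of the inner control points and the sampling density are illustrative assumptions, not the exact formulation used in the paper.

```python
import numpy as np

def composite_bezier(points, samples_per_segment=20):
    """Interpolate a smooth path through 'points' (N x 2) using composite
    cubic Bezier segments whose inner control points lie along tangents
    estimated from neighboring points (C1 continuity at the joins)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    if n < 2:
        return pts.copy()

    # Estimate a tangent at each significant point from its neighbors.
    tangents = np.zeros_like(pts)
    tangents[0] = pts[1] - pts[0]
    tangents[-1] = pts[-1] - pts[-2]
    if n > 2:
        tangents[1:-1] = 0.5 * (pts[2:] - pts[:-2])

    t = np.linspace(0.0, 1.0, samples_per_segment)[:, None]
    segments = []
    for i in range(n - 1):
        p0, p3 = pts[i], pts[i + 1]
        p1 = p0 + tangents[i] / 3.0      # inner control points along the tangents
        p2 = p3 - tangents[i + 1] / 3.0
        # Cubic Bezier: B(t) = (1-t)^3 p0 + 3(1-t)^2 t p1 + 3(1-t) t^2 p2 + t^3 p3
        seg = ((1 - t) ** 3) * p0 + 3 * ((1 - t) ** 2) * t * p1 \
              + 3 * (1 - t) * (t ** 2) * p2 + (t ** 3) * p3
        segments.append(seg)
    return np.vstack(segments)

# Example: smooth a few significant points extracted from a user-drawn stroke.
smooth_path = composite_bezier([(0, 0), (40, 10), (80, 60), (120, 50)])
```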

As mobile robots experience increased commercialization, the development of intuitive interfaces for human-robot interaction becomes paramount to promoting pervasive adoption of such robots in society. Although smart devices may be useful for operating robots, prior research has not fully investigated the appropriateness of various interaction elements (e.g., touch, gestures, sensors) for rendering an effective human-robot interface. This paper provides an overview of a mobile manipulator and of a tablet-based application for operating it. In particular, the mobile manipulator is designed to navigate an obstacle course and to pick and place objects around the course, all under the control of a human operator using the tablet-based application. The application provides the user with live video captured and streamed by a camera onboard the robot and by an overhead camera. In addition, to remotely operate the mobile manipulator, the application offers the user a menu of four interface elements: virtual buttons, virtual joysticks, touchscreen gestures, and device tilt. To evaluate the intuitiveness of the four interface elements for operating the mobile manipulator, a user study is conducted in which participants' performance is monitored as they operate the mobile manipulator using the designed interfaces. The analysis of the user study shows that the tablet-based application allows even inexperienced users to operate the mobile manipulator without the need for extensive training. This work was published and presented at the American Society of Mechanical Engineers' 12th Biennial Conference on Engineering Systems Design and Analysis (ESDA) in Copenhagen, Denmark in July 2014.
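To make the interface elements concrete, the sketch below shows one plausible way a normalized virtual-joystick deflection or a device tilt could be mapped to left/right wheel speeds of a differential-drive base; the gains, geometry, and tilt limits are illustrative placeholders rather than the values used in the study.

```python
def joystick_to_wheel_speeds(x, y, max_speed=0.5, track_width=0.3):
    """Map a normalized joystick deflection (x: right/left, y: forward/back,
    both in [-1, 1]) to left/right wheel speeds (m/s) of a differential drive.
    Gains and geometry are illustrative placeholders."""
    v = y * max_speed                          # forward velocity
    w = -x * (2.0 * max_speed / track_width)   # turn rate; rightward deflection turns clockwise
    return v - w * track_width / 2.0, v + w * track_width / 2.0

def tilt_to_wheel_speeds(pitch_deg, roll_deg, max_tilt_deg=30.0, **kwargs):
    """Map device tilt (degrees) to wheel speeds by normalizing each axis
    against a comfortable maximum tilt and reusing the joystick mapping."""
    y = max(-1.0, min(1.0, pitch_deg / max_tilt_deg))
    x = max(-1.0, min(1.0, roll_deg / max_tilt_deg))
    return joystick_to_wheel_speeds(x, y, **kwargs)

# Example: a gentle forward tilt with a slight roll to the right.
left, right = tilt_to_wheel_speeds(pitch_deg=12.0, roll_deg=5.0)
```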

Comparing Interface Elements on a Tablet for Intuitive Teleoperation of a Mobile Manipulator

Featured Work

Path Bending: Interactive Human-Robot Interfaces With Collision-Free Correction of User-Drawn Paths


Augmented reality (AR) offers a valuable user interface methodology that can enable users to interact effectively with robots. Smart devices, such as smartphones and tablet computers, are convenient platforms for implementing such an approach since they provide highly interactive touchscreens and responsive graphics capabilities. Moreover, many users already own and are familiar with these devices. This paper presents the development of a path following algorithm that allows a mobile robot to autonomously navigate to a desired location as commanded by a user through an AR interface. Using the touchscreen of a tablet computer, users can supervise and command the mobile robot to move to a new location by simply tapping the desired location on a live overhead view of the robot's environment. A path planner generates the path from the robot's start location to the desired goal location using a Voronoi diagram. The robot is then commanded to follow the path with a method that incorporates a virtual representation of the robot, whose state is determined from observations of the actual robot by the overhead camera. A proportional-plus-derivative (PD) controller is used to control each of the virtual robot's wheels and, in turn, to compute wheel velocity commands that steer the actual robot toward the desired goal location. The mobile application and the performance of the proposed approach are evaluated and validated experimentally. This work was published and presented at the Indian Control Conference (ICC) in Chennai, India in January 2015.
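As a rough illustration of the control step, the sketch below applies separate PD terms to the distance and heading errors toward the next path point and converts the result into left/right wheel velocity commands for a differential-drive robot; the gains, update rate, and error definitions are assumptions for illustration, not the paper's exact formulation.

```python
import math

class PD:
    """Proportional-plus-derivative term on a scalar error."""
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.prev = 0.0

    def update(self, error):
        derivative = (error - self.prev) / self.dt
        self.prev = error
        return self.kp * error + self.kd * derivative

def wheel_commands(pose, target, dist_pd, head_pd, track_width=0.3):
    """Steer a differential-drive robot at pose = (x, y, theta) toward
    target = (x, y) by turning distance/heading errors into wheel speeds."""
    dx, dy = target[0] - pose[0], target[1] - pose[1]
    dist_error = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - pose[2]
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))  # wrap to [-pi, pi]
    v = dist_pd.update(dist_error)      # forward speed from distance error
    w = head_pd.update(heading_error)   # turn rate from heading error
    return v - w * track_width / 2.0, v + w * track_width / 2.0

# Illustrative usage with placeholder gains and a 30 Hz update rate.
dist_pd, head_pd = PD(0.8, 0.05, 1 / 30), PD(2.0, 0.1, 1 / 30)
left, right = wheel_commands((0.0, 0.0, 0.0), (1.0, 0.5), dist_pd, head_pd)
```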

Development of Mobile Mixed-Reality Interfaces for Mobile Robots

Below are some of the projects, and the resulting publications, from my work on utilizing mobile devices as measurement, control, and interaction platforms for mobile robots. Over the course of these projects, significant effort went into mobile application development, video streaming, robot navigation, path planning, and vision-based pose estimation and control.

Mobile Mixed-Reality Interfaces for Human-Swarm Interaction

Development of Algorithms for Human-Robot Collaborative SLAM 

Featured Work

Previous work has shown that mobile mixed-reality has the potential to enable more portable, affordable, and engaging interactions not only with laboratory test-beds but also with robots. While most research efforts seek to advance the robot or to install advanced equipment in the environment to achieve such interactions, we explored providing interfaces on mobile devices, whose software and hardware are already sufficiently advanced. We expect the mobile mixed-reality methodology to provide several benefits for user interactions with mobile robots that significantly improve on approaches that rely on overhead cameras. Specifically, with visual markers affixed in the robot's environment, mobile mixed-reality interfaces can provide users with many of the same advantages uncovered in previous work with laboratory test-beds and robotic manipulation applications. Namely, tasks may be performed seamlessly by robots both indoors and outdoors, without installing any dedicated computer station, using markers in the environment as the spatial reference for positioning and navigating the mobile robot. First, the development of a mobile mixed-reality interface for interacting with a small, simple mobile robot will be completed. Then, participants will be given the interface and asked to complete a mobile manipulation task with objects placed about the environment. To evaluate the performance achieved by human-robot collaborations using the proposed interfaces, quantitative metrics such as the success rate and completion time of the task will be collected. In addition, qualitative information, such as users' task load, will be collected using the NASA Task Load Index. From these data, the study will provide insights into the features and affordances of mobile mixed-reality in the context of interaction with mobile robots.
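The role of the environment markers as a spatial reference can be sketched with planar homogeneous transforms: a goal the user specifies relative to a marker, together with the device's vision-based estimate of the robot relative to that marker, yields the goal in the robot's own frame. The frame names and numeric poses below are illustrative assumptions, not values from the project.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar pose (x, y, heading theta in radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

# Assumed inputs: where the marker was affixed in the world, the robot's pose
# relative to the marker (estimated from the mobile device's camera), and a
# goal the user tapped, expressed in the marker's frame.
T_world_marker = se2(2.0, 1.0, np.pi / 2)    # marker placed in the environment
T_marker_robot = se2(0.5, -0.3, 0.1)         # vision-based estimate
goal_in_marker = np.array([1.2, 0.4, 1.0])   # homogeneous 2D point

# The robot's pose in the world frame, useful for display or global planning.
T_world_robot = T_world_marker @ T_marker_robot

# Express the goal in the robot's own frame to feed the navigation controller.
goal_in_robot = np.linalg.inv(T_marker_robot) @ goal_in_marker
```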