* Co-first authors contributed equally to this work.
Abstract:
Recent advancements in robotics have led to the development of numerous interfaces that make robot navigation more intuitive. However, reliance on traditional 2D displays limits how much information can be visualized simultaneously. Mixed Reality (MR) technology addresses this issue by increasing the dimensionality of information visualization, allowing users to perceive multiple pieces of information concurrently. This paper proposes the Mixed Reality-based Robot Navigation Interface using Optical-see-through MR-beacons (MRNaB), a novel approach that uses MR-beacons created with an "air tap" and situated in the real world. These beacons are persistent, enabling multi-destination visualization and functioning as signal transmitters for robot navigation, which eliminates the need for repeated navigation inputs. Our system is built around four primary functions: "Add", "Move", "Delete", and "Select". These allow, respectively, for adding an MR-beacon, moving its location, deleting it, and selecting an MR-beacon for navigation. To validate its effectiveness, we conducted comprehensive experiments comparing MRNaB with a traditional 2D navigation system. The results show significant improvements in user performance, both objectively and subjectively, confirming that MRNaB enhances navigation efficiency and user experience.
(Demo videos: Add, Move, Select, Delete, and MRNaB overview)
System Design
(Videos demonstrating each function: Add, Move, Select, Delete)
Our system consists of three main modules: the robot ('Kachaka'), a Mixed Reality device that serves as the user interface (HoloLens 2), and ROS 2 Humble as the bridge that sends and receives information between the two. The orange component in the diagram is our novel component supporting the system. Co-localization is performed with Vuforia by placing a special image target at floor level that corresponds to the robot's map. All MR-beacon coordinates are measured relative to this image target, which simplifies their transformation into ROS-style data.
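To make this transformation concrete, the following minimal Python sketch composes a beacon pose, expressed relative to the image target, with the target's known pose in the map frame and publishes it as a ROS 2 navigation goal. This is an illustrative assumption, not our actual implementation: the topic name ('goal_pose'), the 2D pose convention, and all identifiers here are hypothetical.

import math

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


def beacon_to_map(tx, ty, tyaw, bx, by, byaw):
    # Compose a 2D beacon pose (relative to the image target) with the
    # image target's pose (tx, ty, tyaw) in the map frame.
    mx = tx + bx * math.cos(tyaw) - by * math.sin(tyaw)
    my = ty + bx * math.sin(tyaw) + by * math.cos(tyaw)
    return mx, my, tyaw + byaw


class BeaconGoalPublisher(Node):
    def __init__(self):
        super().__init__('beacon_goal_publisher')
        # Hypothetical goal topic; the robot's actual interface may differ.
        self.pub = self.create_publisher(PoseStamped, 'goal_pose', 10)

    def publish_goal(self, mx, my, myaw):
        goal = PoseStamped()
        goal.header.frame_id = 'map'
        goal.header.stamp = self.get_clock().now().to_msg()
        goal.pose.position.x = mx
        goal.pose.position.y = my
        # Planar heading encoded as a quaternion about the z-axis.
        goal.pose.orientation.z = math.sin(myaw / 2.0)
        goal.pose.orientation.w = math.cos(myaw / 2.0)
        self.pub.publish(goal)


def main():
    rclpy.init()
    node = BeaconGoalPublisher()
    # Example: image target at (1.0 m, 2.0 m, 90 deg) in the map frame,
    # beacon placed 0.5 m in front of the target.
    node.publish_goal(*beacon_to_map(1.0, 2.0, math.pi / 2, 0.5, 0.0, 0.0))
    rclpy.spin_once(node, timeout_sec=0.5)  # give DDS discovery a moment
    rclpy.shutdown()


if __name__ == '__main__':
    main()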
To access the MR-beacons for robot navigation, a hand menu is opened with the 'Hand Constraint Palm Up' gesture. The hand menu contains a beacon button and a stage button; the stage button is used only in the experiment. The beacon button expands into six buttons: back, off, add, move, select, and delete. The back button returns to the main menu, and the off button disables all MR-beacon functionality. The system's main functions are the add, move, select, and delete buttons, as sketched below.
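As an illustration only, the following Python sketch models how these four modes could govern an 'air tap'. The real interface runs on HoloLens 2 (Unity/MRTK), so the names here (Mode, BeaconMenu, air_tap) are hypothetical and capture only the state logic described above.

from enum import Enum, auto


class Mode(Enum):
    OFF = auto()
    ADD = auto()
    MOVE = auto()
    SELECT = auto()
    DELETE = auto()


class BeaconMenu:
    def __init__(self):
        self.mode = Mode.OFF
        self.beacons = {}   # beacon id -> (x, y, yaw) relative to the target
        self._next_id = 0

    def set_mode(self, mode):
        # Called when the user presses add/move/select/delete in the hand menu.
        self.mode = mode

    def air_tap(self, x, y, yaw=0.0, beacon_id=None):
        # Interpret an 'air tap' according to the current mode.
        if self.mode is Mode.ADD:
            self.beacons[self._next_id] = (x, y, yaw)
            self._next_id += 1
        elif self.mode is Mode.MOVE and beacon_id in self.beacons:
            self.beacons[beacon_id] = (x, y, yaw)
        elif self.mode is Mode.DELETE:
            self.beacons.pop(beacon_id, None)
        elif self.mode is Mode.SELECT and beacon_id in self.beacons:
            return self.beacons[beacon_id]  # pose to forward as a nav goal
        return None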
Experiments
2D Interface (Baseline) and MR Interface (MRNaB)
To evaluate the effectiveness of our system, we conducted an experiment comparing our proposed system against a 2D system that uses a computer display and mouse as the baseline, since such a setup is more common in daily life.
Environments
(a) shows the real-world experiment environment. (b) shows the experiment environment on the 2D SLAM map, including the area of each stage, where a hollow box represents the area and a filled triangle represents the direction of the robot's pose at the destination. These annotations are not shown in the actual map.
Results
Measurement Results: Total Number of Actions Before Navigation
Given that the p-values for stages 1, 2, 3, and the overall results were below 0.05, we concluded that the differences across these stages were statistically significant.
Participants clearly required fewer beacon actions to navigate the robot accurately with our system than with the traditional 2D system: our system required only 1.59 tries on average, while the 2D system required 2.95. We posit that this difference arises because our system lets participants navigate the robot directly in the real world, unlike the 2D system, making it easier to steer the robot to the desired location.
Measurement Results: Number of Navigation Attempts per Task
Given that the p-values for all stages were below 0.05, we concluded that the differences were statistically significant. Participants clearly required fewer navigation attempts for the robot to reach its destination with our system than with the traditional 2D system: our system required only 1.14 tries on average, while the 2D system required 2.30. We posit that this difference arises because our system lets participants navigate the robot directly in the real world, whereas the 2D system required them to shift their attention and estimate positions from hints on the map.
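For concreteness, a paired comparison of per-participant attempt counts like the one above can be run as follows. The paper does not specify the exact statistical test here, so the Wilcoxon signed-rank test and the arrays below are illustrative assumptions, not the experimental data.

import numpy as np
from scipy.stats import wilcoxon

# Placeholder per-participant attempt counts (NOT the actual results).
mr_tries = np.array([1, 1, 2, 1, 1, 1, 2, 1])
two_d_tries = np.array([3, 2, 3, 2, 3, 2, 3, 3])

print(f"MR mean: {mr_tries.mean():.2f}, 2D mean: {two_d_tries.mean():.2f}")

# Paired two-sided test across participants; p < 0.05 is read as significant.
stat, p = wilcoxon(mr_tries, two_d_tries)
print(f"Wilcoxon signed-rank: statistic={stat}, p={p:.4f}")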
MRNaB Video
Deliver Drink
Teleoperation via MR-Joycon & TF
MR-Joycon
MR-Joycon & TF
Multi MR-Beacon
Add Multi MR-Beacon
Navigation using Multi MR-Beacon
Citation
@misc{iglesius2024mrnab,
title={MRNaB: Mixed Reality-based Robot Navigation Interface using Optical-see-through MR-beacon},
author={Eduardo Iglesius and Masato Kobayashi and Yuki Uranishi and Haruo Takemura},
year={2024},
eprint={2403.19310},
archivePrefix={arXiv},
primaryClass={cs.RO}
}
Contact
Masato Kobayashi (Assistant Professor, Osaka University, Japan)