MRNaB: Mixed Reality-based Robot Navigation Interface using Optical-see-through MR-beacon

Osaka University

Eduardo Iglesius*, Masato Kobayashi*, Yuki Uranishi, Haruo Takemura
* Co-first authors contributed equally to this work.

Abstract:
Recent advancements in robotics have led to the development of numerous interfaces to enhance the intuitiveness of robot navigation. However, the reliance on traditional 2D displays limits how much information can be visualized at once. Mixed Reality (MR) technology addresses this issue by increasing the dimensionality of information visualization, allowing users to perceive multiple pieces of information concurrently. This paper proposes MRNaB, a Mixed Reality-based robot navigation interface using an optical-see-through MR-beacon: a novel approach that places an MR-beacon in the real-world environment to function as a signal transmitter for robot navigation. The MR-beacon is persistent, eliminating the need for repeated navigation inputs for the same location. Our system is built around four primary functions: “Add”, “Move”, “Delete”, and “Select”. These allow for the addition of an MR-beacon, movement of its location, its deletion, and the selection of an MR-beacon as a navigation target, respectively. The effectiveness of the proposed method was validated through experiments comparing it with a traditional 2D system. As a result, MRNaB was shown to improve user performance, both subjectively and objectively, when navigating a robot to a given location.

[Overview video: MRNaB MR-beacon operations — Add, Move, Select, Delete]

System Design

Our system consists of three main modules: the robot, called ‘Kachaka’; the Mixed Reality device serving as the user interface, a HoloLens 2; and the bridge between the two that sends and receives information, ROS 2 Humble. The orange components in the diagram are our novel additions that support the system. Co-localization is achieved with Vuforia by placing a special image target at floor level that corresponds to the robot’s map. All MR-beacon coordinates are measured relative to this image target, which makes the transformation into ROS-compatible data straightforward.
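As a minimal sketch of this transformation, the snippet below composes 2D homogeneous transforms to express a beacon pose, measured relative to the image target, in the robot’s map frame. The calibration values and variable names (T_map_target, T_target_beacon) are illustrative assumptions, not the actual implementation:

import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform (SE(2)) as a 3x3 matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Pose of the floor image target in the robot's map frame
# (hypothetical calibration values).
T_map_target = se2(1.20, 0.50, np.pi / 2)

# MR-beacon pose measured by the HoloLens relative to the image target.
T_target_beacon = se2(0.80, -0.30, 0.0)

# Compose: the beacon's pose in the ROS map frame, ready to be sent as a goal.
T_map_beacon = T_map_target @ T_target_beacon
x, y = T_map_beacon[0, 2], T_map_beacon[1, 2]
theta = np.arctan2(T_map_beacon[1, 0], T_map_beacon[0, 0])
print(f"goal in map frame: x={x:.2f}, y={y:.2f}, yaw={theta:.2f}")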

To access the MR-beacon functions for navigating the robot, a hand menu is opened with the ”Hand Constraint Palm Up” gesture. The hand menu contains a beacon button and a stage button; the stage button is used only for the experiment. The beacon button expands into six further buttons: back, off, add, move, select, and delete. The back button returns to the main menu, and the off button disables all MR-beacon functionality. The main functions of the system are add, move, select, and delete, as sketched below.
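The sketch below models the lifecycle of these four operations. It is an illustrative assumption of the control flow only: the real interface is built on the HoloLens 2 (in Unity/MRTK), and send_goal here stands in for publishing a navigation goal through the ROS 2 bridge.

from dataclasses import dataclass

@dataclass
class Beacon:
    beacon_id: int
    x: float       # position relative to the image target (metres)
    y: float
    yaw: float     # heading in radians

class BeaconManager:
    """Illustrative model of the four MR-beacon operations."""

    def __init__(self, send_goal):
        self._beacons = {}
        self._next_id = 0
        self._send_goal = send_goal  # stand-in for the ROS 2 bridge

    def add(self, x, y, yaw=0.0):
        """Add: place a new persistent MR-beacon."""
        beacon = Beacon(self._next_id, x, y, yaw)
        self._beacons[beacon.beacon_id] = beacon
        self._next_id += 1
        return beacon.beacon_id

    def move(self, beacon_id, x, y, yaw=None):
        """Move: relocate an existing MR-beacon."""
        b = self._beacons[beacon_id]
        b.x, b.y = x, y
        if yaw is not None:
            b.yaw = yaw

    def delete(self, beacon_id):
        """Delete: remove an MR-beacon."""
        del self._beacons[beacon_id]

    def select(self, beacon_id):
        """Select: send the beacon's pose as the robot's navigation goal."""
        b = self._beacons[beacon_id]
        self._send_goal(b.x, b.y, b.yaw)

# Usage: add a beacon once, then re-select it whenever the robot should
# return there, with no need to re-enter the goal.
mgr = BeaconManager(send_goal=lambda x, y, yaw: print(f"navigate to ({x}, {y}, {yaw})"))
bid = mgr.add(0.8, -0.3)
mgr.select(bid)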

Experiments

2D Interface (Baseline) and MR Interface (MRNaB)

2D Interface (Baseline)
MR Interface (MRNaB)

To evaluate the effectiveness of our system, we conducted an experiment comparing the proposed system with a 2D baseline that uses a computer display and mouse, as that setup is more common in daily life.

Environments

(a) shows the real-world experiment environment. (b) shows the experiment environment on the 2D SLAM map, including the area of each stage: a hollow box marks the target area, and a filled triangle marks the direction of the robot’s pose at the destination. These markings are not shown on the actual map.

Results

Total Action Number Before Navigation Measurement Result

Given that the p-values for stages 1, 2, 3, and Overall were below 0.05, we concluded that the differences in these stages were statistically significant. Participants required fewer beacon actions to navigate the robot accurately with our system than with the traditional 2D system: 1.59 tries on average versus 2.95. We posit that this is because our system lets participants navigate the robot directly in the real world, making it easier to guide the robot to the desired location than with the 2D system.

Navigation Number Measurement Results Per Task
Given that the p-values for all stages were below 0.05, we concluded that the differences were statistically significant. Participants required fewer navigation attempts for the robot to reach its destination with our system than with the traditional 2D system: 1.14 tries on average versus 2.30. We posit that this difference arises because our system lets participants navigate the robot directly in the real world, whereas the 2D system requires them to shift their attention to the display and infer positions from the map.
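For illustration, the snippet below shows how such a paired per-participant comparison could be computed. The counts are hypothetical, and the Wilcoxon signed-rank test is an assumed choice; the page reports p-values but does not name the test used here.

# Hedged sketch of a paired non-parametric comparison of action counts.
from scipy import stats

baseline_2d = [3, 2, 4, 3, 3, 3, 4, 2]   # hypothetical tries per participant
mrnab       = [2, 1, 2, 1, 2, 1, 2, 1]

stat, p_value = stats.wilcoxon(baseline_2d, mrnab)
print(f"W={stat}, p={p_value:.3f}")       # p < 0.05 -> significant difference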

MRNaB Video

Deliver Drink
Teleoperation via MR-Joycon & TF
MR Joycon
MR Joycon & TF
Multi MR-Beacon
Add Multi MR-Beacon
Navigation using Multi MR-Beacon

Citation


@misc{iglesius2024mrnab,
    title={MRNaB: Mixed Reality-based Robot Navigation Interface using Optical-see-through MR-beacon},
    author={Eduardo Iglesius and Masato Kobayashi and Yuki Uranishi and Haruo Takemura},
    year={2024},
    eprint={2403.19310},
    archivePrefix={arXiv},
    primaryClass={cs.RO}
}

Contact

Masato Kobayashi (Assistant Professor, Osaka University, Japan)