WB9 Vision-based Navigation and Robot Manipulation
Time : October 13 (Wed) 13:00-14:30
Room : Room 9 (8F Ara)
Chair : Prof. Hyunjin Choi (Sangmyung University, Korea)
13:00-13:15        WB9-1
Realtime Corridor Detection for Mobile Robot Navigation with Hough Transform Using a Depth Camera

Ahmet Saglam, Yiannis Papelis (Old Dominion University, United States)

We present a novel method for real-time corridor detection using a single depth camera. The aim is to detect the corridor from its walls alone, even when objects such as a trash can or a chair are present inside it. Once the corridor has been determined, the robot can be driven smoothly and semi-autonomously along the hallway with simple commands, without hitting the walls. The proposed method fuses layers of occupancy maps extracted from the point cloud into a single occupancy grid map, to which the 2D Hough Transform is applied to extract the lines corresponding to the corridor walls.
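The abstract does not include code; the following is a minimal sketch of the line-extraction step it describes, applying the 2D Hough Transform to a fused occupancy grid with OpenCV. The grid contents, thresholds, and Hough parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import cv2  # OpenCV provides the probabilistic Hough Transform

def detect_wall_lines(occupancy_grid, occ_threshold=0.65):
    """Extract candidate wall lines from a fused 2D occupancy grid.

    occupancy_grid: float array in [0, 1], where 1.0 means occupied.
    Returns a list of line segments (x1, y1, x2, y2) in grid cells.
    """
    # Binarize: keep only confidently occupied cells (walls, obstacles).
    binary = (occupancy_grid > occ_threshold).astype(np.uint8) * 255

    # Probabilistic Hough Transform; a long minLineLength suppresses
    # small obstacles (trash cans, chairs) that are not walls.
    segments = cv2.HoughLinesP(binary, rho=1, theta=np.pi / 180,
                               threshold=40, minLineLength=60, maxLineGap=10)
    if segments is None:
        return []
    return [tuple(s[0]) for s in segments]

# Synthetic corridor: two parallel walls plus a small piece of clutter.
grid = np.zeros((200, 200), dtype=np.float32)
grid[40, 20:180] = 1.0         # upper wall
grid[160, 20:180] = 1.0        # lower wall
grid[100:105, 100:105] = 1.0   # clutter, too short to register as a wall
print(detect_wall_lines(grid))
```

In the full method, pairs of near-parallel detected lines would then be grouped into a corridor estimate; that step is omitted here.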
13:15-13:30        WB9-2
Image-Goal Navigation Algorithm using Viewpoint Estimation

Obin Kwon, Songhwai Oh (Seoul National University, Korea)

This paper tackles the image-goal navigation problem, in which a robot must reach a goal pose specified by a target image. The proposed algorithm estimates the geometric relationship between the target pose and the robot's current pose. Using this estimate, the navigation policy predicts the most appropriate actions for reaching the target pose. We evaluated our method in the Habitat simulator with the Gibson dataset, which provides photo-realistic indoor environments. The experimental results show that the ability to estimate this geometric relationship enables the agent to reach the target pose with substantially higher success rates and in less time.
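As a rough structural sketch of the pipeline the abstract describes, the PyTorch snippet below wires a relative-pose estimator into an action policy. The network shapes, the (dx, dy, dtheta) pose parameterization, and the discrete action set are all illustrative assumptions; the paper's actual architecture may differ.

```python
import torch
import torch.nn as nn

class ViewpointEstimator(nn.Module):
    """Illustrative stand-in: predicts the relative pose (dx, dy, dtheta)
    between the goal image and the current observation from their
    concatenated image embeddings."""
    def __init__(self, embed_dim=512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 256), nn.ReLU(),
            nn.Linear(256, 3))  # (dx, dy, dtheta)

    def forward(self, goal_embed, current_embed):
        return self.head(torch.cat([goal_embed, current_embed], dim=-1))

class NavigationPolicy(nn.Module):
    """Maps the estimated relative pose to a distribution over discrete
    actions (e.g., forward / turn-left / turn-right / stop, as in Habitat)."""
    def __init__(self, num_actions=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, num_actions))

    def forward(self, rel_pose):
        return torch.softmax(self.net(rel_pose), dim=-1)

# One decision step; random embeddings stand in for a CNN image encoder.
estimator, policy = ViewpointEstimator(), NavigationPolicy()
goal, current = torch.randn(1, 512), torch.randn(1, 512)
print(policy(estimator(goal, current)))
```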
13:30-13:45        WB9-3
Semantic Mapping Based on Image Feature Fusion in Indoor Environments

Cong Jin, Armagan Elibol (JAIST, Japan), Pengfei Zhu (Tianjin University, China), Nak-Young Chong (JAIST, Japan)

In this paper, we integrate the RGB features extracted by a classification network and a detection network to improve the robot's scene recognition ability and to make the acquired semantic information more accurate. An image segmentation algorithm labels the areas of interest in the metric map. The fusion algorithm is then applied to obtain the semantic information of each area, while the detection algorithm recognizes the key objects within it. We demonstrate an efficient combination of semantic information with the occupancy grid map.
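One plausible reading of the fusion step is combining per-region scene-classification scores with detection evidence before writing a label into the metric map. The sketch below illustrates that idea; the scene classes, the object-to-scene co-occurrence prior, and the weighting scheme are invented for illustration and are not the paper's algorithm.

```python
import numpy as np

# Hypothetical fusion: detected key objects vote for the scene classes
# they typically appear in, refining the per-region semantic label.
SCENE_CLASSES = ["office", "kitchen", "corridor"]
OBJECT_SCENE_PRIOR = {          # illustrative co-occurrence prior
    "monitor":   np.array([0.8, 0.1, 0.1]),
    "microwave": np.array([0.1, 0.8, 0.1]),
}

def fuse_region_label(cls_scores, detections, alpha=0.6):
    """Combine classification-network scores with detection evidence.

    cls_scores: softmax scores over SCENE_CLASSES for one map region.
    detections: list of (object_name, confidence) from the detector.
    alpha: weight on the classification branch (assumed value).
    """
    evidence = np.zeros(len(SCENE_CLASSES))
    for name, conf in detections:
        if name in OBJECT_SCENE_PRIOR:
            evidence += conf * OBJECT_SCENE_PRIOR[name]
    if evidence.sum() > 0:
        evidence /= evidence.sum()  # normalize detection votes
    fused = alpha * np.asarray(cls_scores) + (1 - alpha) * evidence
    return SCENE_CLASSES[int(np.argmax(fused))]

# A region the classifier finds ambiguous, but a microwave was detected.
print(fuse_region_label([0.40, 0.35, 0.25], [("microwave", 0.9)]))  # kitchen
```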
13:45-14:00        WB9-4
Learning Latent Dynamics from Multi-View Observations for Image-Based Control

Mineui Hong, Songhwai Oh (Seoul National University, Korea)

In this paper, we present a method that leverages data collected under different observation models (e.g., different camera viewpoints or object colors) to improve the sample efficiency of learning a latent representation of the target domain. To this end, our method uses an individual encoder for each observation model, and the encoders are trained with a proposed cyclic loss function to learn a latent representation shared across the observations.
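The exact form of the proposed cyclic loss is not given in the abstract; the sketch below shows one generic possibility, a cross-view latent consistency term that cycles through per-view encoders and pulls latents of paired observations together. Treat the encoder architecture and loss form as assumptions, not the paper's definition.

```python
import torch
import torch.nn as nn

def make_encoder(obs_dim, latent_dim=32):
    """One encoder per observation model (e.g., per camera viewpoint)."""
    return nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                         nn.Linear(128, latent_dim))

def cyclic_consistency_loss(encoders, paired_obs):
    """Cycle through the views and pull the latents of paired observations
    of the same underlying state together. A generic consistency term,
    not necessarily the cyclic loss proposed in the paper."""
    latents = [enc(obs) for enc, obs in zip(encoders, paired_obs)]
    n = len(latents)
    loss = 0.0
    for i in range(n):
        # Cycle i -> (i+1) mod n: each view's latent should match the next.
        loss = loss + ((latents[i] - latents[(i + 1) % n]) ** 2).mean()
    return loss / n

# Two observation models of the same states, flattened to vectors here.
encoders = [make_encoder(64), make_encoder(64)]
obs_a, obs_b = torch.randn(8, 64), torch.randn(8, 64)
print(cyclic_consistency_loss(encoders, [obs_a, obs_b]).item())
```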
14:00-14:15        WB9-5
Trajectory Prediction & Path Planning for an Object Intercepting UAV with a Mounted Depth Camera

Jasper Tan, Arijit Dasgupta, Arjun Agrawal, Sutthiphong Srigrarom (National University of Singapore, Singapore)

A novel control and software architecture, implemented in C++ on ROS, is introduced for object interception by a UAV with a mounted depth camera and no external aid. The architecture is designed to run entirely on board, intercepting objects using only the depth camera and point cloud processing. It employs an iterative trajectory prediction algorithm for non-propelled objects such as a ping-pong ball. A variety of path-planning approaches to object interception, together with their corresponding scenarios, are discussed, evaluated, and simulated in Gazebo. The successful simulations demonstrate the potential of the proposed architecture for the on-board autonomy of object-intercepting UAVs.
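To make the iterative prediction step concrete, here is a minimal sketch of propagating a non-propelled object under gravity and a simple quadratic drag model with forward-Euler integration. The drag model, coefficient, time step, and horizon are illustrative assumptions (the abstract does not specify the dynamics model), and Python stands in for the ROS C++ implementation.

```python
import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2, z-up world frame

def predict_trajectory(p0, v0, dt=0.01, horizon=2.0, drag_coeff=0.05):
    """Iteratively propagate a non-propelled object (e.g., a ping-pong
    ball) under gravity and assumed quadratic drag.

    p0, v0: initial position/velocity, e.g., estimated from
    depth-camera point cloud tracking.
    Returns an (N, 3) array of predicted positions.
    """
    p, v = np.asarray(p0, float), np.asarray(v0, float)
    points = []
    for _ in range(int(horizon / dt)):
        accel = GRAVITY - drag_coeff * np.linalg.norm(v) * v
        v = v + accel * dt    # forward-Euler velocity update
        p = p + v * dt        # forward-Euler position update
        points.append(p.copy())
        if p[2] <= 0.0:       # stop once the object reaches ground level
            break
    return np.array(points)

# Ball launched toward the UAV from 1.5 m height.
traj = predict_trajectory([0.0, 0.0, 1.5], [3.0, 0.0, 2.0])
print(traj[-1])  # predicted terminal point of the trajectory
```

In an interception pipeline, this prediction would be re-run as each new depth-camera measurement arrives, and the path planner would target a point along the predicted trajectory.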
14:15-14:30        WB9-6
Geometric Understanding of Reward Function in Multi-Agent Visual Exploration

Minyoung Hwang, Obin Kwon, Songhwai Oh (Seoul National University, Korea)

Reward shaping has proven to be a powerful tool for improving performance in reinforcement learning. In this paper, we focus on the multi-agent visual exploration task, in which agents must explore novel environments as thoroughly as possible in a time-efficient manner. We present a new reward shaping method based on a geometric understanding of coverage, designing mutual-overlapping and self-overlapping rewards to improve performance. Experiments show that a linearly modeled mutual-overlapping reward function enhances coverage and reduces the total number of timesteps spent on exploration. Furthermore, the highest performance was achieved when global and local self-overlapping rewards were used in addition to the linear mutual-overlapping reward.
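As an illustration of what a linear mutual-overlapping reward might look like, the sketch below rewards each agent for newly covered grid cells and linearly penalizes the fraction of those cells that other agents covered in the same step. The functional form, coefficients, and grid representation are assumptions for illustration; the paper's exact reward may differ.

```python
import numpy as np

def mutual_overlap_reward(new_cells, base_reward=1.0, penalty_slope=1.0):
    """Illustrative linear mutual-overlapping reward for multi-agent
    exploration.

    new_cells: list of per-agent sets of newly observed grid cells
    for the current timestep.
    Returns one shaped reward per agent.
    """
    rewards = []
    for i, cells in enumerate(new_cells):
        # Cells covered by the other agents in the same step.
        others = set().union(*(c for j, c in enumerate(new_cells) if j != i))
        if not cells:
            rewards.append(0.0)
            continue
        overlap_frac = len(cells & others) / len(cells)
        # Reward coverage, penalize mutual overlap linearly in its fraction.
        rewards.append(base_reward * len(cells)
                       - penalty_slope * overlap_frac * len(cells))
    return rewards

# Two agents: agent 0 re-covers half of what agent 1 also saw this step.
a0 = {(0, 0), (0, 1), (1, 0), (1, 1)}
a1 = {(1, 0), (1, 1), (2, 0), (2, 1)}
print(mutual_overlap_reward([a0, a1]))
```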
