TC9 Semantic Map based Robot Navigation in Wide Public Places
Time : October 14 (Thu) 14:50-16:20
Room : Room 9 (8F Ara)
Chair : Dr. Sujeong You (KITECH, Korea)
14:50-15:05        TC9-1
Elevator Button Tracking and Localization for Multi-storey Navigation

Arpan Ghosh, Jeong-Won Pyo, Tae-Yong Kuc(Sungkyunkwan University, Korea)

Elevator button recognition in an indoor multi-storey environment remains a challenging part of indoor navigation for a mobile robot. In this paper, we integrate several computer vision approaches for button recognition and tracking in an indoor multi-storey environment. In scenarios where the frame contains only partial button information, we use inter-button topology information to predict and recreate the missing layout, recovering the undetected buttons that lie outside the frame.
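
As a rough illustration of the inter-button topology idea (a minimal sketch only; the panel layout, function name, and affine-fit strategy below are assumptions for illustration, not the authors' implementation), one can fit an affine mapping from a known button grid to the detected button centres and extrapolate the pixel positions of buttons that were not detected:

import numpy as np

PANEL_LAYOUT = {            # assumed panel topology: label -> (row, col)
    "1": (3, 0), "2": (3, 1),
    "3": (2, 0), "4": (2, 1),
    "5": (1, 0), "6": (1, 1),
    "7": (0, 0), "8": (0, 1),
}

def predict_missing_buttons(detections):
    """detections: dict label -> (u, v) pixel centre of detected buttons."""
    labels = [l for l in detections if l in PANEL_LAYOUT]
    if len(labels) < 3:
        return {}                                  # need >= 3 points for an affine fit
    grid = np.array([[*PANEL_LAYOUT[l], 1.0] for l in labels])   # (N, 3) grid coords
    pix = np.array([detections[l] for l in labels])              # (N, 2) pixel coords
    A, *_ = np.linalg.lstsq(grid, pix, rcond=None)               # (3, 2) affine map
    predicted = {}
    for label, rc in PANEL_LAYOUT.items():
        if label not in detections:
            predicted[label] = tuple(np.array([*rc, 1.0]) @ A)   # extrapolate position
    return predicted

# Example: only buttons 1, 2, 3 are visible; the rest are extrapolated.
print(predict_missing_buttons({"1": (100, 400), "2": (180, 400), "3": (100, 320)}))
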
15:05-15:20        TC9-2
Multi-modal 3D Sensor Data based Obstacle Avoidance Method for Autonomous Driving

Taeyoung Uhm, Jongdeuk Lee, Gi-Deok Bae, Na-Hyun Lee, Young-Ho Choi(Korea Institute of Robotics and Technology Convergence (KIRO), Korea)

This paper presents a real-time 3D sensor data fusion method together with a dual obstacle avoidance method covering short and long ranges, enabling the robot to avoid collisions with obstacles more safely.
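
Since the abstract gives few details, the following is only a hypothetical sketch of what a dual-range policy over fused 3D points might look like (the thresholds and action names are invented for illustration and are not taken from the paper):

import numpy as np

SHORT_RANGE = 1.0    # metres, assumed threshold for reactive avoidance
LONG_RANGE = 5.0     # metres, assumed threshold for replanning

def avoidance_action(fused_points):
    """fused_points: (N, 3) obstacle points in the robot frame."""
    if fused_points.size == 0:
        return "follow_path"
    dists = np.linalg.norm(fused_points[:, :2], axis=1)   # horizontal distances
    nearest = dists.min()
    if nearest < SHORT_RANGE:
        return "reactive_avoid"      # steer or stop immediately
    if nearest < LONG_RANGE:
        return "replan_path"         # ask the global planner for a detour
    return "follow_path"
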
15:20-15:35        TC9-3
Human Recognition in a Cluttered Indoor Environment by Sensor Fusion

Sang-Yoon Kim, Sang-Hoon Lee, Tae-Yong Kuc(Sungkyunkwan University, Korea)

Detecting humans in a cluttered space is an essential requirement for a mobile robot operating in an indoor environment. This paper addresses the problem with a threefold approach. First, distance information is obtained from an RGB-D camera and a LiDAR sensor. Next, the YOLOv3 object detector is used to classify and detect humans in the image produced by the RGB-D camera; the resulting bounding box provides the depth data of the detected human. Finally, the selected depth information is fused with the LiDAR position information for more accurate localization of the human.
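
A minimal sketch of this kind of late fusion (the camera intrinsics, gating threshold, and function name below are assumed for illustration and are not taken from the paper): the median depth inside the YOLOv3 bounding box gives a camera-frame estimate of the person's position, which is then snapped to the nearest LiDAR return:

import numpy as np

FX, FY, CX, CY = 525.0, 525.0, 319.5, 239.5   # assumed camera intrinsics

def human_position(bbox, depth_img, lidar_points):
    """bbox: (x1, y1, x2, y2); depth_img: HxW in metres; lidar_points: (N, 2) in camera x-z."""
    x1, y1, x2, y2 = bbox
    roi = depth_img[y1:y2, x1:x2]
    z = np.median(roi[roi > 0])                 # robust depth of the person
    u, v = (x1 + x2) / 2.0, (y1 + y2) / 2.0     # bounding-box centre pixel
    cam_pt = np.array([(u - CX) * z / FX, (v - CY) * z / FY, z])
    # refine (x, z) with the closest LiDAR return to the camera estimate
    d = np.linalg.norm(lidar_points - cam_pt[[0, 2]], axis=1)
    if d.size and d.min() < 0.5:                # 0.5 m gating threshold (assumed)
        cam_pt[[0, 2]] = lidar_points[np.argmin(d)]
    return cam_pt
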
15:35-15:50        TC9-4
A Semantic Navigation Framework for Multi-Floor Building Environment

Sung-Hyeon Joo, Sumaira Manzoor, Tae-Yong Kuc(Sungkyunkwan University, Korea)

Autonomous mobile robot navigation in a multi-floor building is a complex task requiring various components: planning, recognition, and localization. Despite significant progress, an essential remaining issue in multi-floor environments is endowing the mobile robot with the ability to navigate the building autonomously via the elevator. Our proposed neuro-inspired cognitive framework provides an efficient solution to this problem based on semantic navigation. Experimental results demonstrate that the framework effectively enables the mobile robot to move between floors using the elevator autonomously.
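
As an illustration of how multi-floor semantic navigation can be posed (a toy sketch under assumed place names and edge labels, not the framework described in the paper), the building can be modelled as a graph of semantic places in which elevator nodes on different floors are linked, so that cross-floor planning reduces to a graph search:

from collections import deque

EDGES = {                                   # assumed toy semantic map
    ("room_101", "elevator_f1"): "drive",
    ("elevator_f1", "elevator_f3"): "ride_elevator",
    ("elevator_f3", "room_305"): "drive",
}

def plan(start, goal):
    graph = {}
    for (a, b), action in EDGES.items():
        graph.setdefault(a, []).append((b, action))
        graph.setdefault(b, []).append((a, action))
    queue, visited = deque([(start, [])]), {start}
    while queue:                            # breadth-first search over places
        node, path = queue.popleft()
        if node == goal:
            return path
        for nxt, action in graph.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, path + [(action, nxt)]))
    return None

print(plan("room_101", "room_305"))
# [('drive', 'elevator_f1'), ('ride_elevator', 'elevator_f3'), ('drive', 'room_305')]
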
15:50-16:05        TC9-5
Object Pose Estimation via Viewpoint Matching using 3D Models

Sujeong You, Sanghoon Ji(Korea Institute of Industrial Technology, Korea), Junha Lee(Applied Robot Research Division, Korea)

In this paper, we present a system that detects objects not registered in advance and automatically builds a dataset for learning pose information in a recognition network using CAD models. In general, object recognition and SLAM algorithms operate independently but complementarily. When an object is recognized from an image alone, it is difficult to obtain the exact pose or properties required by the SLAM algorithm. However, building a 3D object recognition network requires prior preparation such as modeling and labeling, which makes it difficult to apply to SLAM. In this paper, CAD models are made applicable to the SLAM algorithm by rendering artificial images from them.
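
A minimal sketch of the rendering idea (the CAD file name, viewpoint sampling scheme, and use of the trimesh/pyrender libraries are assumptions for illustration, not the authors' pipeline): rendering the model from viewpoints sampled on a sphere pairs each synthetic image with its ground-truth camera pose, yielding pose-labelled data without manual annotation:

import numpy as np
import trimesh
import pyrender

def look_at(eye, target=np.zeros(3), up=np.array([0.0, 0.0, 1.0])):
    """Camera pose (OpenGL convention) looking from eye towards target."""
    z = eye - target; z /= np.linalg.norm(z)
    x = np.cross(up, z); x /= np.linalg.norm(x)
    y = np.cross(z, x)
    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2], pose[:3, 3] = x, y, z, eye
    return pose

mesh = pyrender.Mesh.from_trimesh(trimesh.load("object.stl"))   # hypothetical CAD file
renderer = pyrender.OffscreenRenderer(640, 480)
dataset = []                                                     # list of (pose, image)

for azimuth in np.linspace(0, 2 * np.pi, 12, endpoint=False):   # assumed sampling
    for elevation in (np.pi / 6, np.pi / 3):
        eye = 0.5 * np.array([np.cos(azimuth) * np.cos(elevation),
                              np.sin(azimuth) * np.cos(elevation),
                              np.sin(elevation)])
        cam_pose = look_at(eye)
        scene = pyrender.Scene()
        scene.add(mesh)
        scene.add(pyrender.PerspectiveCamera(yfov=np.pi / 3), pose=cam_pose)
        scene.add(pyrender.DirectionalLight(intensity=3.0), pose=cam_pose)
        color, _ = renderer.render(scene)
        dataset.append((cam_pose, color))                        # pose-labelled sample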
