TuP1 Interactive Poster Session
Time : October 12 (Tue) 16:10-17:40
Room : Online, 2F Lobby
Chair : Prof. Seong Young Ko (Chonnam National University, Korea)
16:10-17:40        TuP1-1
A Study on Pneumatic Actuator Control for Industrial Robot

Youngkuk Kwon(Korea Polytechnics, Korea)

Real-time wireless control of a pneumatic actuator for an industrial robot jig has been implemented using a wireless magnetic sensor. In this system, an interface that allows the wireless sensor to respond to the user’s mobile application operations has been implemented to improve convenience and efficiency. To verify the effectiveness of the proposed system, the data transmission rate and success ratio have been measured.
16:10-17:40        TuP1-2
Image-Based Visual Servoing with Backstepping for the Drone

Whimin Kim, Dong Eui Chang(KAIST, Korea)

This paper proposes an image-based visual servoing controller with a backstepping scheme for drones. For the controller design, the drone dynamics are simplified by yaw-angle decoupling and dynamic expansion. In addition, a virtual image plane is defined to apply image-based visual servoing to the drone system, and a feature position on this plane is used as the output variable of the system. Simulations show that the proposed controller is suitable for the task of tracking an object or moving to a point in the camera image.
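For readers unfamiliar with the formulation, the sketch below shows a classical image-based visual servoing law on a point feature. It is a minimal illustration only: the gain lam, the feature depth, and the single-feature example are assumptions and do not represent the authors' backstepping controller.

import numpy as np

def interaction_matrix(x, y, Z):
    # Image Jacobian of one point feature (x, y) in normalized image
    # coordinates at depth Z (standard IBVS form).
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    # Camera twist command v = -lam * pinv(L) @ (s - s*), stacked over features.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ error

# Example: drive one feature toward the image center; the output is a
# 6-DoF camera twist [vx, vy, vz, wx, wy, wz].
v = ibvs_velocity(features=[(0.2, -0.1)], desired=[(0.0, 0.0)], depths=[1.5])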
16:10-17:40        TuP1-3
Goal-Oriented Navigation with Obstacle Avoidance Based on Deep Reinforcement Learning in Continuous Action Space

Hien Pham Xuan, Gon-Woo Kim(Chungbuk National University, Korea)

Deep Reinforcement Learning (DRL) is emerging as a viable solution to the obstacle avoidance problem for autonomous mobile robots. However, in real-world situations with both stationary and moving obstacles, mobile robots must be able to navigate to a goal while safely avoiding collisions. This work extends our ongoing research on a navigation approach for mobile robots. We propose to run the obstacle avoidance algorithm both in simulated environments and in the continuous action space of the real world to find optimal results, thereby serving as a basis for further studies.
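As a rough illustration of a continuous-action navigation policy, the sketch below shows a DDPG/TD3-style actor that maps a laser scan and a goal vector to linear and angular velocity commands; the layer sizes, input layout, and velocity limits are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class NavActor(nn.Module):
    def __init__(self, scan_dim=24, goal_dim=2, max_lin=0.5, max_ang=1.0):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(scan_dim + goal_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 2), nn.Tanh(),   # bounded continuous action in [-1, 1]
        )
        self.scale = torch.tensor([max_lin, max_ang])

    def forward(self, scan, goal):
        # Action = (linear velocity, angular velocity), scaled to robot limits.
        a = self.net(torch.cat([scan, goal], dim=-1))
        return a * self.scale

actor = NavActor()
action = actor(torch.rand(1, 24), torch.rand(1, 2))   # one (v, w) command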
16:10-17:40        TuP1-4
Deep Learning-based State Estimation for Holonomic Mobile Robots Using Intrinsic Sensors

Nam Van Dinh, Gon-Woo Kim(Chungbuk National University, Korea)

State estimation is a fundamental component of the navigation system of autonomous mobile robots. Generally, a robot is equipped with both intrinsic and extrinsic sensors. In textureless and structureless environments, state estimators have relied almost entirely on intrinsic sensors such as wheel encoders and inertial measurement units. This paper analyzes and proposes learning-based state estimation frameworks for the dead reckoning of autonomous holonomic vehicles using only intrinsic sensors. First, we review and categorize the intrinsic-only estimation problem. Second, we describe the problem formulation using learning-based techniques.
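As an illustration only, learned dead reckoning of this kind can be sketched as a recurrent network that maps a window of IMU and wheel-encoder readings to a relative pose increment; the input dimensions, window length, and architecture below are assumptions, not the paper's framework.

import torch
import torch.nn as nn

class DeadReckoningNet(nn.Module):
    def __init__(self, imu_dim=6, enc_dim=4, hidden=128):
        super().__init__()
        # imu_dim: 3-axis accelerometer + 3-axis gyroscope;
        # enc_dim: four wheel encoders of a holonomic (e.g. mecanum) base.
        self.rnn = nn.LSTM(imu_dim + enc_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3)   # relative pose (dx, dy, dtheta)

    def forward(self, imu_seq, enc_seq):
        h, _ = self.rnn(torch.cat([imu_seq, enc_seq], dim=-1))
        return self.head(h[:, -1])         # increment over the input window

model = DeadReckoningNet()
delta_pose = model(torch.rand(1, 50, 6), torch.rand(1, 50, 4))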
16:10-17:40        TuP1-5
Environment Exploration for Mapless Navigation based on Deep Reinforcement Learning

Toan Duc Nguyen, Gon-Woo Kim(Chungbuk National University, Korea)

For autonomous mobile robots, reinforcement learning can be applied to the mapless navigation problem: the robot can complete its assigned tasks and operate well in different environments without maps or ready-made path plans. However, for reinforcement learning in general, and for mapless navigation based on reinforcement learning in particular, the balance between exploitation and exploration must be considered carefully. Owing to its advantages over other approaches, the Boltzmann policy is used in our problem. It helps the robot explore complex environments more thoroughly and yields a better-optimized policy.
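For reference, a Boltzmann (softmax) policy samples each action with probability proportional to exp(Q/T), so a higher temperature T gives more exploration and a lower T more exploitation. The minimal sketch below assumes a discrete action set and placeholder Q-values; it is not the authors' implementation.

import numpy as np

def boltzmann_policy(q_values, temperature=1.0):
    # Sample an action index with probability proportional to exp(Q / T).
    q = np.asarray(q_values, dtype=float)
    logits = (q - q.max()) / temperature        # subtract max for numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return np.random.choice(len(q), p=probs), probs

# High temperature -> near-uniform exploration; low temperature -> near-greedy.
action, probs = boltzmann_policy([0.2, 0.5, 0.1], temperature=0.5)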
16:10-17:40        TuP1-6
Semantics-Aware Loop Closure Detection in Visual SLAM

Saba Arshad, Gon Woo Kim(Chungbuk National University, Korea)

Loop closure detection is of vital importance to simultaneous localization and mapping for robot motion in an unknown environment. This research reviews deep learning approaches that use semantic information for loop closure detection. In view of the shortcomings of existing research, an improved loop closure detection method is proposed that fuses semantic information with a feature-based Bag-of-Words model. RefineNet is used for high-resolution semantic segmentation and dense semantic feature extraction. Semantic information, being invariant to viewpoint changes and dynamic environments, can improve the overall performance of loop closure detection.
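As a hedged illustration of such a fusion, the sketch below scores a loop-closure candidate by combining a Bag-of-Words appearance similarity with a similarity over per-class semantic label histograms; the fusion weight alpha, the decision threshold, and the vector sizes are assumptions, not the proposed method.

import numpy as np

def cosine(a, b, eps=1e-12):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + eps))

def loop_closure_score(bow_query, bow_cand, sem_query, sem_cand, alpha=0.6):
    # Weighted fusion of appearance (BoW histogram) and semantic
    # (per-class pixel histogram) similarities between two keyframes.
    return alpha * cosine(bow_query, bow_cand) + (1.0 - alpha) * cosine(sem_query, sem_cand)

# Placeholder descriptors: a 500-word BoW vocabulary and 20 semantic classes.
score = loop_closure_score(np.random.rand(500), np.random.rand(500),
                           np.random.rand(20), np.random.rand(20))
is_loop_candidate = score > 0.75   # threshold would be tuned on validation data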
