TA6 Deep Learning and Machine Vision Applications
Time : October 14 (Thu) 09:00-10:30
Room : Room 6 (Online 2F Byang)
Chair : Prof. Sungho Kim (Yeungnam University, Korea)
09:00-09:15        TA6-1
Autonomous Robot Control using Navio2 and Lidar Based SLAM

Wangheon Lee, ByuhngMunn Suhng (Hansei University, Korea)

Recently, an open-source controller combining a Raspberry Pi 4 and a Navio2 board [Navio2Rasp4] has been used to control automobiles, UAVs, drones, and yachts. In this study, we developed not only a Traxxas racing car controlled by Navio2Rasp4, but also a mobile robot controlled solely by a Raspberry Pi 4 [Rasp4] together with a 64-channel Ouster Lidar. After applying Navio2Rasp4 to the Traxxas Maxx racing car to track a predefined roving path, and deploying the Rasp4-equipped mobile robot for autonomous indoor navigation, we confirmed that both robots successfully executed the missions given to them using Navio2Rasp4 and Rasp4.
09:15-09:30        TA6-2
A Fusion Framework for Multi-Spectral Pedestrian Detection using EfficientDet

Jongchan Kim, Inho Park, Sungho Kim (Yeungnam University, Korea)

In this paper, an EfficientDet fusion framework for multi-spectral pedestrian detection is constructed by fusing features through Sum, Max, and Concatenation operations at the feature level. In the experiments, the fused multi-spectral network quantitatively improved detection performance by about 10% compared to a single-spectral network. In addition, the resulting images show that the shortcomings of a single spectrum can indeed be compensated. In the future, various fusion studies will be conducted based on this EfficientDet fusion framework.
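
As a rough illustration only (not the authors' code), feature-level fusion of RGB and thermal feature maps by Sum, Max, or Concatenation could be sketched as follows in PyTorch; the class name, channel sizes, and the 1x1 reduction after concatenation are assumptions:

# Hedged sketch: feature-level fusion of RGB and thermal feature maps via
# Sum, Max, or Concatenation, as described in the abstract. Names are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn

class FeatureFusion(nn.Module):
    def __init__(self, channels, mode="sum"):
        super().__init__()
        self.mode = mode
        # Concatenation doubles the channel count, so project back with a 1x1 conv
        self.reduce = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_rgb, feat_thermal):
        if self.mode == "sum":
            return feat_rgb + feat_thermal
        if self.mode == "max":
            return torch.maximum(feat_rgb, feat_thermal)
        # "concat": stack along channels, then reduce to the original width
        return self.reduce(torch.cat([feat_rgb, feat_thermal], dim=1))

# Example: fuse two backbone feature maps before the detection head
rgb = torch.randn(1, 64, 80, 80)
thermal = torch.randn(1, 64, 80, 80)
fused = FeatureFusion(64, mode="max")(rgb, thermal)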
09:30-09:45        TA6-3
Enhanced Dual Adversarial Network for Real Image Noise Removal and Generation using Edge Loss Function

Eunho Lee, Youngbae Hwang (Chungbuk National University, Korea)

Although many methods have been proposed to address real noise, they struggle to restore edge regions appropriately. Because most convolutional neural network-based denoising methods capture noise characteristics through a pixel loss that only detects contaminated pixels, high-frequency components cannot be considered. This causes blur and artifacts in edge regions, which contain high-frequency components. In this paper, we apply an edge loss function to the dual adversarial network to deal with this issue. By using the edge loss and the pixel loss together, the network is improved to restore not only the actual intensities but also the edges effectively.
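
A minimal sketch of combining a pixel loss with an edge loss, assuming a Sobel gradient operator, an L1 comparison, and a weighting factor lambda_edge (the abstract specifies none of these), might look like this:

# Hedged sketch of a pixel loss combined with an edge loss for denoising.
# The Sobel operator and the weight `lambda_edge` are assumptions for illustration.
import torch
import torch.nn.functional as F

def sobel_edges(img):
    # img: (N, C, H, W); apply Sobel filters per channel to extract gradient magnitude
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=img.device)
    ky = kx.t().contiguous()
    c = img.shape[1]
    kx = kx.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    ky = ky.view(1, 1, 3, 3).repeat(c, 1, 1, 1)
    gx = F.conv2d(img, kx, padding=1, groups=c)
    gy = F.conv2d(img, ky, padding=1, groups=c)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-6)

def denoising_loss(denoised, clean, lambda_edge=0.1):
    pixel_loss = F.l1_loss(denoised, clean)                       # intensity fidelity
    edge_loss = F.l1_loss(sobel_edges(denoised), sobel_edges(clean))  # edge fidelity
    return pixel_loss + lambda_edge * edge_loss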
09:45-10:00        TA6-4
YOLO-based Robotic Grasping

Munhyeong Kim, Sungho Kim (Yeungnam University, Korea)

Waste causes many problems around the world, and recycling rates remain poor. For separate garbage collection, various kinds of waste must be detected and recognized in real time. To address these issues, this paper proposes a YOLO-based robotic grasping method. A limitation of existing deep learning-based robotic grasping methods is that they predict grasping points over the entire image and do not recognize objects. Considering this, we perform object detection and grasping-point derivation by processing the image with the proposed area-limiting technique after detection and recognition with YOLO, a one-stage object detector.
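
The area-limiting idea, detecting objects first and then deriving a grasp point only inside each detected box, could be sketched as below; the detection tuple format and the grasp predictor are hypothetical placeholders, not the paper's method:

# Hedged sketch: run a YOLO detector, then limit grasp-point prediction to each
# detected bounding box and map the result back to full-image coordinates.
import numpy as np

def grasp_points_in_detections(image, detections, predict_grasp):
    # detections: list of (x1, y1, x2, y2, class_id) from a YOLO detector (assumed format)
    # predict_grasp: function mapping an image crop to a (u, v) grasp point in crop coordinates
    grasps = []
    for (x1, y1, x2, y2, cls) in detections:
        crop = image[y1:y2, x1:x2]            # limit processing to the detected area
        u, v = predict_grasp(crop)            # grasp point within the crop
        grasps.append((cls, x1 + u, y1 + v))  # map back to full-image coordinates
    return grasps

# Example with a trivial placeholder predictor (center of the detected box)
dummy = lambda crop: (crop.shape[1] // 2, crop.shape[0] // 2)
img = np.zeros((480, 640, 3), dtype=np.uint8)
print(grasp_points_in_detections(img, [(100, 120, 220, 260, 0)], dummy))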
10:00-10:15        TA6-5
Analysis of the influence of 3D-CNN on spatial random information in hyperspectral image classification

Byungjin Kang, Sungho Kim (Yeungnam University, Korea), Changmin Ok (LIG Nex1, Korea)

In a hyperspectral image, the type of material can be identified from the spectral information. Recently, hyperspectral image classification using deep learning has advanced, and among such approaches the 3D-CNN, which learns spatial and spectral information together, shows excellent performance. However, because the 3D-CNN learns spatial and spectral information jointly, the spectral information may be diluted, which is inconsistent with hyperspectral imagery in which the spectral information is the most significant. This paper suggests that the 3D-CNN does not properly learn the spectral information in hyperspectral image classification, and verifies this through experiments on hyperspectral data whose spatial information has been randomized.
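
The spatial-randomization check described above could be approximated by shuffling pixel positions within a patch while leaving each pixel's spectrum untouched, then comparing 3D-CNN accuracy on original and shuffled data; the patch shape and shuffling granularity here are assumptions:

# Hedged sketch: spatially permute the pixels of a hyperspectral patch while
# keeping every pixel's spectrum intact, to probe reliance on spatial structure.
import numpy as np

def randomize_spatial(patch, rng=None):
    # patch: (H, W, B) hyperspectral patch with B spectral bands.
    # Returns a copy whose pixels are spatially permuted but spectrally unchanged.
    rng = np.random.default_rng() if rng is None else rng
    h, w, b = patch.shape
    pixels = patch.reshape(-1, b)             # flatten the spatial grid, keep spectra
    perm = rng.permutation(pixels.shape[0])   # random spatial rearrangement
    return pixels[perm].reshape(h, w, b)

# Example: a 9x9 patch with 200 bands, a common 3D-CNN input size (assumed here)
patch = np.random.rand(9, 9, 200).astype(np.float32)
shuffled = randomize_spatial(patch)
# The per-band pixel values are preserved; only their spatial arrangement changes
assert np.allclose(np.sort(patch.reshape(-1, 200), axis=0),
                   np.sort(shuffled.reshape(-1, 200), axis=0))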
