FB10 Image Processing II
Time : October 15 (Fri) 13:00-14:30
Room : Room 10 (8F Ora)
Chair : Prof. Tohru Kamiya (Kyushu Institute of Technology, Japan)
13:00-13:15        FB10-1
Environment Recognition from A Spherical Camera Image Based on DeepLab v3+

Yuta Nishida, Tohru Kamiya (Kyushu Institute of Technology, Japan)

In Japan, the number of people using electric wheelchairs is increasing as the population ages, and the accompanying rise in traffic accidents has become a problem. In this paper, we propose an image analysis method that uses panoramic images acquired from a spherical camera for the development of an autonomous electric wheelchair. To analyze the images, we use DeepLab v3+, a semantic segmentation algorithm based on a Convolutional Neural Network (CNN). In the proposed method, we build a new CNN model by incorporating Deformable Convolution, SE-blocks, and MobileNet v2 into DeepLab v3+ and verify its usefulness.
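The abstract lists SE-blocks among the modules added to DeepLab v3+. As background, the following is a minimal PyTorch sketch of a squeeze-and-excitation block; it is illustrative only and does not reproduce the authors' model, and the `SEBlock` name and `reduction` parameter are assumptions.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation: channel-wise recalibration of feature maps."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                             # excitation: per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # rescale each channel of the input
```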
13:15-13:30        FB10-2
A Classification Method for Magnetic Particle Testing Image Using U-Net

Shunsuke Moritsuka, Tohru Kamiya (Kyushu Institute of Technology, Japan)

MPT (Magnetic Particle Testing) is a method of testing for the presence of defects without destroying the object under test. However, this method has some problems, such as the possibility of missing defects. To address these problems, we developed a deep-learning-based method for classifying defect images. The proposed method performs segmentation with a U-Net-based structure and uses the segmentation result to classify the defects. Using this method, defects were classified in images obtained during MPT. The results showed an accuracy of 85.8%, a TPR of 65.2%, and an FPR of 13.8%.
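The abstract describes a two-stage pipeline: U-Net-based segmentation followed by classification of the segmented defects. Below is a minimal sketch of such a pipeline, assuming PyTorch; the `segmenter` and `classifier` modules, the thresholding step, and the image-plus-mask concatenation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

def classify_defect(image: torch.Tensor,
                    segmenter: nn.Module,
                    classifier: nn.Module,
                    threshold: float = 0.5) -> torch.Tensor:
    """Two-stage sketch: segment the defect region, then classify using the mask."""
    with torch.no_grad():
        mask = torch.sigmoid(segmenter(image))                 # predicted defect probability map
        mask = (mask > threshold).float()                      # binarize the segmentation result
        logits = classifier(torch.cat([image, mask], dim=1))   # classify image and mask jointly
    return logits.argmax(dim=1)                                # defect / non-defect label
```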
13:30-13:45        FB10-3
Automatic Identification of CTC in Fluorescence Microscope Images Using Segmentation Algorithm of Cell Nucleus

Kazuki Hashimoto, Tohru Kamiya (Kyushu Institute of Technology, Japan)

We propose a method for automatic identification of CTCs from fluorescence microscopy images to enable quantitative computer-based analysis for the diagnosis of CTCs in blood. First, cell candidate regions are detected mainly by filtering; a region of interest is then set for each candidate and reconstructed by cutting out the cell nucleus region. In this paper, we applied the proposed method to 5,040 images from 6 samples and conducted identification experiments on CTCs. As a result, 148 CTCs were detected (TPR = 100%), while 988 non-CTCs were over-detected.
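As background for the candidate detection step (filtering, then setting a region of interest around each cell nucleus), here is a minimal Python sketch using SciPy; the smoothing, thresholding, and ROI parameters are assumptions for illustration, not the authors' settings.

```python
import numpy as np
from scipy import ndimage

def extract_cell_rois(fluorescence: np.ndarray,
                      sigma: float = 2.0,
                      threshold: float = 0.3,
                      roi_size: int = 64) -> list:
    """Illustrative candidate detection: smooth, threshold, label, crop ROIs around nuclei."""
    smoothed = ndimage.gaussian_filter(fluorescence.astype(float), sigma)
    binary = smoothed > threshold * smoothed.max()      # candidate nucleus regions
    labels, n = ndimage.label(binary)                   # connected components
    half = roi_size // 2
    rois = []
    for cy, cx in ndimage.center_of_mass(binary, labels, range(1, n + 1)):
        y, x = int(cy), int(cx)
        roi = fluorescence[max(0, y - half):y + half, max(0, x - half):x + half]
        rois.append(roi)                                # one region of interest per candidate cell
    return rois
```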
13:45-14:00        FB10-4
Determination of Abnormality of IGBT Images Using VGG16

Toui Ogawa, Tohru Kamiya (Kyushu Institute of Technology, Japan)

In this paper, we propose a CNN that classifies power device ultrasound images obtained during power cycle testing. In particular, we implement a Cycle-GAN to augment the abnormal data and classify the images with an improved VGG16. The experiments yielded a Precision of 97.06%, a Recall of 93.58%, and an F-measure of 95.17%.
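The abstract mentions classification with an improved VGG16. The sketch below shows one common way to adapt a stock VGG16 head to a two-class (normal/abnormal) problem, assuming PyTorch and torchvision; the Cycle-GAN augmentation and the authors' specific modifications to VGG16 are not reproduced here.

```python
import torch.nn as nn
from torchvision import models

def build_binary_vgg16(num_classes: int = 2) -> nn.Module:
    """Replace the final fully connected layer of VGG16 for a two-class problem."""
    model = models.vgg16(weights="IMAGENET1K_V1")       # ImageNet-pretrained backbone (assumption)
    in_features = model.classifier[-1].in_features      # 4096 in the stock VGG16 head
    model.classifier[-1] = nn.Linear(in_features, num_classes)
    return model
```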
14:00-14:15        FB10-5
Incorporating Ghost Module into RCAN for Super-Resolution of Satellite Images

Hiromu Ikeda, Tohru Kamiya (Kyushu Institute of Technology, Japan)

Recently, deep learning techniques have been proposed to increase the resolution of images. However, they require a large number of learning parameters, which results in huge computational cost. To overcome this problem, we develop a new deep learning model based on the ghost module to reduce the number of parameters while maintaining the quality of the results. Compared to methods based on classical convolutional neural network modules, the number of parameters in our model was reduced by 49.31% while keeping the same level of Peak Signal-to-Noise Ratio (24.1578) and Structural Similarity (0.7174).
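The ghost module generates a few "intrinsic" feature maps with an ordinary convolution and derives the remaining "ghost" maps with cheap depthwise operations, which is where the parameter savings come from. A minimal PyTorch sketch of the general idea follows; it is based on the published GhostNet design, not on the authors' integration into RCAN, and the channel split controlled by `ratio` is an assumption.

```python
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Ghost module: a few intrinsic features from a standard convolution,
    plus cheap depthwise-generated ghost features, concatenated."""
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2, dw_kernel: int = 3):
        super().__init__()
        assert out_ch % ratio == 0, "illustrative sketch assumes out_ch divisible by ratio"
        intrinsic = out_ch // ratio                     # channels from the costly convolution
        ghost = out_ch - intrinsic                      # channels from the cheap convolution
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, intrinsic, 1, bias=False),
            nn.BatchNorm2d(intrinsic),
            nn.ReLU(inplace=True),
        )
        self.cheap = nn.Sequential(
            nn.Conv2d(intrinsic, ghost, dw_kernel, padding=dw_kernel // 2,
                      groups=intrinsic, bias=False),    # depthwise: one filter group per channel
            nn.BatchNorm2d(ghost),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        primary = self.primary(x)
        return torch.cat([primary, self.cheap(primary)], dim=1)  # intrinsic + ghost = out_ch
```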
