Development of a Conversing and Body Temperature Scanning Autonomously Navigating Robot to Help Screen for COVID-19 |
Ryan Kim(Choate Rosemary Hall, United States), Hyung Gi Min(Omorobot, Korea) |
Throughout the COVID-19 pandemic, human employees with handheld thermometers and temperature scanning kiosks have been employed for widespread systematic temperature screening. However, both methods have critical issues: screeners with handheld thermometers cannot maintain physical distance, and static temperature kiosks cause great inconvenience and inefficiency due to their immobility. The proposed solution is a conversing, temperature-scanning, autonomously navigating robot that helps screen for fevers. The robot consists of a custom mobile base, a manipulator controlled by a face-tracking algorithm, and an end effector comprising a thermal camera, a smartphone, and an AI chatbot. |
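The abstract does not give implementation details for the face-tracking manipulator; a minimal sketch of how such a system might center a detected face using proportional control (the frame size, gains, and function names are illustrative assumptions, not the authors' implementation):

```python
# Hypothetical proportional controller steering a pan-tilt end effector
# toward a detected face. Resolution and gains are assumed for
# illustration, not taken from the paper.

FRAME_W, FRAME_H = 640, 480       # assumed camera resolution (pixels)
KP_PAN, KP_TILT = 0.002, 0.002    # assumed gains (rad/s per pixel of error)

def pan_tilt_command(face_cx, face_cy):
    """Return (pan, tilt) velocity commands that drive the detected
    face center toward the image center."""
    err_x = face_cx - FRAME_W / 2   # positive: face right of center
    err_y = face_cy - FRAME_H / 2   # positive: face below center
    return KP_PAN * err_x, KP_TILT * err_y

# Face detected right of and above center: pan right, tilt up.
pan, tilt = pan_tilt_command(480, 200)
print(round(pan, 3), round(tilt, 3))
```

In practice the face center would come from a per-frame face detector, and the commands would feed the manipulator's joint velocity interface.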
|
A Fast Real-time Deep Learning-based Facial Expression Classifier for Human-Robot Interaction |
Muhamad Dwisnanto Putro, Duy-Linh Nguyen, Kang-Hyun Jo(University of Ulsan, Korea) |
This work proposes an efficient CNN architecture for recognizing human facial expressions, consisting of five stages that combine lightweight convolution operations. It introduces an efficient contextual extractor with a partial transfer module to reduce computational cost. The entire network contains fewer than one million parameters. The CK+ and KDEF datasets are used as training and test sets to evaluate performance. The proposed classifier achieves accuracy competitive with other methods, and its efficiency makes it well suited to edge devices, achieving 43 FPS on a Jetson Nano. |
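The abstract does not specify the layer configuration, but a back-of-envelope comparison shows why lightweight convolutions keep a network under a million parameters. The channel progression below is an assumption chosen purely for illustration; it contrasts standard 3x3 convolutions with depthwise-separable ones:

```python
def conv_params(c_in, c_out, k=3):
    """Parameter count of a standard k x k convolution (no bias)."""
    return c_in * c_out * k * k

def separable_params(c_in, c_out, k=3):
    """Depthwise k x k convolution followed by a 1x1 pointwise
    convolution (no bias) -- the usual lightweight substitute."""
    return c_in * k * k + c_in * c_out

# Assumed five-stage channel progression, not the paper's actual design.
channels = [3, 32, 64, 128, 256, 512]
standard = sum(conv_params(a, b) for a, b in zip(channels, channels[1:]))
separable = sum(separable_params(a, b) for a, b in zip(channels, channels[1:]))
print(standard, separable)  # the separable variant stays far below 1M
```

Under these assumed channel widths, the standard stack exceeds one million parameters while the depthwise-separable stack remains well under it, consistent with the sub-million-parameter budget the abstract reports.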
|
Performance Comparison of Position Controlled Robotic Stage When Force- and Position-Based Disturbance Observers are Implemented |
Kangwagye Samuel, Junyoung Kim, Sehoon Oh(DGIST, Korea) |
This paper presents position control of a robotic stage with position-based and force-based disturbance observers (DOBs) implemented, and compares their performances.
The DOBs are implemented to improve the quality of force control by suppressing disturbances arising within, and entering, the robot's mechanical system.
Accurate position control is of paramount importance here: poor position control degrades the reproducibility of the position perturbation, which in turn degrades the reliability of the force readings used for the robotic stage's balance assessment function.
Simulations and experiments are conducted to evaluate the position control strategies. |
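The abstract does not give the observer equations; a minimal discrete-time sketch of the DOB principle, using a mass-only nominal model (the plant parameters, filter cutoff, and time step are illustrative assumptions, not the paper's robotic stage). When the nominal model matches the true plant, the filtered estimate converges to the true disturbance:

```python
# Minimal discrete-time disturbance observer (DOB) sketch.
# All parameters below are assumptions for illustration only.

m_n = 1.0    # nominal mass (kg)
dt = 0.001   # time step (s)
g = 50.0     # DOB low-pass filter cutoff (rad/s)

def simulate(steps=5000, d_true=2.0):
    v = 0.0      # plant velocity
    d_hat = 0.0  # disturbance estimate
    u = 0.0      # control input (held at zero so the DOB acts alone)
    for _ in range(steps):
        # True plant: m * dv/dt = u - d_true + d_hat (DOB compensation)
        a = (u - d_true + d_hat) / m_n
        v_prev = v
        v += a * dt
        # DOB: raw disturbance = applied force minus nominal-model force
        a_meas = (v - v_prev) / dt
        d_raw = (u + d_hat) - m_n * a_meas
        d_hat += g * dt * (d_raw - d_hat)  # first-order low-pass update
    return d_hat

print(round(simulate(), 2))  # converges to the true disturbance, 2.0
```

A force-based DOB would build `d_raw` from a force sensor signal rather than the differentiated position, which is precisely the design difference whose effect the paper evaluates.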
|
Robust Person Following Under Severe Indoor Illumination Changes for Mobile Robots: Online Color-Based Identification Update |
Redhwan Algabri, Mun-Taek Choi(Sungkyunkwan University, Korea) |
In this paper, we propose a robust identifier combined with a deep learning technique to accommodate varying illumination in the ambient lighting of a scene. Moreover, an enhanced online update strategy for the person identification model is used to handle drift caused by changes in the target person's appearance during tracking. We confirmed the effectiveness of the proposed method through target-following experiments using five different clothing colors in a real indoor environment where the lighting conditions change drastically. |
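The abstract does not detail the online update rule; one common form such a color-based identification update could take is an exponential moving average that blends the stored color model toward each new observation. The bin layout and learning rate below are assumptions for illustration, not the paper's exact strategy:

```python
# Hypothetical online update of a color histogram identifier.
# ALPHA and the 3-bin toy histogram are illustrative assumptions.

ALPHA = 0.1  # assumed learning rate for the exponential moving average

def update_model(model, observation, alpha=ALPHA):
    """Blend the stored color histogram toward the newest observation,
    letting the identifier track gradual appearance changes under
    changing illumination."""
    return [(1 - alpha) * m + alpha * o for m, o in zip(model, observation)]

# Toy 3-bin hue histogram of the target's clothing color.
model = [0.8, 0.1, 0.1]
# Under new lighting the observed histogram shifts toward the second bin;
# repeated updates pull the model along with it.
for _ in range(30):
    model = update_model(model, [0.2, 0.7, 0.1])
print([round(x, 2) for x in model])
```

A small `alpha` trades adaptation speed for robustness: the model follows gradual illumination changes but resists being corrupted by a single misidentified frame.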
|
A Novel Affinity Enhancing Method for Human Robot Interaction - Preliminary Study with Proactive Docent Avatar |
Joong-kwang Ko, Dongwoo Koo, Mun Sang Kim(GIST, Korea) |
Affinity generation is a key research topic in the HCI field. As intermediaries, digital humans are promising for affinity generation. To generate affinity, we hypothesized three conditions (proactive, expressive, intuitive) and applied them to an interactive system containing a virtual avatar. A human appearance recognizer and a gesture recognizer using the Kinect sensor are employed to attract people proactively. The conversation corpus combines voices, motions, facial animations, and visuals to engage users emotionally. An intuitive GUI guides users to select questions naturally so that they feel affinity. We will verify the effect of this affinity-enhancing method through interaction experiments. |
|
Recognition of Fingertip Movement Using Optical Flow for Human Machine Interface |
Toru Furukawa, Teruo Yamaguchi(Kumamoto University, Japan) |
The purpose of this research is to develop a new interface that captures human hand motion using optical flow. PCs have the physical restriction that they need a flat surface on which to place a keyboard and mouse. To overcome this difficulty, we aim to develop an interface that allows cursor movement and character input by moving the hand in the air. It is expected to function as an interface that moves the cursor and inputs characters accordingly when a specific operation is performed. The magnitude and direction of the velocity vector of each finger in a specific motion are obtained by optical flow, and we examine whether the motion can be discriminated from these parameters. |
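The abstract describes discriminating motions from the magnitude and direction of optical-flow velocity vectors; a minimal sketch of that final classification step (the speed threshold, angle bins, and labels are assumptions for illustration, not the authors' discrimination rules):

```python
import math

# Illustrative classifier mapping the mean optical-flow vector of a
# fingertip region to a cursor direction. Thresholds and labels are
# assumed, not taken from the paper.

def classify_motion(flow_vectors, min_speed=1.0):
    """flow_vectors: list of (vx, vy) pixel velocities from optical flow."""
    n = len(flow_vectors)
    vx = sum(v[0] for v in flow_vectors) / n
    vy = sum(v[1] for v in flow_vectors) / n
    if math.hypot(vx, vy) < min_speed:
        return "still"             # too slow to count as a gesture
    angle = math.degrees(math.atan2(vy, vx))
    if -45 <= angle < 45:
        return "right"
    if 45 <= angle < 135:
        return "down"              # image y axis grows downward
    if -135 <= angle < -45:
        return "up"
    return "left"

print(classify_motion([(3.0, 0.5), (2.5, -0.2), (3.2, 0.1)]))  # a rightward swipe
```

In a full system, the flow vectors would come from an optical-flow routine applied to successive camera frames, and the returned label would drive cursor movement or character input.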
|