GPGPU2018
Final Project
Real-time Human, Object, Pose Detection for Service Robots
SERVICES
Human / Object
Detection
In this project, we apply YOLO (You Only Look Once) to images from the Microsoft Kinect camera to achieve real-time detection of humans and objects. We then calculate the positions of the detected humans and objects relative to the robot.
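One way to turn a YOLO bounding box plus the Kinect's depth image into a position relative to the robot is to back-project the box centre through the pinhole camera model. The sketch below assumes hypothetical intrinsics (the real values come from camera calibration) and that depth is in millimetres, as the Kinect reports it; it is an illustration, not the project's exact code.

```python
import numpy as np

# Hypothetical Kinect intrinsics; real values come from camera calibration.
FX, FY = 525.0, 525.0   # focal lengths in pixels
CX, CY = 319.5, 239.5   # principal point

def pixel_to_camera_xyz(u, v, depth_m):
    """Back-project pixel (u, v) with depth in metres into camera coordinates."""
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    return np.array([x, y, depth_m])

def box_position(box, depth_image):
    """Position of a YOLO box centre, using the median depth inside the box."""
    x1, y1, x2, y2 = box
    patch = depth_image[y1:y2, x1:x2]
    d = np.median(patch[patch > 0]) / 1000.0  # Kinect depth is in millimetres
    return pixel_to_camera_xyz((x1 + x2) / 2, (y1 + y2) / 2, d)
```

Taking the median depth inside the box, rather than a single pixel, makes the estimate robust to missing depth readings and to background pixels at the box edges.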
Human Pose
Detection
We adapt the OpenPose project from Carnegie Mellon University to extract human joint positions and classify the pose, so that the robot knows what the person is doing.
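Classifying a pose from OpenPose keypoints can be done with simple geometric rules on the 2D joint coordinates. The sketch below uses joint names from OpenPose's COCO output convention; the rules and thresholds are illustrative assumptions, not the classifier the project actually trained.

```python
# Minimal sketch: classify a pose from 2D OpenPose keypoints.
# Image coordinates: y grows downward, so "above" means a smaller y.
# Thresholds are assumptions for illustration only.

def classify_pose(kp):
    """kp maps joint name -> (x, y) in image coordinates; returns a coarse label."""
    # A wrist above its shoulder suggests a raised hand.
    if kp["right_wrist"][1] < kp["right_shoulder"][1] or \
       kp["left_wrist"][1] < kp["left_shoulder"][1]:
        return "hand_raised"
    hip_y = (kp["right_hip"][1] + kp["left_hip"][1]) / 2
    knee_y = (kp["right_knee"][1] + kp["left_knee"][1]) / 2
    torso_len = hip_y - kp["neck"][1]
    # Sitting: hips drop close to knee height, so the hip-knee vertical gap shrinks.
    if (knee_y - hip_y) < 0.3 * torso_len:
        return "sitting"
    return "standing"
```

Rule-based classification like this is cheap and interpretable; a learned classifier over joint angles would generalize better across viewpoints.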
Robot
Navigation
After the robot calculates the positions of the human and objects, it can move to where it is needed: a PID controller takes the target position as input and outputs velocity commands.
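A position-to-velocity loop of this kind can be sketched with two PID controllers, one mapping distance error to forward speed and one mapping bearing error to turn rate. The gains, clamp limits, and loop shape below are illustrative assumptions, not the tuned values used on the Pioneer.

```python
# Minimal PID sketch for driving the robot toward a target position.

class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, err, dt):
        """One control step: combine proportional, integral, derivative terms."""
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Distance error drives linear velocity; bearing error drives angular velocity.
# Gains are placeholder values.
dist_pid = PID(kp=0.5, ki=0.0, kd=0.1)
angle_pid = PID(kp=1.0, ki=0.0, kd=0.2)

def velocity_command(distance, bearing, dt=0.05):
    v = dist_pid.step(distance, dt)   # forward speed (m/s)
    w = angle_pid.step(bearing, dt)   # turn rate (rad/s)
    # Clamp to safe limits so large errors never command unsafe speeds.
    return max(min(v, 0.5), 0.0), max(min(w, 1.0), -1.0)
```

Clamping the outputs matters in practice: when a person is first detected several metres away, the raw proportional term would otherwise command a speed the robot cannot safely execute.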
ABOUT
In response to population ageing and an increasingly busy society, the development of service robots is gradually accelerating. Their purpose is not only to assist people but also to accompany elderly people who live alone. We therefore apply basic real-time tracking on a Pioneer robot so as to mimic a social robot.
PROJECTS
Click pictures for more information!
DEMO VIDEOS
We provide two videos as demos of our system. The Pioneer robot is triggered by a human voice command and completes the task of tracking either a person or an object.
Come to me
First, the person says "Come to me" to the pad. After receiving the message, the robot tracks the person and moves toward him or her.
Grab a chair
The person says "Grab a chair" to the pad. The robot detects the person as well as the direction he or she is pointing, and turns accordingly. After that, the robot moves toward the object based on the images and depth information from the depth camera.
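One simple way to resolve "the direction he or she is pointing" against the detected objects is to extend the elbow-to-wrist ray in the image and pick the object box whose centre lies closest to that ray. This is our own simplification for illustration; joint names follow the OpenPose COCO convention, and the project's actual matching may differ.

```python
import numpy as np

def pointed_object(elbow, wrist, boxes):
    """elbow/wrist: (x, y) joints; boxes: list of (x1, y1, x2, y2).
    Returns the index of the box the arm points at, or None."""
    e, w = np.asarray(elbow, float), np.asarray(wrist, float)
    ray = w - e
    ray /= np.linalg.norm(ray)            # unit pointing direction
    best, best_dist = None, np.inf
    for i, (x1, y1, x2, y2) in enumerate(boxes):
        c = np.array([(x1 + x2) / 2, (y1 + y2) / 2])
        t = np.dot(c - w, ray)
        if t <= 0:                        # object lies behind the pointing direction
            continue
        d = np.linalg.norm(c - w - t * ray)  # perpendicular distance to the ray
        if d < best_dist:
            best, best_dist = i, d
    return best
```

Once the box is chosen, its depth pixels give the object's 3D position, and the navigation step can drive toward it.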
CONTACT
Warren Chan
email: r06921017@ntu.edu.tw
Department of Electrical Engineering
National Taiwan University
Taipei, Taiwan
Simon Guo
email: d05944019@ntu.edu.tw
Department of Computer Science
National Taiwan University
Taipei, Taiwan