Human Robot Interaction System
Faces are detected using the Viola-Jones algorithm, one of the earliest and most widely used real-time detection methods. The algorithm is trained on many sample images of a target object under different conditions and learns to recognize it, much like a child learns to see and name things.
Facial Expression Recognition:
The detected faces are processed with the Chehra 3D head pose estimator and the Chehra fitting model for Matlab to estimate head pose and extract facial landmarks. These landmarks are tracked and, together with the pitch, yaw, and roll values, fed into a learning model that predicts the user's basic expressions.
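One way the landmark and pose values might be combined into a feature vector for the learning model is sketched below. This is an assumption of this note: the source does not specify the feature construction, and the landmark count and normalisation here are illustrative, not Chehra's actual output format.

```python
import numpy as np

def build_feature_vector(landmarks, pitch, yaw, roll):
    """Flatten (N, 2) facial landmarks and append head-pose angles.

    `landmarks` stands in for the points a fitter such as Chehra
    returns; the exact count and ordering are assumptions here.
    """
    pts = np.asarray(landmarks, dtype=float)
    # Centre the landmarks on their centroid so the expression
    # features are invariant to where the face sits in the frame.
    pts = pts - pts.mean(axis=0)
    # Pose angles are appended so the classifier can separate
    # expression changes from head-rotation changes.
    return np.concatenate([pts.ravel(), [pitch, yaw, roll]])
```

A vector like this could then be passed to any standard classifier trained on labelled expression examples.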
Lip-Sync:
A customized algorithm uses an input string to generate an ordered array of viseme images and a matching array of durations. Lip-sync is produced by playing these two arrays in step with the robotic voice from the text-to-speech module.
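The two parallel arrays described above can be sketched as follows. The viseme names, the phoneme-to-viseme table, and the durations are all hypothetical placeholders; the source does not describe the actual mapping used by the robot.

```python
# Hypothetical phoneme-to-viseme table: each entry maps a phoneme
# symbol to a mouth-shape image and a display duration in ms.
VISEME_MAP = {
    "A": ("viseme_open.png", 120),
    "B": ("viseme_closed.png", 80),
    "O": ("viseme_round.png", 140),
}
DEFAULT = ("viseme_rest.png", 60)  # fallback for unmapped symbols

def build_lipsync(phoneme_string):
    """Turn a phoneme string into parallel image/duration arrays."""
    images, durations = [], []
    for ph in phoneme_string:
        img, ms = VISEME_MAP.get(ph, DEFAULT)
        images.append(img)
        durations.append(ms)
    return images, durations
```

At playback time the animation loop would show `images[i]` for `durations[i]` milliseconds while the text-to-speech audio plays, keeping the mouth roughly synchronised with the voice.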
Hand Gesture Control:
The cursor is controlled remotely using hand detection and tracking, and a grip gesture clicks at the current cursor position.
The currently selected button is highlighted by mapping the coordinates of the detected hand onto the user's comfortable range of motion, and a grip gesture clicks the selected button.
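The coordinate calculation above amounts to mapping the hand's position inside a comfortable "reach" region of the camera frame onto the full screen. The sketch below shows one such linear mapping; the region bounds are assumed values that would be tuned per user, not numbers from the source.

```python
def map_hand_to_screen(hand_x, hand_y, region, screen_w, screen_h):
    """Map a hand position inside a comfortable reach region of the
    camera frame onto full-screen coordinates.

    `region` = (left, top, right, bottom) in camera pixels; its
    extent is an assumption, tuned to each user's easy-to-move area.
    """
    left, top, right, bottom = region
    # Normalise to [0, 1] and clamp, so the cursor stays on screen
    # even when the hand drifts outside the calibrated region.
    u = min(max((hand_x - left) / (right - left), 0.0), 1.0)
    v = min(max((hand_y - top) / (bottom - top), 0.0), 1.0)
    return int(u * (screen_w - 1)), int(v * (screen_h - 1))
```

Restricting input to a small calibrated region means the user never has to stretch to reach the far corners of the screen.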
A swipe gesture is used to navigate through the options and buttons, and the currently highlighted button is selected with a grip gesture.
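A swipe like the one above can be recognised from the tracked hand's horizontal displacement over a short window of frames. This is a minimal sketch, assuming centroid tracking is already available; the pixel threshold is an invented tuning value.

```python
def detect_swipe(x_positions, threshold=80):
    """Classify a tracked hand's horizontal motion as a swipe.

    `x_positions` holds the hand centroid's x coordinate over recent
    frames; `threshold` (pixels) is an assumed tuning parameter.
    """
    if len(x_positions) < 2:
        return None
    dx = x_positions[-1] - x_positions[0]
    if dx > threshold:
        return "right"   # hand moved right across the frame
    if dx < -threshold:
        return "left"    # hand moved left across the frame
    return None          # displacement too small to be a swipe
```

In practice a debounce step would follow, ignoring input for a few frames after each swipe so one motion does not trigger repeated navigation.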
Hand Gesture game:
A puzzle game was implemented in which the user arranges the puzzle pieces using swipe gestures.
Interactive Pen Game:
The same game is controlled using color detection: the selectable puzzle piece is highlighted by estimating the region of a specific color on the pen. Pressing a button on the pen illuminates it in a second color, which is used to select the highlighted piece and move it in the required direction.