EBookClubs

Read Books & Download eBooks Full Online


Book Real time Static Hand Gesture Recognition Using a Novel Automatic Bubble Standardization Process to Prepare Monochromatic Thermal Hand Images for Gesture Classification

Download or read book Real time Static Hand Gesture Recognition Using a Novel Automatic Bubble Standardization Process to Prepare Monochromatic Thermal Hand Images for Gesture Classification written by James Michael Ballow and published by . This book was released in 2022 with a total of 0 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Real time 2D Static Hand Gesture Recognition and 2D Hand Tracking for Human Computer Interaction

Download or read book Real time 2D Static Hand Gesture Recognition and 2D Hand Tracking for Human Computer Interaction written by Pavel Alexandrovich Popov and published by . This book was released in 2020. Available in PDF, EPUB and Kindle. Book excerpt: The topic of this thesis is hand gesture recognition and hand tracking for user interface applications. Three systems were produced, as well as datasets for recognition and tracking, along with UI applications to prove the concept of the technology. These represent significant contributions to resolving the hand recognition and tracking problems for 2D systems. The systems were designed to work in video-only contexts, be computationally light, provide recognition and tracking of the user's hand, and operate without user-driven fine tuning and calibration. Existing systems require user calibration, use depth sensors and do not work in video-only contexts, or are computationally heavy, requiring a GPU to run in live situations. A two-step static hand gesture recognition system was created which can recognize three different gestures in real time. A detection step detects hand gestures using machine learning models. A validation step rejects false positives. The gesture recognition system was combined with hand tracking: it recognizes and then tracks a user's hand in video in an unconstrained setting. The tracking uses two collaborative strategies. A contour tracking strategy guides a minimization-based template tracking strategy and makes it real-time, robust, and recoverable, while the template tracking provides stable input for UI applications. Lastly, an improved static gesture recognition system addresses the drawbacks of stratified colour sampling of the detection boxes in the detection step. It uses the entire presented colour range and clusters it into constituent colour modes, which are then used for segmentation, improving the overall gesture recognition rates.
One dataset was produced for static hand gesture recognition, allowing for the comparison of multiple machine learning strategies, including deep learning. Another dataset was produced for hand tracking, providing a challenging series of user scenarios to test the gesture recognition and hand tracking system. Both datasets are significantly larger than other available datasets. The hand tracking algorithm was used to create a mouse cursor control application, a paint application for Android mobile devices, and an FPS video game controller. The latter in particular demonstrates how the collaborating hand tracking can fulfill the demanding nature of responsive aiming and movement controls.
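The detect-then-validate design described above can be sketched as a simple two-stage pipeline; the toy detector and validator below are hypothetical stand-ins for illustration, not the thesis's actual machine learning models:

```python
def two_step_recognize(frame, detect, validate):
    """Stage 1 proposes candidate gestures; stage 2 rejects false positives."""
    candidates = detect(frame)
    return [c for c in candidates if validate(frame, c)]

# Hypothetical stand-ins: a detector that over-proposes candidates and a
# validator that keeps only candidates whose score clears a stricter threshold.
def toy_detect(frame):
    return [g for g in frame["scores"] if frame["scores"][g] > 0.2]

def toy_validate(frame, gesture):
    return frame["scores"][gesture] > 0.6

frame = {"scores": {"fist": 0.9, "palm": 0.3, "point": 0.7}}
print(two_step_recognize(frame, toy_detect, toy_validate))
```

The benefit of the cascade is that the detector can be tuned for high recall while the validator restores precision.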

Book Real time Hand Gesture Recognition in Complex Environments

Download or read book Real time Hand Gesture Recognition in Complex Environments written by Milyn Cecilia Moy and published by . This book was released in 1998 with a total of 70 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Challenges and Applications for Hand Gesture Recognition

Download or read book Challenges and Applications for Hand Gesture Recognition written by Kane, Lalit and published by IGI Global. This book was released on 2022-03-25 with a total of 249 pages. Available in PDF, EPUB and Kindle. Book excerpt: Due to the rise of new applications in electronic appliances and pervasive devices, automated hand gesture recognition (HGR) has become an area of increasing interest. HGR developments have come a long way from the traditional sign language recognition (SLR) systems to depth and wearable sensor-based electronic devices. While the former are more laboratory-oriented frameworks, the latter are comparatively realistic and practical systems. Based on various gestural traits, such as hand postures, gesture recognition takes different forms. Consequently, different interpretations can be associated with gestures in various application contexts. A considerable amount of research is still needed to introduce more practical gesture recognition systems and associated algorithms. Challenges and Applications for Hand Gesture Recognition highlights the state-of-the-art practices of HGR research and discusses key areas such as challenges, opportunities, and future directions. Covering a range of topics such as wearable sensors and hand kinematics, this critical reference source is ideal for researchers, academicians, scholars, industry professionals, engineers, instructors, and students.

Book Image Based Real time Hand Gesture Recognition System Design

Download or read book Image Based Real time Hand Gesture Recognition System Design written by 白文榜 and published by . This book was released in 2011. Available in PDF, EPUB and Kindle. Book excerpt:

Book Novel Methods for Robust Real time Hand Gesture Interfaces

Download or read book Novel Methods for Robust Real time Hand Gesture Interfaces written by Nathaniel Sean Rossol and published by . This book was released in 2015 with a total of 110 pages. Available in PDF, EPUB and Kindle. Book excerpt: Real-time control of visual display systems via mid-air hand gestures offers many advantages over traditional interaction modalities. In medicine, for example, it allows a practitioner to adjust display values, e.g. contrast or zoom, on a medical visualization interface without the need to re-sterilize the interface. However, there are many practical challenges that make such interfaces non-robust, including poor tracking due to frequent occlusion of fingers, interference from hand-held objects, and complex interfaces that are difficult for users to learn to use efficiently. In this work, various techniques are explored for improving the robustness of computer interfaces that use hand gestures. This work is focused predominantly on real-time markerless Computer Vision (CV) based tracking methods, with an emphasis on systems with high sampling rates. First, we explore a novel approach to increasing hand pose estimation accuracy from multiple sensors at high sampling rates in real-time. This approach is achieved through an intelligent analysis of pose estimations from multiple sensors in a way that is highly scalable, because raw image data is not transmitted between devices. Experimental results demonstrate that our proposed technique significantly improves pose estimation accuracy while still maintaining the ability to capture individual hand poses at over 120 frames per second. Next, we explore techniques for improving pose estimation for the purposes of gesture recognition in situations where only a single sensor is used at high sampling rates without image data.
In this situation, we demonstrate an approach where a combination of kinematic constraints and computed heuristics is used to estimate occluded keypoints, producing a partial pose estimation of a user's hand which is then used with our gesture recognition system to control a display. The results of our user study demonstrate that the proposed algorithm significantly improves the gesture recognition rate of the setup. We then explore gesture interface designs for situations where the user may (or may not) have a large portion of their hand occluded by a hand-held tool while gesturing. We address this challenge by developing a novel interface that uses a single set of gestures designed to be equally effective for fingers and hand-held tools without the need for any markers. The effectiveness of our approach is validated through a user study in which a group of people was given the task of adjusting parameters on a medical image display. Finally, we examine improving the efficiency of training for our interfaces by automatically assessing key user performance metrics (such as dexterity and confidence) and adapting the interface accordingly to reduce user frustration. We achieve this through a framework that uses Bayesian networks to estimate values for abstract hidden variables in our user model, based on analysis of data recorded from the user during operation of our system.
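As a rough illustration of the kind of kinematic-constraint heuristic described above (not the thesis's actual model), an occluded fingertip can be extrapolated from two visible joints using a known phalanx length:

```python
import math

def estimate_occluded_tip(knuckle, mid_joint, phalanx_len):
    """Place an occluded fingertip along the knuckle->mid-joint direction,
    one phalanx length beyond the middle joint (a simple kinematic constraint)."""
    dx, dy = mid_joint[0] - knuckle[0], mid_joint[1] - knuckle[1]
    d = math.hypot(dx, dy)
    if d == 0:
        return mid_joint  # degenerate: no direction information available
    return (mid_joint[0] + phalanx_len * dx / d,
            mid_joint[1] + phalanx_len * dy / d)

print(estimate_occluded_tip((0.0, 0.0), (0.0, 2.0), 1.0))
```

Constraints like fixed bone lengths are what make a partial pose estimate possible when the sensor cannot see every keypoint.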

Book Real time Hand Gesture Detection and Recognition for Human Computer Interaction

Download or read book Real time Hand Gesture Detection and Recognition for Human Computer Interaction written by Nasser Hasan Abdel-Qader Dardas and published by . This book was released in 2012. Available in PDF, EPUB and Kindle. Book excerpt: This thesis focuses on bare hand gesture recognition by proposing a new architecture to solve the problem of real-time vision-based hand detection, tracking, and gesture recognition for interaction with an application via hand gestures. The first stage of our system allows detecting and tracking a bare hand in a cluttered background using face subtraction, skin detection and contour comparison. The second stage allows recognizing hand gestures using bag-of-features and multi-class Support Vector Machine (SVM) algorithms. Finally, a grammar has been developed to generate gesture commands for application control. Our hand gesture recognition system consists of two steps: offline training and online testing. In the training stage, after extracting the keypoints for every training image using the Scale Invariant Feature Transform (SIFT), a vector quantization technique maps the keypoints of every training image into a unified-dimensional histogram vector (bag-of-words) after K-means clustering. This histogram is treated as an input vector for a multi-class SVM to build the classifier. In the testing stage, for every frame captured from a webcam, the hand is detected using my algorithm. Then, the keypoints are extracted for every small image that contains the detected hand posture and fed into the cluster model to map them into a bag-of-words vector, which is fed into the multi-class SVM classifier to recognize the hand gesture. Another hand gesture recognition system was proposed using Principal Component Analysis (PCA). The most significant eigenvectors and the weights of the training images are determined. In the testing stage, the hand posture is detected for every frame using my algorithm.
Then, the small image that contains the detected hand is projected onto the most significant eigenvectors of the training images to form its test weights. Finally, the minimum Euclidean distance between the test weights and the training weights of each training image is determined to recognize the hand gesture. Two applications of gesture-based interaction with a 3D gaming virtual environment were implemented. The exertion videogame makes use of a stationary bicycle as one of the main inputs for game playing. The user can control and direct left-right movement and shooting actions in the game through a set of hand gesture commands, while in the second game, the user can control and direct a helicopter over the city with a set of hand gesture commands.
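The PCA recognition step described above (project onto the most significant eigenvectors, then pick the training image with the minimum Euclidean distance in weight space) can be sketched with synthetic data; the gesture labels and image sizes are illustrative, not the thesis's dataset:

```python
import numpy as np

def train_pca(images, k):
    """images: (n, d) array of flattened training images."""
    mean = images.mean(axis=0)
    centered = images - mean
    # Rows of vt are the principal directions (eigenvectors of the covariance).
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigvecs = vt[:k]                 # keep the k most significant eigenvectors
    weights = centered @ eigvecs.T   # training weights in eigenspace
    return mean, eigvecs, weights

def recognize(test_image, mean, eigvecs, weights, labels):
    w = (test_image - mean) @ eigvecs.T          # project onto eigenvectors
    dists = np.linalg.norm(weights - w, axis=1)  # Euclidean distance in weight space
    return labels[int(np.argmin(dists))]         # minimum-distance training image

rng = np.random.default_rng(0)
train = rng.normal(size=(6, 64))  # six synthetic 8x8 "hand posture" images
labels = ["fist", "fist", "palm", "palm", "point", "point"]
mean, vecs, w = train_pca(train, k=3)
print(recognize(train[2], mean, vecs, w, labels))  # a training image maps to its own class
```

A real system would flatten the cropped hand image from the detector into the same `d`-dimensional vector before projection.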

Book Real time Dynamic Hand Shape Gesture Controller

Download or read book Real time Dynamic Hand Shape Gesture Controller written by Rajesh Radhakrishnan and published by . This book was released in 2011. Available in PDF, EPUB and Kindle. Book excerpt: The main objective of this thesis is to build a real-time gesture recognition system which can spot and recognize specific gestures from a continuous stream of input video. We address the recognition of single-handed dynamic gestures. We have considered gestures which are sequences of distinct hand poses. Gestures are classified based on their hand poses and the nature of their motion. The recognition strategy uses a combination of spatial hand shape recognition using a chamfer distance measure and temporal characteristics through dynamic programming. The system is fairly robust to background clutter and uses skin color for tracking. Gestures are an important modality for human-machine communication, and robust gesture recognition can be an important component of intelligent homes and assistive environments in general. A challenging task for a robust recognition system is the number of unique gesture classes that the system can recognize accurately. Our problem domain is two-dimensional tracking and recognition with a single static camera. We also address the reliability of the system as we scale the size of the gesture vocabulary. Our system is based on supervised learning; both detection and recognition use existing trained models. The hand tracking framework is based on a non-parametric, histogram-bin-based approach. A coarse histogram of size 32x32x32, containing skin and non-skin models, was built. The histogram bins were generated using samples of skin and non-skin images. The tracker framework effectively finds the moving skin locations, as it integrates both motion and skin detection. Hand shapes are another important modality of our gesture recognition system.
Hand shapes can hold important information about the meaning of a gesture, or about the intent of an action. Recognizing hand shapes can be a very challenging task, because the same hand shape may look very different in different images, depending on the view point of the camera. We use chamfer matching of edge-extracted hand regions to compute the minimum chamfer matching score. A dynamic programming technique is used to align the temporal sequences of gestures. In this thesis, we propose a novel hand gesture recognition system in which the user can specify his/her desired gesture vocabulary. The contributions made to the gesture recognition framework are: a user-chosen gesture vocabulary (i.e., the user is given an option to specify his/her desired gesture vocabulary); confusability analysis of gestures (i.e., during training, if the user provides similar patterns for two different gesture classes, the system automatically alerts the user to provide a different pattern for the specific class); a novel methodology to combine both hand shape and motion trajectory for recognition; and hand-tracker-aided (using motion and skin color detection) hand shape recognition. The system runs in real time with a frame rate of 15 frames per second in debug mode and 17 frames per second in release mode. The system was built on a normal hardware configuration with Microsoft Visual Studio, using OpenCV and C++. Experimental results establish the effectiveness of the system.
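A brute-force version of the chamfer matching score used above can be sketched in a few lines; real systems precompute a distance transform of the edge map for speed, and the toy point sets below are illustrative stand-ins for extracted edge contours:

```python
import math

def chamfer_score(template_edges, query_edges):
    """Mean distance from each template edge point to its nearest query edge point."""
    return sum(min(math.hypot(tx - qx, ty - qy) for qx, qy in query_edges)
               for tx, ty in template_edges) / len(template_edges)

def classify_shape(query_edges, templates):
    """Pick the hand-shape template with the minimum chamfer matching score."""
    return min(templates, key=lambda name: chamfer_score(templates[name], query_edges))

templates = {
    "open_palm": [(0, 0), (0, 4), (2, 4), (2, 0)],
    "fist": [(0, 0), (1, 1), (2, 0), (1, -1)],
}
query = [(0, 0), (0, 4), (2, 4), (2, 1)]  # roughly the open-palm outline
print(classify_shape(query, templates))
```

The asymmetry of the score (template to query) is typical of chamfer matching; some systems average both directions for robustness.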

Book Image based Gesture Recognition with Support Vector Machines

Download or read book Image based Gesture Recognition with Support Vector Machines written by Yu Yuan and published by ProQuest. This book was released in 2008. Available in PDF, EPUB and Kindle. Book excerpt: Recent advances in various display and virtual technologies, coupled with an explosion in available computing power, have given rise to a number of novel human-computer interaction (HCI) modalities, among which gesture recognition is undoubtedly the most grammatically structured and complex. However, despite the abundance of novel interaction devices, the naturalness and efficiency of HCI has remained low. This is due in particular to the lack of robust sensory data interpretation techniques. To address the task of gesture recognition, this dissertation establishes novel probabilistic approaches based on support vector machines (SVM). Of special concern in this dissertation are the shapes of contact images on a multi-touch input device for both 2D and 3D. Five main topics are covered in this work. The first topic deals with the hand pose recognition problem. To perform classification of different gestures, a recognition system must attempt to leverage between-class variations (semantically varying gestures), while accommodating potentially large within-class variations (different hand poses to perform certain gestures). For recognition of gestures, a sequence of hand shapes should be recognized. We present a novel shape recognition approach using Active Shape Model (ASM) based matching and SVM based classification. Firstly, a set of correspondences between the reference shape and the query image is identified through ASM. Next, a dissimilarity measure is created to measure how well any correspondence in the set aligns the reference shape and the candidate shape in the query image. Finally, SVM classification is employed to search through the set to find the best match from the kernel defined by the dissimilarity measure above.
Results presented show better recognition results than conventional segmentation and template matching methods. In the second topic, dynamic time alignment (DTA) based SVM gesture recognition is addressed. In particular, the proposed method combines DTA and SVM by establishing a new kernel. The gesture data is first projected into a common eigenspace formed by principal component analysis (PCA) and a distance measure is derived from the DTA. By incorporating DTA in the kernel function, general classification problems with variable-sized sequential data can be handled. In the third topic, a C++ based gesture recognition application for the multi-touchpad is implemented. It uses the proposed gesture classification method along with a recurrent neural network approach to recognize definable gestures in real time, then runs an associated command. This application can further enable users with different disabilities or preferences to custom-define gestures and enhance the functionality of the multi-touchpad. Fourthly, an SVM-based classification method that uses dynamic time warping (DTW) to measure the similarity score is presented. The key contribution of this approach is the extension of trajectory-based approaches to handle shape information, thereby enabling the expansion of the system's gesture vocabulary. It consists of two steps: converting a given set of frames into fixed-length vectors and training an SVM from the vectorized manifolds. Using shape information not only yields discrimination among various gestures, but also enables gestures that cannot be characterized solely based on their motion information to be classified, thus boosting overall recognition scores. Finally, a computer vision based gesture command and communication system is developed.
This system performs two major tasks: the first is to utilize the 3D traces of laser pointing devices as input to perform common keyboard and mouse control; the second is supplement-free continuous gesture recognition, i.e., data gloves or other assistive devices are not necessary for 3D gesture recognition. As a result, the gesture can be used as a text entry system in wearable computers or mobile communication devices, though the recognition rate is lower than that of approaches with assistive tools. The purpose of this system is to develop new perceptual interfaces for human-computer interaction based on visual input captured by computer vision systems, and to investigate how such interfaces can complement or replace traditional interfaces. Original contributions of this work span the areas of SVMs and the interpretation of computer sensory inputs, such as gestures, for advanced HCI. In particular, we have addressed the following important issues: (1) ASM-based kernels for shape recognition. (2) DTA-based sequence kernels for gesture classification. (3) Recurrent neural networks (RNN). (4) Exploration of a customizable HCI. (5) Computer vision based 3D gesture recognition algorithms and systems.
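The idea of building an SVM kernel from a time-alignment distance can be sketched as follows. The 1-D DTW and the Gaussian-style kernel (with an illustrative `gamma`) are simplifications of the dissertation's DTA kernel, and such alignment-based kernels are not guaranteed to be positive semi-definite in general:

```python
import math

def dtw(a, b):
    """Classic dynamic-programming DTW distance between two 1-D sequences."""
    inf = float("inf")
    D = [[inf] * (len(b) + 1) for _ in range(len(a) + 1)]
    D[0][0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[len(a)][len(b)]

def dtw_gram(seqs, gamma=0.5):
    """Gram matrix K[i][j] = exp(-gamma * DTW(s_i, s_j)) over variable-length sequences."""
    return [[math.exp(-gamma * dtw(a, b)) for b in seqs] for a in seqs]

# Two time-warped versions of the same gesture trace, plus a different one.
K = dtw_gram([[0, 1, 2], [0, 1, 1, 2], [5, 5, 5]])
```

A Gram matrix like `K` could then be handed to an SVM that accepts precomputed kernels, which is how variable-length gesture sequences become usable by a fixed-input classifier.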

Book Depth Camera Based Hand Gesture Recognition for Training a Robot to Perform Sign Language

Download or read book Depth Camera Based Hand Gesture Recognition for Training a Robot to Perform Sign Language written by Da Zhi and published by . This book was released in 2018. Available in PDF, EPUB and Kindle. Book excerpt: This thesis presents a novel depth camera-based real-time hand gesture recognition system for training a human-like robot hand to interact with humans through sign language. We developed a modular real-time Hand Gesture Recognition (HGR) system, which uses a multiclass Support Vector Machine (SVM) for training and recognition of static hand postures and N-Dimensional Dynamic Time Warping (ND-DTW) for dynamic hand gesture recognition. A 3D hand gesture training/testing dataset was recorded using a depth camera, tailored to accommodate the kinematic constructive limitations of the human-like robotic hand. Experimental results show that the multiclass SVM method has an overall 98.34% recognition rate in the HRI (Human-Robot Interaction) mode and a 99.94% recognition rate in the RRI (Robot-Robot Interaction) mode, as well as the lowest average run time compared to the k-NN (k-Nearest Neighbour) and ANBC (Adaptive Naïve Bayes Classifier) approaches. In dynamic gesture recognition, the ND-DTW classifier displays better performance than the DHMM (Discrete Hidden Markov Model), with a 97% recognition rate and a significantly shorter run time. In conclusion, the combination of multiclass SVM and ND-DTW provides an efficient solution for the real-time recognition of the hand gestures used to train a robot arm to perform sign language.
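An N-dimensional DTW of the kind used for the dynamic gestures above extends the classic dynamic-programming recurrence by taking the Euclidean distance between feature vectors at each step. This is a generic sketch with made-up gesture templates, not the thesis's exact formulation:

```python
import math

def nd_dtw(seq_a, seq_b):
    """DTW distance between two sequences of N-dimensional feature vectors."""
    inf = float("inf")
    n, m = len(seq_a), len(seq_b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(seq_a[i - 1], seq_b[j - 1])  # Euclidean step cost
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

def recognize_dynamic(query, templates):
    """Nearest-template classification of a dynamic gesture."""
    return min(templates, key=lambda name: nd_dtw(templates[name], query))

templates = {
    "wave": [(0, 0), (1, 1), (0, 0), (1, 1)],
    "push": [(0, 0), (0, 1), (0, 2), (0, 3)],
}
query = [(0, 0), (0, 1), (0, 1), (0, 2), (0, 3)]  # a time-warped "push"
print(recognize_dynamic(query, templates))
```

The warping lets a gesture performed slower or faster than its template still align at near-zero cost, which is exactly why DTW suits variable-speed hand motions.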

Book Static Hand Gesture Recognition Using Haar like Features

Download or read book Static Hand Gesture Recognition Using Haar like Features written by Kai Sin Wong and published by . This book was released in 2015 with a total of 56 pages. Available in PDF, EPUB and Kindle. Book excerpt:

Book Hand Gesture Recognition Using Artificial Neural Networks

Download or read book Hand Gesture Recognition Using Artificial Neural Networks written by Mohd Amrallah Mustafa and published by . This book was released in 2007 with a total of 128 pages. Available in PDF, EPUB and Kindle. Book excerpt: Hand gestures have long been part of human communication: young children usually communicate using gestures before they can talk, and adults may also gesture when they need to, or when they are mute or deaf. Thus the idea of teaching a machine to understand gestures is very appealing as a unique mode of communication. A reliable hand gesture recognition system could make the remote control obsolete. However, many of the new techniques proposed are too complicated to implement in real time, especially as a human-machine interface. This thesis focuses on recognizing hand gestures in static postures. Since static hand postures not only can express concepts but can also act as special transition states in temporal gesture recognition, estimating static hand postures is in fact a major topic in gesture recognition. A database consisting of 200 gesture images was built with the help of five volunteers. The images were captured in a controlled environment and the postures are free from occlusion; the background is uncluttered and the hand is assumed to have been localized. A system was then built to recognize the hand gestures. The captured image is first preprocessed to binarize the palm region, using the Sobel edge detection technique followed by a morphological operation. A new feature extraction technique was developed, based on horizontal and vertical state-transition counts and the ratio of the hand area with respect to the whole image area. This set of features has been shown to have high inter-class dissimilarity.
In order to have a system that can be easily trained, an artificial neural network was chosen for the classification stage. A multilayer perceptron with the back-propagation algorithm was developed, so the system is well suited for use as a human-machine interface. The gesture recognition system was built and tested in Matlab, where simulations have shown promising results. The recognition rate achieved in this research is 95%, which shows a major improvement in comparison to the available methods.
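The state-transition-count features described above can be sketched on a tiny binary image; this is one plausible reading of the feature (count 0/1 changes along rows and columns, plus the hand-area ratio), not necessarily the thesis's exact definition:

```python
def transition_features(img):
    """img: 2-D list of 0/1 (binarized hand). Returns the horizontal and vertical
    state-transition counts plus the ratio of hand pixels to total image area."""
    rows, cols = len(img), len(img[0])
    horiz = sum(1 for r in range(rows) for c in range(1, cols) if img[r][c] != img[r][c - 1])
    vert = sum(1 for c in range(cols) for r in range(1, rows) if img[r][c] != img[r - 1][c])
    area_ratio = sum(map(sum, img)) / (rows * cols)
    return [horiz, vert, area_ratio]

hand = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 1, 0, 0]]
print(transition_features(hand))
```

A fixed-length feature vector like this is what the multilayer perceptron would take as input, regardless of image content.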

Book Gesture Recognition

    Book Details:
  • Author : Sergio Escalera
  • Publisher : Springer
  • Release : 2017-07-19
  • ISBN : 3319570218
  • Pages : 583 pages

Download or read book Gesture Recognition written by Sergio Escalera and published by Springer. This book was released on 2017-07-19 with total page 583 pages. Available in PDF, EPUB and Kindle. Book excerpt: This book presents a selection of chapters, written by leading international researchers, related to the automatic analysis of gestures from still images and multi-modal RGB-Depth image sequences. It offers a comprehensive review of vision-based approaches for supervised gesture recognition methods that have been validated by various challenges. Several aspects of gesture recognition are reviewed, including data acquisition from different sources, feature extraction, learning, and recognition of gestures.

Book Real Time Single and Multi gesture Recognition Based on Skin Colour and Optical Flow

Download or read book Real Time Single and Multi gesture Recognition Based on Skin Colour and Optical Flow written by Muhammad Raza Ali and published by . This book was released in 2013. Available in PDF, EPUB and Kindle. Book excerpt: This thesis discusses our research conducted in the area of hand gesture recognition. The research objectives were to develop techniques that lead to accurate and robust gesture recognition under everyday settings, and that work with consistent accuracy in both single-user and multi-user scenarios. In this research, we propose techniques that rely on the combination of skin colour and optical flow. A background subtraction stage involves identifying skin regions in an image frame. We use skin colour thresholds in chromaticity space. In our work, we have simplified the process by identifying a reliable set of thresholds without camera calibration or a specialized imaging setup. In order to tackle the issue of false positives, we combine skin colour with optical flow magnitude, i.e., joint thresholding. We propose a novel skin colour-optical flow metric to track an arbitrarily changing number of skin regions. The proposed technique has been successfully applied to Bayesian and non-Bayesian tracking. We use a novel feature descriptor to represent a gesture-making hand, i.e., the Radon transform of its contour. The tracking mechanism and gesture classification are tested for single and simultaneous multi-gesture classification. We also propose a novel technique for grouping skin regions belonging to a particular person. In our work, we first try to establish the potential usefulness of standard HCI techniques by evaluating our real-time application. Based on the results, we propose a usability evaluation framework. We formalize usability evaluation for interactive vision systems by incorporating the standard practice of prototyping and user feedback. This framework can be helpful in conducting a well-rounded evaluation.
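The joint thresholding described above can be sketched per pixel as follows; the chromaticity ranges and the flow-magnitude cutoff are hypothetical values chosen for illustration, not the thresholds identified in the thesis:

```python
def is_skin(rgb, r_range=(0.36, 0.47), g_range=(0.28, 0.35)):
    """Threshold the pixel in normalized-rg chromaticity space.
    Range values here are illustrative placeholders."""
    R, G, B = rgb
    s = R + G + B
    if s == 0:
        return False
    r, g = R / s, G / s
    return r_range[0] <= r <= r_range[1] and g_range[0] <= g <= g_range[1]

def is_moving_skin(rgb, flow_magnitude, flow_threshold=0.5):
    """Joint thresholding: keep a pixel only if it is skin-coloured AND moving."""
    return is_skin(rgb) and flow_magnitude > flow_threshold

print(is_moving_skin((180, 120, 90), 1.2))  # skin-coloured and moving
print(is_moving_skin((180, 120, 90), 0.1))  # skin-coloured but static
```

Normalized-rg chromaticity discards overall brightness, which is what lets a single threshold box work without per-camera calibration; the optical-flow condition then suppresses static skin-coloured background.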

Book Dual sensor Approaches for Real time Robust Hand Gesture Recognition

Download or read book Dual sensor Approaches for Real time Robust Hand Gesture Recognition written by Kui Liu and published by . This book was released in 2015 with a total of 198 pages. Available in PDF, EPUB and Kindle. Book excerpt: The use of hand gesture recognition has been steadily growing in various human-computer interaction applications. Under realistic operating conditions, it has been shown that hand gesture recognition systems exhibit recognition rate limitations when using a single sensor. Two dual-sensor approaches have thus been developed in this dissertation in order to improve the performance of hand gesture recognition under realistic operating conditions. The first approach involves the use of image pairs from a stereo camera setup, merging the image information from the left and right cameras, while the second approach involves the use of a Kinect depth camera and an inertial sensor, fusing differing modality data within the framework of a hidden Markov model. The emphasis of this dissertation has been on system building and practical deployment. More specifically, the major contributions of the dissertation are: (a) improvement of hand gesture recognition rates when using a pair of images from a stereo camera, compared to using a single image, by fusing the information from the left and right images in a complementary manner, and (b) improvement of hand gesture recognition rates when using a dual-modality sensor setup consisting of a Kinect depth camera and an inertial body sensor, compared to situations where each sensor is used individually on its own. Experimental results obtained indicate that the developed approaches generate higher recognition rates in different backgrounds and lighting conditions compared to situations where an individual sensor is used. Both approaches are designed such that the entire recognition system runs in real time on a PC platform.
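The dissertation fuses the two modalities inside a hidden Markov model framework; as a much simpler illustration of why dual-sensor fusion helps, per-class scores from two sensors can be combined with a product rule (the gesture names and score values below are made up):

```python
def fuse_scores(p_camera, p_inertial):
    """Late fusion by the product rule, renormalized over the shared gesture set."""
    fused = {g: p_camera[g] * p_inertial[g] for g in p_camera}
    total = sum(fused.values())
    return {g: v / total for g, v in fused.items()}

p_camera = {"wave": 0.6, "swipe": 0.4}    # the camera mildly favours "wave"
p_inertial = {"wave": 0.3, "swipe": 0.7}  # the inertial sensor favours "swipe"
fused = fuse_scores(p_camera, p_inertial)
print(max(fused, key=fused.get))
```

When one sensor is degraded (for example the camera in poor lighting), its near-uniform scores contribute little, so the confident sensor dominates the fused decision.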