Gesture Recognition Technology
“Have you seen the remote?” “I left it on the table after watching my matinee show.” “It is not here; I will miss the news again because of you!” In the near future, such heated discussions over the remote control won’t disturb the harmony of the house, not because the remote will be put back correctly but because remote controls will soon be objects of the past. Technology has finally reached the point where our hands will take over the job, communicating directly with the computer or television. For instance, to delete a folder or file from the computer, place your palm on it and flick it away as if throwing paper into a dustbin. Even while using the microwave oven to bake a cake, waving our hands in the air like a magician would serve as a command to the oven. While some of us might think of this as a futuristic vision, others have already experienced it through what we call “Gesture Recognition Technology”.
Since the computer revolution began, there have been continual attempts to improve human-computer interaction. Computers have now become an integral part of our lives, so using them should be as trouble-free as talking to someone. Earlier, humans interacted with these smart machines through a keyboard or a mouse, but attempts are now being made to make man-machine interaction as natural as possible. Touch screen technology fulfils part of this requirement, and it is soon expected to be replaced by gesture recognition technology.
WHAT IS A GESTURE?
As per the Oxford Concise Dictionary (1995), a gesture is defined as “a movement of a limb or the body as an expression of thought or feeling”. Similarly, Random House defined “gesture” as “the movement of the body, head, arms, hands, or face that expresses an idea, opinion, emotion, etc.”
Kurtenbach and Hulteen defined it as “A gesture is a motion of the body that contains information. Waving goodbye is a gesture. Pressing a key on a keyboard is not a gesture because the motion of a finger on its way to hitting a key is neither observed nor significant. All that matters is which key was pressed”.
Human gestures are undoubtedly natural. They may often prove more efficient and powerful as compared to various other modes of interaction. Tracking a head/hand or a body position or configuration may be quite valuable for controlling objects/systems or for feeding input parameters to the system. Gestures may also be used for expressing yourself. As an example, nodding may serve to communicate your consent or agreement, raising a finger may be a sign of your wish to interrupt, saying “huh” may indicate “I’m with you, continue”. Gesture recognition involves tracking of a human position, orientation or movement and finally interpretation of the same so as to recognize semantically consequential gestures.
Gestures and gesture recognition are terms increasingly encountered in discussions of human-computer interaction. Often, people use the term to cover character recognition, the recognition of proofreaders’ symbols, shorthand, and various other forms of interaction. In fact, every physical action involves a gesture of some sort in order to be articulated.
Gestures are communicative, meaningful body motions – i.e., physical movements of the fingers, hands, arms, head, face, or body with the objective to convey information or interact with the environment.
Cadoz described three functional roles of human gesture:
• Semiotic – to communicate meaningful information.
• Ergotic – to manipulate the environment.
• Epistemic – to discover the environment through tactile experience.
Gestures are used to convey information in a variety of ways. An emotion of sadness can be conveyed through facial expression, a lowered head position, drooped shoulders, and lethargic movement. Similarly, a gesture to indicate “Stop!” can be communicated with the help of a raised hand with the palm facing forward, or an exaggerated waving of both hands above the head. Since many concepts can map to the same gesture, gestures may often be ambiguous; at the same time, a single concept can be expressed through many different gestures, so gestures are not completely specified. As speech and handwriting vary from one individual to another, gestures are also subjective: they vary among individuals, and they vary from instance to instance for a particular individual.
Though gestural communication is rich, it is equally complex, and researchers have categorized it in different ways. Kendon described a “gesture continuum,” defining five different kinds of gestures:
• Gesticulation.
Spontaneous movements of the hands and arms that are accompanied by speech.
• Language-like gestures.
Gesticulation integrated into a speech, replacing a particular spoken word or phrase.
• Pantomimes.
Gestures that depict objects or actions, with or without accompanying speech.
• Emblems.
Gestures like “V for victory”, “thumbs up” and assorted rude gestures.
• Sign languages.
Well-defined linguistic systems such as American Sign Language. Shown below are the alphabets “A”, “C” and “F”.
Spontaneous gestures (gesticulation in Kendon’s continuum) make up some 90% of human gestures. People use gestures even while talking on the telephone, and blind people commonly gesture while talking. Across cultures, speech-associated gesture is natural and common. Despite this, emblematic gestures and sign languages, although perhaps less spontaneous and natural, carry clearer semantic meaning and may be more appropriate for command-and-control interaction.
CLASSIFICATION OF GESTURES
Gestures can be categorized into the following application domain classifications:
• Pre-emptive Gestures
A pre-emptive natural hand gesture occurs when the hand is moving towards a specific control (device/appliance) and the detection of the approaching hand is used to pre-empt the operator’s intent to operate that control.
An example of such a gesture is operation of the interior light: as the hand is detected approaching the light switch, the light could switch on; if the hand is detected approaching the switch again, it would switch off. Thus the hand movement to and from the device being controlled can be used as a pre-emptive gesture.
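As a minimal sketch, the pre-emptive toggle described above could be driven by a proximity sensor near the switch. The class name, threshold, and distance readings below are invented for illustration; no particular sensor API is assumed.

```python
# Hypothetical pre-emptive gesture controller: a proximity sensor near the
# light switch reports the hand's distance, and crossing a threshold toggles
# the light. The threshold and readings are illustrative assumptions.

APPROACH_THRESHOLD_CM = 10.0

class PreemptiveLightControl:
    def __init__(self):
        self.light_on = False
        self.hand_near = False  # debounce flag: toggle once per approach

    def on_distance_reading(self, distance_cm: float) -> bool:
        """Toggle the light the moment the hand first comes within range."""
        if distance_cm <= APPROACH_THRESHOLD_CM and not self.hand_near:
            self.hand_near = True
            self.light_on = not self.light_on  # pre-empt the actual press
        elif distance_cm > APPROACH_THRESHOLD_CM:
            self.hand_near = False  # hand withdrawn; re-arm the trigger
        return self.light_on

ctrl = PreemptiveLightControl()
for d in [50.0, 30.0, 8.0, 5.0, 40.0, 7.0]:  # approach, linger, leave, return
    state = ctrl.on_distance_reading(d)
print(state)  # toggled on at 8 cm, off again at 7 cm -> False
```

The debounce flag is the important design choice: without it, every frame in which the hand stays near the switch would flip the light again.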
• Function Associated Gestures
Function Associated gestures are those gestures that use the natural action of the arm/hand/other body part to associate or provide a cognitive link to the function being controlled.
For example, moving the arm in circles pivoted about the elbow towards the fan could signify the operator’s wish to switch on the fan. Such gestures have an action that can be associated with a particular function.
• Context Sensitive Gestures
Context Sensitive gestures are natural hand gestures that are used to respond to operator prompts or automatic events.
Possible context sensitive gestures to indicate yes/no or accept/reject could be a thumbs-up and a thumbs-down. These could be used to answer or reject an incoming phone call, an incoming voice message or an incoming SMS text message.
• Global Shortcut Gestures
Global shortcut gestures are natural symbolic gestures that can be used at any time; here the term “natural” refers to hand gestures typically used in human-to-human communication. It is expected that gestures will be chosen so that the user can easily link each gesture to the function being controlled. Possible applications include fairly frequently used controls that otherwise impose an unwanted visual workload, such as “phone dial home” or “phone dial work”.
• Natural Dialogue Gestures
Natural dialogue hand gestures use gestures from human-to-human communication to initiate a gesture dialogue with the vehicle. Typically this involves two gestures being used, although only one gesture at any given time.
For example if a person fanned his hand in front of his face, the gesture system could detect this and interpret that he is too hot and would like to cool down.
GESTURE SENSING TECHNOLOGIES
Gesture recognition is the process by which gestures made by the user are made known to the system. It can also be described as the mathematical interpretation of a human motion by a computing device. The various types of gesture recognition technologies currently in use are:
· Contact type
It involves touch-based gestures using a touch pad or a touch screen; recognition is achieved by sensing physical contact on the pad or screen. Touch pads and touch screens are primarily used for controlling cursors on PCs and mobile phones, and are gaining user acceptance in point-of-sale terminals, PDAs, and various industrial and automotive applications as well. Touch-based gesture systems are relatively easy for the public to accept because they preserve a physical user interface.
· Non-Contact
· Device Gesture Technologies
Device-based techniques use a glove, stylus, or other position tracker, whose movements send signals that the system uses to identify the gesture.
One commonly employed technique for gesture recognition is to instrument the hand with a glove equipped with a variety of sensors that provide information about hand position, orientation, and flex of the fingers. The first commercial hand tracker, the DataGlove, used thin fibre-optic cables running down the back of each hand, each with a small crack in it. Light is shone down the cable, so when the fingers are bent, light leaks out through the cracks; measuring the light loss gives an accurate reading of the hand pose. A similar technique is used in the wearable suits employed in virtual-environment applications. Though gloves provide accurate measurements of hand shape, they are cumbersome to wear and are connected through wires.
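The light-loss principle above can be sketched as a simple calibration: intensity received with the finger straight versus fully bent bounds a linear map to a bend angle. All constants below are invented for illustration and would come from per-user calibration in a real glove.

```python
# Illustrative sketch of the DataGlove principle: light leaking from a
# cracked fibre increases as the finger bends, so the received intensity
# maps to a bend angle. Calibration constants here are assumptions.

def flex_angle_deg(received_intensity: float,
                   straight_intensity: float = 1.0,
                   full_bend_intensity: float = 0.4,
                   max_bend_deg: float = 90.0) -> float:
    """Map a measured light intensity to an estimated joint angle.

    straight_intensity: reading with the finger straight (least leakage).
    full_bend_intensity: reading at maximum bend (most leakage).
    """
    # Normalise the light loss to [0, 1], clamping out sensor noise.
    loss = (straight_intensity - received_intensity) / \
           (straight_intensity - full_bend_intensity)
    loss = max(0.0, min(1.0, loss))
    return loss * max_bend_deg

print(flex_angle_deg(1.0))  # finger straight -> 0.0 degrees
print(flex_angle_deg(0.4))  # fully bent -> 90.0 degrees
```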
Various other kinds of systems are reported in the literature for intrusive hand gesture recognition. Some use a bend sensor on the index finger, an acceleration sensor on the hand, and a micro switch for activation.
Styli are interfaced with display technologies to record and interpret gestures like the writing of text.
To reduce the physical restriction caused by cables, an alternative technique is to wear an ultrasonic emitter on the index finger, with a receiver capable of tracking the position of the emitter mounted on a head-mounted device (HMD).
To avoid placing sensors on the hand and fingers, the “GestureWrist” uses capacitive sensors on a wristband to differentiate between two gestures (fist and point). Wearing a glove or suit is clearly not a practical proposition for many applications, such as automotive ones.
· Vision-based Technologies
There are two approaches to vision-based gesture recognition:
Model based techniques:
These try to create a three-dimensional model of the user’s hand and use this for recognition. Some systems track gesture movements through a set of critical positions: when a gesture moves through the same critical positions as a stored gesture, the system recognizes it. Other systems track the body part being moved, compute the nature of the motion, and then determine the gesture, generally by applying statistical modelling to a set of movements.
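The critical-positions idea can be sketched as ordered template matching: a live trajectory is recognized as a stored gesture when it passes near that gesture's key positions in sequence. Gesture names, coordinates, and the tolerance below are illustrative assumptions.

```python
# Minimal sketch of model-based recognition through critical positions.
# Each stored gesture is a sequence of key hand positions (normalised 2D
# coordinates); a trajectory matches when it visits them in order.

import math

STORED_GESTURES = {
    "swipe_right": [(0.0, 0.5), (0.5, 0.5), (1.0, 0.5)],
    "raise_hand":  [(0.5, 0.0), (0.5, 0.5), (0.5, 1.0)],
}

def recognise(trajectory, tolerance=0.1):
    """Return the first stored gesture whose critical positions the
    trajectory visits in order, or None if nothing matches."""
    for name, keys in STORED_GESTURES.items():
        i = 0  # index of the next critical position to match
        for (x, y) in trajectory:
            kx, ky = keys[i]
            if math.hypot(x - kx, y - ky) <= tolerance:
                i += 1
                if i == len(keys):
                    return name
        # fell through without hitting every key position; try next gesture
    return None

# A noisy left-to-right hand path hits all three swipe_right keys in order.
path = [(0.02, 0.48), (0.3, 0.55), (0.52, 0.5), (0.8, 0.47), (0.98, 0.51)]
print(recognise(path))  # -> swipe_right
```

The tolerance parameter is what absorbs the per-user variability mentioned earlier: a looser tolerance accepts sloppier gestures but risks confusing nearby templates.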
Image based methods:
Image-based techniques detect a gesture by capturing pictures of a user’s motions during the course of a gesture. The system sends these images to computer-vision software, which tracks them and identifies the gesture.
These methods typically extract flesh tones from the background images to find hands, and then try to extract features such as fingertips, hand edges, or gross hand geometry for use in gesture recognition.
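A toy version of this pipeline is sketched below: threshold "flesh tone" pixels in an RGB frame, then take the topmost skin pixel as a crude fingertip feature. The RGB thresholds are rough illustrative values, not a production skin model.

```python
# Toy image-based pipeline: skin-colour segmentation plus a crude
# fingertip feature (topmost skin pixel). Thresholds are assumptions.

import numpy as np

def skin_mask(frame: np.ndarray) -> np.ndarray:
    """Boolean mask of pixels whose RGB values fall in a rough skin range."""
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b)

def fingertip(frame: np.ndarray):
    """Return (row, col) of the topmost skin pixel, or None if none found."""
    ys, xs = np.nonzero(skin_mask(frame))
    if ys.size == 0:
        return None
    top = np.argmin(ys)  # smallest row index = highest point in the image
    return int(ys[top]), int(xs[top])

# Synthetic 4x4 frame: dark background, two skin-like pixels; the fingertip
# is the higher one.
frame = np.zeros((4, 4, 3), dtype=np.uint8)
frame[1, 2] = (200, 120, 90)
frame[3, 0] = (210, 130, 100)
print(fingertip(frame))  # -> (1, 2)
```

Real systems typically work in a colour space such as HSV or YCrCb rather than raw RGB, since skin tones cluster more tightly there under varying lighting.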
· Electrical Field Sensing
Proximity of a human body or body part can be measured by sensing electric fields. The term refers to a family of non-contact measurements of the human body that can be made with slowly varying electric fields. These measurements can be used to determine the distance of a human hand or other body part from an object, which enables a vast range of applications across a wide range of industries.
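In the simplest parallel-plate approximation, the capacitance between an electrode and the approaching hand varies inversely with distance (C = εA/d), so a capacitance reading can be inverted into a distance estimate. The electrode area and readings below are illustrative assumptions, and real sensors require calibration against a more complicated field geometry.

```python
# Rough sketch of capacitive (electric field) proximity sensing using the
# parallel-plate model C = eps * A / d. Constants are assumptions.

EPS_0 = 8.85e-12        # permittivity of free space, F/m
ELECTRODE_AREA = 0.01   # electrode area in m^2 (assumed)

def capacitance(distance_m: float) -> float:
    """Capacitance seen by the electrode as a hand approaches."""
    return EPS_0 * ELECTRODE_AREA / distance_m

def estimate_distance(measured_c: float) -> float:
    """Invert the same model to recover hand distance from a reading."""
    return EPS_0 * ELECTRODE_AREA / measured_c

reading = capacitance(0.05)                  # simulate a hand 5 cm away
print(round(estimate_distance(reading), 3))  # -> 0.05 (metres)
```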
Working of Gesture Recognition Technology
Gesture technology follows a few basic states to make the machine perform in the most optimized manner. These are:
1. Wait: In this state, the machine is waiting for the user to perform a gesture and provide an input to it.
2. Collect: After the gesture is performed, the machine gathers the information conveyed by it.
3. Manipulate: In this state, the system has gathered enough data from the user or has been given an input. This state is like a processing state.
4. Execute: In this state, the system performs the task that the user has asked it to do through the gesture.
Devices that work on this technology usually follow these stages but their duration might vary from machine to machine depending on its configuration and the task it is supposed to do.
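The four stages above can be sketched as a small state machine. The state names follow the text; the gesture payloads and the way a gesture "ends" (a `None` sample) are illustrative assumptions.

```python
# Sketch of the wait -> collect -> manipulate -> execute cycle from the
# text. Sample values and the end-of-gesture signal are assumptions.

class GestureMachine:
    def __init__(self):
        self.state = "wait"
        self.samples = []

    def step(self, sample=None):
        if self.state == "wait":
            if sample is not None:               # user started a gesture
                self.state = "collect"
                self.samples = [sample]
        elif self.state == "collect":
            if sample is not None:
                self.samples.append(sample)      # keep gathering motion data
            else:                                # gesture ended
                self.state = "manipulate"
        elif self.state == "manipulate":
            self.command = "-".join(self.samples)  # interpret the input
            self.state = "execute"
        elif self.state == "execute":
            result = f"performed: {self.command}"
            self.state = "wait"                  # ready for the next gesture
            return result
        return None

m = GestureMachine()
for s in ["swipe", "left", None, None]:  # collect two samples, then process
    m.step(s)
print(m.step())  # -> performed: swipe-left
```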
A basic working of the gesture recognition system can be understood from the following figure:
Applications of Gesture Recognition Technology
While the initial aim of gesture recognition technology was only to improve human-computer interaction, it has found plenty of applications as computer usage became widespread. Current applications of gesture recognition technology include:
· In Video Game Controllers: With the arrival of seventh-generation video game consoles such as the Microsoft Xbox 360 with the Kinect sensor and the Sony PS3 with its motion-sensing controller, gesture recognition became widely implemented. With the Xbox, the user often is the controller and has to perform all the physical movements that they want the character in the game to make. For instance, one has to imitate kicking a football when playing a football game on such a console. The Kinect sensor has a camera that captures the motions and processes them so that the in-game character reproduces them exactly.
With the Sony PS3, users move the controller so that it imitates the action they want the character in the game to perform.
· Aid to the physically challenged: People who are visually impaired or who have impaired motor functions can use gesture-based input devices to access computers without discomfort. Motorized wheelchairs are also now coming with gesture-based systems. All that is required from the user is to lightly move a hand over the panel on the armrest of the wheelchair; the hand movements act as a controller, and both speed and direction can be easily controlled.
Shown below is a typical example of gesture controlled wheel chair.
· Other Applications: Gesture recognition technology is gaining popularity in almost every area that uses smart machines. In air traffic control, this technology can help provide detailed location information about airplanes near the airport. In cranes, it can be used instead of remotes so that loads can easily be picked up and set down at difficult locations.
Smart TVs now come with this technology, freeing the user from the remote and allowing channels and volume levels to be changed by hand. Qualcomm has recently launched smart cameras and tablet computers based on this technology. Such a camera recognizes the proximity of the subject before taking the picture and adjusts itself according to the requirements. Tablet computers with this technology ease tasks such as giving presentations or changing songs on a jukebox: the user can browse all the data just by waving his hands. Various touch-screen smartphones are also incorporating this technology to provide easy access. Gesture recognition can likewise be used to make robots understand human gestures and work accordingly.
GESTURE RECOGNITION CHALLENGES
1. Latency
One of the key challenges in gesture recognition is that the image processing can be significantly slow, creating unacceptable latency for video games and other similar applications.
2. Lack of Gesture Language
Since there is no common gesture language, different users make gestures differently. If users make gestures as they see fit, gesture recognition systems will certainly have difficulty identifying motions with the probabilistic methods currently in use.
3. Robustness
Many gesture recognition systems do not read motions accurately or optimally, due to factors such as insufficient background light and high background noise.
4. Performance
The image processing involved in gesture recognition is quite resource-intensive, and applications may be difficult to run on resource-constrained devices such as PDAs.
The rate of user acceptance of gesture recognition systems will be driven by how quickly and how widely gesture recognition becomes established in our everyday lives, including the other environments in which human interaction with machines takes place: the office, the home, banking, gaming, and other leisure activities.
During the next few years, gesture recognition is most likely to be used primarily in niche applications, because making mainstream applications work with gesture recognition technology will take more effort than it is worth.