Work Paper on Gesture Recognition

We present an ongoing work paper on gesture recognition. Considerable effort has been made to provide an interface for Human Machine Interaction (HMI) that can be used to enter alphabetic characters into the internet browser of an interactive set-top box. Gesture recognition enables humans to interface with a machine and interact naturally without any mechanical devices. Using gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly, which could potentially make conventional input devices such as mice, keyboards and even touch screens redundant. Gesture recognition pertains to recognizing meaningful expressions of motion by a human, involving the hands, arms, face, head, and/or body. It is of utmost importance in designing an intelligent and efficient human-computer interface. The applications of gesture recognition are manifold, ranging from sign language through medical rehabilitation to virtual reality. In this paper, we survey gesture recognition with particular emphasis on hand gestures and facial expressions.

INTRODUCTION

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques.

Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.

Gesture recognition can be conducted with techniques from computer vision and image processing.
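
As a rough illustration of how such an image-processing pipeline can fit together, the sketch below segments skin-coloured pixels, describes the largest blob with two simple shape features, and compares them against stored templates. This is only a minimal sketch under stated assumptions: the skin-colour range, the two features and the nearest-template classifier are illustrative choices rather than a reference implementation, and the templates dictionary is assumed to be supplied by the caller.

```python
# Minimal illustrative pipeline for static hand-gesture recognition (OpenCV 4).
# Assumptions: the HSV skin-colour range, the two shape features and the
# caller-supplied template dictionary are placeholders for illustration.
import cv2
import numpy as np

def extract_features(frame_bgr):
    """Segment skin-coloured pixels and describe the largest blob."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))  # crude skin range
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    hand = max(contours, key=cv2.contourArea)
    hull_area = cv2.contourArea(cv2.convexHull(hand))
    solidity = cv2.contourArea(hand) / hull_area if hull_area > 0 else 0.0
    x, y, w, h = cv2.boundingRect(hand)
    aspect = w / h if h > 0 else 0.0
    return np.array([solidity, aspect])

def classify(features, templates):
    """Nearest-template classifier; templates maps label -> feature vector."""
    if features is None:
        return None
    return min(templates, key=lambda lbl: np.linalg.norm(features - templates[lbl]))
```

A real system would replace the hand-tuned colour threshold and the two features with a trained detector and classifier, but the overall stages (capture, segment, describe, classify) stay the same.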

GESTURE TYPES

In computer interfaces, two types of gestures are distinguished (a short sketch contrasting them follows this list):

Offline gestures: Those gestures that are processed after the user's interaction with the object. An example is the gesture to activate a menu.

Online gestures: Direct manipulation gestures. They are used to scale or rotate a tangible object.
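
To make the distinction concrete, the small hypothetical sketch below contrasts the two: an offline recognizer classifies the whole stroke only after the user releases, while an online handler applies every event immediately to the manipulated object. The Event structure, the swipe threshold and the gesture names are assumptions made for illustration and do not correspond to any particular toolkit's API.

```python
# Hypothetical sketch contrasting offline and online gesture handling.
# The Event structure, swipe threshold and gesture names are illustrative
# assumptions, not a real toolkit's API.
from dataclasses import dataclass, field

@dataclass
class Event:
    x: float
    y: float
    scale: float = 1.0  # e.g. a pinch factor reported by a touch driver

@dataclass
class OfflineGestureRecognizer:
    """Collects the whole stroke, then classifies it once (e.g. to open a menu)."""
    stroke: list = field(default_factory=list)

    def on_event(self, e: Event) -> None:
        self.stroke.append(e)          # nothing happens until the stroke ends

    def on_release(self) -> str:
        dx = self.stroke[-1].x - self.stroke[0].x if self.stroke else 0.0
        return "open_menu" if dx > 100 else "none"  # toy right-swipe rule

@dataclass
class OnlineGestureHandler:
    """Applies each event immediately, e.g. scaling an on-screen object."""
    obj_scale: float = 1.0

    def on_event(self, e: Event) -> float:
        self.obj_scale *= e.scale      # direct manipulation: no final classification
        return self.obj_scale
```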

USES OF GESTURE RECOGNITION

Gesture recognition is useful for processing information from humans that is not conveyed through speech or typing. Various types of gestures can be identified by computers:

• Sign language recognition. Just as speech recognition can transcribe speech to text, certain types of gesture recognition software can transcribe the symbols represented through sign language into text.

• For socially assistive robotics. By using proper sensors (accelerometers and gyroscopes) worn on the body of a patient and by reading the values from those sensors, robots can assist in patient rehabilitation. The best example is stroke rehabilitation; a brief sensor-reading sketch follows this list.

• Directional indication through pointing. Pointing has a very specific purpose in our society: to reference an object or location based on its position relative to ourselves. The use of gesture recognition to determine where a person is pointing is useful for identifying the context of statements or instructions. This application is of particular interest in the field of robotics.

• Control through facial gestures. Controlling a computer through facial gestures is a useful application of gesture recognition for users who may not physically be able to use a mouse or keyboard. Eye tracking in particular may be of use for controlling cursor motion or focusing on elements of a display.

• Alternative computer interfaces. Forgoing the traditional keyboard-and-mouse setup for interacting with a computer, strong gesture recognition could allow users to accomplish frequent or common tasks using hand or face gestures to a camera.

• Immersive game technology. Gestures can be used to control interactions within video games, making the game player's experience more interactive or immersive.


• Virtual controllers. For systems where the act of finding or acquiring a physical controller could require too much time, gestures can be used as an alternative control mechanism. Controlling secondary devices in a car, or controlling a television set are examples of such usage.

• Affective computing. In affective computing, gesture recognition is used in the process of identifying emotional expression through computer systems.

• Remote control. Through the use of gesture recognition, "remote control with the wave of a hand" of various devices is possible. The signal must indicate not only the desired response but also which device is to be controlled.
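
As a toy illustration of the sensor-reading idea in the assistive-robotics item above, the sketch below flags a repetitive arm movement by thresholding acceleration magnitude over a short window. The read_accelerometer function is a hypothetical stand-in for a real sensor driver, and the threshold and window size are assumptions chosen only for illustration.

```python
# Illustrative detection of arm movement from body-worn accelerometer samples.
# read_accelerometer() is a hypothetical placeholder for a real sensor driver;
# the 12 m/s^2 threshold and 50-sample window are assumptions for illustration.
import math
from collections import deque

def read_accelerometer():
    """Placeholder: a real system would return (ax, ay, az) from hardware."""
    return (0.0, 0.0, 9.81)  # resting value: gravity only

def detect_movement(threshold=12.0, window=50):
    magnitudes = deque(maxlen=window)
    for _ in range(window):
        ax, ay, az = read_accelerometer()
        magnitudes.append(math.sqrt(ax * ax + ay * ay + az * az))
    # Count samples whose magnitude exceeds resting gravity by a margin.
    active = sum(1 for m in magnitudes if m > threshold)
    return active > window // 4  # "movement" if a quarter of the window is active

if __name__ == "__main__":
    print("movement detected" if detect_movement() else "at rest")
```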

INPUT DEVICES

The ability to track a person’s movements and determine what gestures they may be performing can be achieved through various tools. Although a large amount of research has been done on image- and video-based gesture recognition, the tools and environments used vary between implementations.

• Stereo cameras. Using two cameras whose relation to one another is known, a 3D representation can be approximated from the output of the cameras. To establish the cameras’ relation, one can use a positioning reference such as a lexian-stripe or infrared emitters. In combination with direct motion measurement (6D-Vision), gestures can be detected directly.

• Controller-based gestures. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by software. Mouse gestures are one such example, where the motion of the mouse is correlated to a symbol being drawn by a person’s hand, as is the Wii Remote, which can study changes in acceleration over time to represent gestures.

• Single camera. A normal camera can be used for gesture recognition where the resources or environment would not be suitable for other forms of image-based recognition. Although not necessarily as effective as stereo or depth-aware cameras, a single camera makes the approach accessible to a much wider audience; a minimal single-camera sketch follows this list.
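
As one illustration of what a single commodity camera makes possible, the minimal sketch below uses plain frame differencing with OpenCV to flag large motion in front of a webcam; a fuller system would go on to track and classify the moving region. The camera index, the pixel-difference threshold and the 300-frame limit are assumptions chosen for illustration.

```python
# Illustrative single-camera motion detection using frame differencing.
# Assumptions: a webcam at index 0; the pixel-difference threshold, the 5%
# motion ratio and the 300-frame limit are arbitrary illustrative values.
import cv2

cap = cv2.VideoCapture(0)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY) if ok else None

for _ in range(300):                       # sample a short stretch of video
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev_gray)
    _, moving = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    motion_ratio = cv2.countNonZero(moving) / moving.size
    if motion_ratio > 0.05:                # a large moving region: candidate gesture
        print("possible gesture in this frame")
    prev_gray = gray

cap.release()
```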

CHALLENGES

There are many challenges associated with the accuracy and usefulness of gesture recognition software. For image-based gesture recognition there are limitations on the equipment used and image noise. Images or video may not be under consistent lighting, or in the same location. Items in the background or distinct features of the users may make recognition more difficult.

The variety of implementations for image-based gesture recognition may also cause issues for the viability of the technology in general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial or full) occur. Furthermore, the distance from the camera, and the camera’s resolution and quality, also cause variations in recognition accuracy.

In order to capture human gestures by visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition or for capturing movements of the head, facial expressions or gaze direction.

GESTURES AND POSTURE

This section considers ways of exploring and analysing the largely unconscious world of communication through gestures and postures.

WHY? Perhaps the most fundamental form of visual communication – indeed of all communication – is body language. This is a language which we have all learnt to speak and understand and yet it is so fundamental that we are often not conscious of it. The way we carry ourselves, the gestures we use and our facial expressions all communicate much more than we realise. No analysis of communication practices and power can be complete without giving some space for reflection on this.

WHEN? Facilitators should be aware of the basic signals from participants’ gestures and postures from the start – as this will help them to identify ways of making people more comfortable or involved when their body language shows detachment. It is something that might be explored with all participants at any stage.


HOW? There are many dangers in exploring gestures and postures in a Reflect process. The last thing we need is for people to be taught how to comport themselves "properly", as if this were some kind of social finishing school that teaches people how to behave. However, at the same time it is clear that this should be a legitimate area for analysis and reflection – and that it can give people new insights into both themselves and others that might be helpful for addressing power relationships.

MAPPING POSTURES: An easy place to start with this discussion is to ask the group to identify different postures that communicate clear meanings to them. People can be asked to exaggerate at first to make their point clear. Participants could take it in turns to adopt a posture, with others guessing the intention or describing how they interpret it. This can be done with different basic positions – for example, getting people to show different ways of sitting that send different messages to others – and then later different ways of standing. This can be extended by asking participants to adopt different postures in a simulated situation, such as at a community assembly or at a party. It is particularly interesting to overlay a power analysis on each posture identified: what does this posture say about this person’s status and power in this situation?

MAPPING GESTURES: A similar process can be used to map out gestures – identifying as many different ways as possible of using the hands to communicate meaning – and again exploring the power dimensions of different gestures. As people practice doing this, more subtle gestures will be identified.

MAPPING FACIAL EXPRESSIONS: A similar process again can be used with facial expressions – trying to identify smaller and smaller changes. This process can involve a struggle to find the right language to distinguish differences.

POWER PAIRING OF GESTURES OR POSTURES: The power (or lack of power) of some postures or gestures is difficult to read alone. So, asking people in pairs to create a power tableau, conscious of gesture, posture and facial expression, can add a new dimension to this analysis. Pair work can also explore how gesture and posture affect others – for example, in pairs, asking one person to talk and the other to gaze around the room avoiding eye contact. How does that affect the talker?

BODY SCULPTURES: These can be done in various ways. One option is to have one or two sculptors who shape everyone else to build a composite image. This was done by ODEC in Oxford to explore racism within its own organisation. One person was asked to silently (or as silently as possible) sculpt the bodies of each person – including their facial expressions – to capture a specific dimension of racism. An alternative is for everyone to participate in constructing a composite image of something. For example, in a Reflect workshop in Pakistan, groups were asked to build sculptures of different social issues – capturing, for example, the feudal order in rural areas.

Silence is in many respects the ultimate in communication. It can be used actively and subversively. It can scream louder than the loudest voice. It can be a complete inversion of the “culture of silence” as in the case of Susan, a Reflect participant in Uganda:

“Susan would fall silent when she wanted to hint at something without saying it. If, when asked about corruption or abuse of power by officials, she had actually spoken she would have been left open to counter-attack or revenge from officials. Silence in such contexts can actually add to the credibility of her unspoken accusation. In other situations Susan’s silences were openly disrespectful, aggressive silences which often succeeded in stopping a shouting adversary in his tracks.”


In reading body language, you sometimes get very clear signals about what a client is thinking. In fact, our thoughts and feelings creep out in our postures and gestures. Here’s an example.

At a meeting, an uncomfortable subject was raised. One of the women at the meeting clearly did not want to discuss the issue. Anyone who could read non-verbal communication could see this in an instant. How? The woman was wearing a turtleneck sweater. As the discussion began, she started to pull her sweater up. Slowly she pulled it over her chin. Soon her turtleneck sweater was covering her mouth. The woman did not realize that her body language was giving away her reluctance to talk about this subject!

Gestures, postures and head movements all reveal what we are thinking and feeling. Here are 10 things to watch for:

People who are angry may lean forward with a tight facial expression and fists clenched.

Someone who is excited will exhibit an open body position with palms up, and mouth and eyes wide open.

People who feel shy will look down, make little eye contact, and may appear to shrink to one side.

Someone who wants to intimidate others may appear threatening by taking an upright stance and standing close to you.

People who stand with their hands on their hips and their elbows turned out are showing a posture of superiority and dominance.

People often try to show their superiority by sitting with their legs in the four-cross position, with the ankle of one leg resting on the knee of the other, and the elbows outstretched and hands clasped behind the neck or head. In male body language, two executives may both unconsciously adopt this posture to maintain their respective positions of authority.

Someone who crosses his arms, hunches his back or clenches his fist, even if he is unaware of what he is doing, can be showing defensiveness, or even be feeling hostile.

People who are interested will prop their heads up with a hand, with the index finger pointing to the cheek.

A buyer may reveal interest by leaning forward, indicating he is about to act on a suggestion.

A customer who leans back is indicating indifference, or a lack of interest.

CONCLUSION

Gesture recognition allows people to interface with computers using gestures of the human body, typically hand movements. In gesture recognition technology, a camera reads the movements of the human body and communicates the data to a computer, which uses the gestures as input to control devices or applications. For example, a person clapping his hands together in front of a camera can produce the sound of cymbals being crashed together when the gesture is fed through a computer.

One way gesture recognition is being used is to help the physically impaired to interact with computers, such as interpreting sign language. The technology also has the potential to change the way users interact with computers by eliminating input devices such as joysticks, mice and keyboards and allowing the unencumbered body to give signals to the computer through gestures such as finger pointing.

In addition to hand and body movement, gesture recognition technology can also be used to read facial expressions and speech (i.e., lip reading), and eye movements.

In summary, gesture recognition offers:

• A way to interact with computers without any mechanical devices.

• An interface that even an illiterate person can use to operate a computer system.

• More effective and user-friendly human-computer interaction, building on concepts from artificial intelligence.

• Potential uses in advertising as a new and emerging technology.
