
Gesture recognition

Gesture recognition is a topic in computer science and language technology with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current focuses in the field include emotion recognition from the face and hand gesture recognition. Users can use simple gestures to control or interact with devices without physically touching them. Many approaches have been made using cameras and computer vision algorithms to interpret sign language. However, the identification and recognition of posture, gait, proxemics, and human behaviors is also the subject of gesture recognition techniques. Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a richer bridge between machines and humans than primitive text user interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.

Gesture recognition enables humans to communicate with machines (human-machine interaction) and interact naturally without any mechanical devices. Using the concept of gesture recognition, it is possible to point a finger at the computer screen so that the cursor moves accordingly. This could make conventional input devices such as mice, keyboards and even touch-screens redundant.

Definition
With respect to human-computer interaction, Kurtenbach and Hulteen define a gesture as follows: "A gesture is a motion of the body that contains information. Waving goodbye is a gesture. Pressing a key on a keyboard is not a gesture because the motion of a finger is neither observed nor significant. All that matters is which key was pressed." In contrast, Harling and Edwards drop the requirement for movement and also count static hand postures as gestures. A distinction can be made between systems in which the sensors required for detection are worn directly on the user's body, and those in which the user is observed by external sensors.

Gesture Recognition Features:

Higher accuracy
Higher stability
Time savings when unlocking a device

The major application areas of gesture recognition in the current scenario are:

Automotive sector
Consumer electronics sector
Transit sector
Gaming sector
To unlock smartphones
Defence
Home automation
Sign language interpretation

Gesture recognition technology has been considered a highly successful technology because it saves time when unlocking a device.

Gesture recognition can be conducted with techniques from computer vision and image processing.

The literature includes ongoing work in the computer vision field on capturing gestures or more general human pose and movements by cameras connected to a computer.

Gesture recognition and pen computing: Pen computing reduces the hardware impact of a system and also increases the range of physical world objects usable for control beyond traditional digital objects like keyboards and mice. Such implementations could enable a new range of hardware that does not require monitors. This idea may lead to the creation of holographic displays. The term gesture recognition has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a graphics tablet, multi-touch gestures, and mouse gesture recognition. This is computer interaction through the drawing of symbols with a pointing device cursor.

Gesture types
In computer interfaces, two types of gestures are distinguished. Online gestures can be regarded as direct manipulations, such as scaling and rotating. In contrast, offline gestures are usually processed after the interaction is finished; e.g., a circle is drawn to activate a context menu.

Offline gestures: Those gestures that are processed after the user interaction with the object. An example is the gesture to activate a menu.
Online gestures: Direct manipulation gestures. They are used to scale or rotate a tangible object.
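
For illustration, the following minimal Python sketch shows one way an application might treat the two gesture types differently: online gestures update the manipulated object on every input event, while offline gestures are classified only once the stroke is finished. The class, thresholds and the returned action names are illustrative and not taken from any particular toolkit.

```python
# Minimal sketch (not from any specific toolkit): dispatching online vs. offline gestures.
# Online gestures update the manipulated object on every input event; offline gestures
# are classified only after the stroke has ended (e.g. a drawn circle opening a menu).
from dataclasses import dataclass, field
from typing import List, Tuple

Point = Tuple[float, float]

def _is_roughly_closed(points: List[Point], tol: float = 20.0) -> bool:
    """Crude check for a closed (circle-like) stroke: start and end points are close."""
    if len(points) < 10:
        return False
    (x0, y0), (xn, yn) = points[0], points[-1]
    return ((x0 - xn) ** 2 + (y0 - yn) ** 2) ** 0.5 < tol

@dataclass
class GestureSession:
    points: List[Point] = field(default_factory=list)

    def on_move(self, p: Point, target: dict) -> None:
        """Online path: apply direct manipulation immediately (here: translation)."""
        self.points.append(p)
        if len(self.points) >= 2:
            (x0, y0), (x1, y1) = self.points[-2], self.points[-1]
            target["x"] += x1 - x0
            target["y"] += y1 - y0

    def on_release(self) -> str:
        """Offline path: classify the finished stroke."""
        return "open_context_menu" if _is_roughly_closed(self.points) else "no_op"

session = GestureSession()
window = {"x": 0.0, "y": 0.0}
for p in [(0.0, 0.0), (5.0, 5.0), (10.0, 10.0)]:
    session.on_move(p, window)
print(window, session.on_release())  # window translated online; stroke too short for a circle
```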
Touchless interface
Touchless user interface is an emerging type of technology related to gesture control. A touchless user interface (TUI) is the process of commanding the computer via body motion and gestures without touching a keyboard, mouse, or screen. For example, Microsoft's Kinect is a touchless game interface; however, products such as the Wii are not considered entirely touchless because they are tethered to controllers. Touchless interfaces, in addition to gesture controls, are becoming widely popular because they provide the ability to interact with devices without physically touching them.

Device-based gesture recognition
Most body-worn or hand-guided systems use acceleration or position sensors integrated into data gloves. The disadvantage of data-glove-based systems is that the user must wear the glove to use the system.

Hand-guided devices, such as the Nintendo Wii controller and the BlueWand, manufactured by BeeCon, can also be used for gesture input. Both devices are held by the user and contain acceleration sensors that detect the movement of the device.

Newer devices such as smartphones and tablet computers mostly rely on touchscreens, which are operated with swipe gestures. In particular, multi-touch screens can detect several independent fingertips at the same time, so that, for example, a window can be enlarged or shrunk by dragging two diagonally placed fingertips apart or together.
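
As an illustration of how two tracked fingertips translate into such a resize operation, the following minimal Python sketch computes a scale factor from the change in fingertip distance; the coordinates and values are made up for the example.

```python
# Minimal sketch: deriving a window scale factor from two tracked fingertips
# on a multi-touch screen. Coordinates are illustrative pixel positions.
import math

def pinch_scale(p1_start, p2_start, p1_now, p2_now):
    """Return the ratio of the current fingertip distance to the starting distance."""
    d_start = math.dist(p1_start, p2_start)
    d_now = math.dist(p1_now, p2_now)
    return d_now / d_start if d_start > 0 else 1.0

# Fingertips moving apart diagonally -> scale factor > 1 (enlarge the window).
print(pinch_scale((100, 100), (200, 200), (80, 80), (220, 220)))  # ~1.4
```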

Camera-based gesture recognition
Systems with external sensors are mostly camera-based. The cameras are used to take pictures of the user. There are systems with a single camera as well as with multiple cameras; newer systems often work with 3D data captured either by time-of-flight cameras or by so-called structured-light cameras. Camera-based techniques rely on 2D and 3D image analysis to detect the user's posture. Camera-based gesture recognition is used, for example, in games for the EyeToy, which can be connected to game consoles. A newer approach is gesture control via stereoscopy; its advantage is that it works without infrared light and therefore also outdoors.

In technical image analysis, there are basically two approaches to be distinguished. In the first, a database of relevant gestures is created, built from a median of more than 1,000 video analyses per gesture. Recorded control gestures are then compared with the database and classified accordingly. This solution is used, for example, by Microsoft with the Xbox in conjunction with the Kinect 3D camera. The analysis can be performed in two-dimensional space using image and video information; in three-dimensional space one speaks of volumetric computation, where bodies are represented, for example, by NURBS or polygons. Calculation on real-time 3D data is currently under development. The disadvantage of this database-driven analysis is that it requires a lot of computing power for the comparison against the database. Alternatively, the software works with genuine skeleton identification, i.e. the body, hands and/or fingers are recognized from the camera data and mapped to predefined gestures via a simplified skeleton model. This solution promises a much greater variety of gestures and higher precision, but is technically much more demanding.
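
As a simplified illustration of the database-comparison approach (a generic sketch, not the actual Kinect pipeline), the Python snippet below resamples a recorded trajectory of tracked points and assigns it to the nearest stored template by mean point-to-point distance.

```python
# Minimal sketch of the database-comparison approach: a recorded trajectory of
# tracked points is resampled and matched against stored gesture templates by
# mean point-to-point distance. Template names and shapes are illustrative only.
import numpy as np

def resample(traj: np.ndarray, n: int = 32) -> np.ndarray:
    """Resample an (m, 2) trajectory to n evenly spaced points."""
    idx = np.linspace(0, len(traj) - 1, n)
    cols = [np.interp(idx, np.arange(len(traj)), traj[:, d]) for d in range(traj.shape[1])]
    return np.stack(cols, axis=1)

def classify(trajectory, templates: dict) -> str:
    """Return the name of the stored template closest to the recorded gesture."""
    sample = resample(np.asarray(trajectory, dtype=float))
    distances = {
        name: np.linalg.norm(sample - resample(np.asarray(tpl, dtype=float)), axis=1).mean()
        for name, tpl in templates.items()
    }
    return min(distances, key=distances.get)

templates = {
    "swipe_right": [(x, 0.0) for x in np.linspace(0, 1, 20)],
    "swipe_up":    [(0.0, y) for y in np.linspace(0, 1, 20)],
}
print(classify([(x, 0.02) for x in np.linspace(0, 1, 15)], templates))  # swipe_right
```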

The goal of research and development in the coming years is to implement gesture recognition in embedded software that is platform- and camera-independent, requires little energy, and can therefore also be used, for example, in mobile phones, tablets or navigation systems.

In 2012, a number of commercial vendors announced that they wanted to bring gesture recognition devices to market that would be significantly better than the devices available at the time (especially the Kinect for the Xbox). For example, Samsung presented its Smart TV at CES 2012 in Las Vegas. Another company is Leap Motion, whose promotional video for The Leap was criticized in the community because some scenes appeared to be staged. In Germany, gesture control is a particular topic in the automotive industry, where especially stable and mobile systems are required, such as those manufactured by gestigon, which also works on an embedded solution. 3D gesture recognition is also popular in the fields of digital signage, media technology, media art and performance. An easy way to use gesture recognition in these areas, for example to control other software, is Kinetic Space. Other manufacturers include Omek, Softkinetic and Myestro Interactive.

Types of touchless technology
There are a number of devices utilizing this type of interface, such as smartphones, laptops, games consoles, and televisions. Although touchless technology is mostly seen in gaming software, interest is now spreading to other fields, including the automotive and healthcare industries. Touchless technology and gesture control will soon be implemented in cars at levels beyond voice recognition; see the BMW 7 Series.

Future of touchless technology
A vast number of companies all over the world are already producing gesture recognition technology, such as:

Intel Corp.
White Paper: Explore Intel's user experience research, which shows how touchless multifactor authentication (MFA) can help healthcare organizations mitigate security risks while improving clinician efficiency, convenience, and patient care. This touchless MFA solution combines facial recognition and device recognition capabilities for two-factor user authentication.
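
As a schematic illustration only (the function and threshold below are hypothetical placeholders, not Intel's actual API), such a touchless MFA scheme boils down to requiring both factors to pass before access is granted.

```python
# Hypothetical sketch of a two-factor touchless authentication decision: access is
# granted only when both the facial-recognition factor and the device-recognition
# factor pass. The predicate names and threshold are placeholders for illustration.
def authenticate(face_match_score: float, device_is_enrolled: bool,
                 face_threshold: float = 0.9) -> bool:
    face_ok = face_match_score >= face_threshold   # factor 1: something you are
    device_ok = device_is_enrolled                  # factor 2: something you have
    return face_ok and device_ok

print(authenticate(0.95, True))   # True: both factors satisfied
print(authenticate(0.95, False))  # False: recognized face, but unknown device
```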

Microsoft Corp. in the U.S.
The aim of the project is to explore the use of touchless interaction within surgical settings, allowing images to be viewed, controlled and manipulated without contact through the use of camera-based gesture recognition technology. In particular, the project seeks to understand the challenges of these environments for the design and deployment of such systems, as well as to articulate the ways in which these technologies may alter surgical practice. While the primary concern here is maintaining conditions of asepsis, the use of these touchless gesture-based technologies offers other potential uses.

Elliptic Labs
Elliptic Labs' software suite delivers gesture and proximity functions by re-using the existing earpiece speaker and microphone, previously used only for audio. Ultrasound signals sent through the air from speakers integrated into smartphones and tablets bounce off a hand, object or head and are recorded by microphones, also integrated in these devices. In this way, Elliptic Labs' technology recognizes hand gestures and uses them to move objects on a screen, similarly to the way bats use echolocation to navigate.
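
The underlying echolocation principle can be illustrated with a short sketch: the distance to a reflecting hand follows from the round-trip time of an ultrasound pulse and the speed of sound. The numbers below are illustrative and not taken from Elliptic Labs' implementation.

```python
# Minimal sketch of the echolocation principle described above: the distance to a
# reflecting hand is estimated from the round-trip time of an ultrasound pulse.
SPEED_OF_SOUND_M_S = 343.0  # speed of sound in air at roughly 20 degrees C

def echo_distance_m(round_trip_time_s: float) -> float:
    """Distance to the reflector; the pulse travels there and back, hence the /2."""
    return round_trip_time_s * SPEED_OF_SOUND_M_S / 2.0

print(echo_distance_m(0.0012))  # ~0.21 m, i.e. a hand about 20 cm from the device
```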

While these companies currently stand at the forefront of touchless technology, many other companies and products are trending as well and may also add value to this new field. Here are some examples:

Tobii Rex: eye-tracking device from Sweden

Airwriting: technology that allows messages and texts to be written in the air

eyeSight: allows for navigation of a screen without physically touching the device

Leap Motion: motion sensor device

Myoelectric Armband: allows for communication with Bluetooth devices

Input devices
The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Kinetic user interfaces (KUIs) are an emerging type of user interface that allows users to interact with computing devices through the motion of objects and bodies. Examples of KUIs include tangible user interfaces and motion-aware games such as the Wii and Microsoft's Kinect, as well as other interactive projects.

Although there is a large amount of research done in image/video based gesture recognition, there is some variation within the tools and environments used between implementations.

Wired gloves. These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. Furthermore, some gloves can detect finger bending with a high degree of accuracy (5-10 degrees), or even provide haptic feedback to the user, which is a simulation of the sense of touch. The first commercially available hand-tracking glove-type device was the DataGlove, a glove-type device which could detect hand position, movement and finger bending. This uses fiber optic cables running down the back of the hand. Light pulses are created and when the fingers are bent, light leaks through small cracks and the loss is registered, giving an approximation of the hand pose.
Depth-aware cameras. Using specialized cameras such as structured light or time-of-flight cameras, one can generate a depth map of what is being seen through the camera at a short range, and use this data to approximate a 3D representation of what is being seen. These can be effective for detection of hand gestures due to their short-range capabilities; a minimal segmentation sketch appears after this list.
Stereo cameras. Using two cameras whose relations to one another are known, a 3D representation can be approximated from the output of the cameras. To get the cameras' relations, one can use a positioning reference such as a lexian-stripe or infrared emitters. In combination with direct motion measurement (6D-Vision), gestures can be detected directly.
Gesture-based controllers. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by software. An example of emerging gesture-based motion capture is skeletal hand tracking, which is being developed for virtual reality and augmented reality applications. This technology is demonstrated by tracking companies uSens and Gestigon, which allow users to interact with their surroundings without controllers.

Another example of this is mouse gesture tracking, where the motion of the mouse is correlated to a symbol being drawn by a person's hand, as with the Wii Remote, the Myo armband or the mForce Wizard wristband, which can study changes in acceleration over time to represent gestures. Devices such as the LG Electronics Magic Wand, the Loop and the Scoop use Hillcrest Labs' Freespace technology, which uses MEMS accelerometers, gyroscopes and other sensors to translate gestures into cursor movement. The software also compensates for human tremor and inadvertent movement. AudioCubes are another example. The sensors of these smart light-emitting cubes can be used to sense hands and fingers as well as other objects nearby, and can be used to process data. Most applications are in music and sound synthesis, but this can be applied to other fields.
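
As an illustration of the tremor-compensation idea (a generic sketch, not Hillcrest Labs' actual Freespace implementation), gyroscope rates can be low-pass filtered with an exponential moving average before being mapped to cursor movement.

```python
# Minimal sketch (illustrative only): smoothing gyroscope rates with an exponential
# moving average to suppress hand tremor before mapping angular velocity to cursor
# movement. Parameter values are arbitrary examples.
class SmoothedPointer:
    def __init__(self, alpha: float = 0.2, gain: float = 8.0):
        self.alpha = alpha        # lower alpha -> stronger smoothing
        self.gain = gain          # pixels of cursor travel per unit angular rate
        self.state = [0.0, 0.0]   # filtered yaw and pitch rates

    def update(self, gyro_yaw_rate: float, gyro_pitch_rate: float):
        """Return (dx, dy) cursor deltas for one sensor sample."""
        self.state[0] += self.alpha * (gyro_yaw_rate - self.state[0])
        self.state[1] += self.alpha * (gyro_pitch_rate - self.state[1])
        return self.gain * self.state[0], self.gain * self.state[1]

pointer = SmoothedPointer()
for yaw, pitch in [(0.5, 0.0), (0.52, 0.01), (0.48, -0.01)]:  # slightly noisy samples
    print(pointer.update(yaw, pitch))
```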

Single camera. A standard 2D camera can be used for gesture recognition where the resources/environment would not be convenient for other forms of image-based recognition. Earlier it was thought that a single camera may not be as effective as stereo or depth-aware cameras, but some companies are challenging this theory. Software-based gesture recognition technology using a standard 2D camera can detect robust hand gestures.
Radar. See Project Soli, revealed at Google I/O 2015 in the ATAP session (starting at 13:30) and in a short introduction video, "Welcome to Project Soli", both available on YouTube.
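
Referring back to the depth-aware camera item above, the following minimal sketch shows why depth maps make short-range hand detection comparatively easy: pixels falling within a near-range depth band can be treated as the hand region. The depth image here is synthetic; a real system would read frames from the sensor's SDK.

```python
# Minimal sketch of depth-based hand segmentation: because the hand is usually the
# closest surface to a front-facing depth sensor, pixels within a near-range depth
# band are treated as the hand region. The depth map below is synthetic.
import numpy as np

depth_mm = np.full((240, 320), 2000, dtype=np.uint16)   # background about 2 m away
depth_mm[80:160, 120:200] = 450                          # a hand-sized blob about 45 cm away

near, far = 300, 700                                     # plausible hand-interaction band (mm)
hand_mask = (depth_mm > near) & (depth_mm < far)

ys, xs = np.nonzero(hand_mask)
print("hand pixels:", hand_mask.sum())
print("hand centroid (x, y):", xs.mean(), ys.mean())     # feed into tracking / classification
```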

Algorithms
Depending on the type of input data, a gesture can be interpreted in different ways. However, most techniques rely on key pointers represented in a 3D coordinate system. Based on the relative motion of these, the gesture can be detected with high accuracy, depending on the quality of the input and the algorithm's approach.
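
As a concrete illustration of the key-pointer idea (a sketch with made-up values, not a production detector), the snippet below thresholds the net displacement of a tracked 3D wrist keypoint to decide whether a horizontal swipe occurred.

```python
# Minimal sketch of gesture detection from tracked key points in a 3D coordinate
# system: the net displacement of a wrist keypoint over a short window is compared
# to a threshold to decide whether a horizontal swipe occurred. Values in metres.
import numpy as np

def detect_swipe(wrist_positions: np.ndarray, min_travel: float = 0.25) -> str:
    """wrist_positions: (n, 3) array of successive (x, y, z) keypoint positions."""
    dx, dy, _ = wrist_positions[-1] - wrist_positions[0]
    if abs(dx) > min_travel and abs(dx) > 2 * abs(dy):   # mostly horizontal motion
        return "swipe_right" if dx > 0 else "swipe_left"
    return "none"

track = np.array([[0.00, 1.20, 0.8], [0.12, 1.21, 0.8], [0.30, 1.19, 0.8]])
print(detect_swipe(track))  # swipe_right
```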
In order to interpret movements of the body, one has to classify them according to common properties and the message the movements may express. For example, in sign language each gesture represents a word or phrase. The taxonomy that seems very appropriate for Human-Computer Interaction has been proposed by Quek in "Toward a Vision-Based Hand Gesture Interface". He presents several interactive gesture systems in order to capture the whole space of the gestures:

Manipulative
Semaphoric
Conversational

Some literature differentiates two approaches to gesture recognition: a 3D-model-based one and an appearance-based one. The former makes use of 3D information about key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. Appearance-based systems, on the other hand, use images or videos for direct interpretation.

3D model-based algorithms
The 3D model approach can use volumetric or skeletal models, or even a combination of the two. Volumetric approaches have been heavily used in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, like NURBS or polygon meshes.

The drawback of this method is that it is very computationally intensive, and systems for real-time analysis are still to be developed. For the moment, a more practical approach is to map simple primitive objects to the person's most important body parts (for example cylinders for the arms and neck, a sphere for the head) and analyse the way these interact with each other. Furthermore, some abstract structures such as super-quadrics and generalised cylinders may be even more suitable for approximating the body parts. An appealing aspect of this approach is that the parameters for these objects are quite simple. To better model the relations between them, constraints and hierarchies between the objects are used.
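
To make the primitive-mapping idea concrete, the sketch below (with illustrative names and dimensions, not a real model) represents body parts as simple parametric solids and uses a parent/child hierarchy to encode the constraints that chain them together.

```python
# Minimal sketch of the primitive-mapping idea described above: body parts are
# approximated by simple parametric solids, and a parent/child hierarchy encodes
# the constraints between them. All names and dimensions are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Primitive:
    name: str
    shape: str                           # "cylinder" or "sphere"
    params: dict                         # e.g. radius/length for a cylinder
    parent: Optional["Primitive"] = None

torso = Primitive("torso", "cylinder", {"radius": 0.15, "length": 0.55})
head = Primitive("head", "sphere", {"radius": 0.10}, parent=torso)
upper_arm = Primitive("right_upper_arm", "cylinder", {"radius": 0.05, "length": 0.30}, parent=torso)
forearm = Primitive("right_forearm", "cylinder", {"radius": 0.04, "length": 0.28}, parent=upper_arm)

# Walking the hierarchy shows how the constraints chain the parts together.
part = forearm
while part:
    print(part.name, part.shape, part.params)
    part = part.parent
```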

Skeletal-based algorithms
Instead of using intensive processing of the 3D models and dealing with a lot of parameters, one can just use a simplified version of joint angle parameters along with segment lengths. This is known as a skeletal representation of the body, where a virtual skeleton of the person is computed and parts of the body are mapped to certain segments. The analysis here is done using the position and orientation of these segments and the relation between each one of them (for example the angle between the joints and the relative position or orientation).
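
As a small worked example of the kind of parameter a skeletal representation analyzes, the sketch below computes the angle at a joint from three keypoints (shoulder, elbow, wrist); the coordinates are made up for illustration.

```python
# Minimal sketch: computing a joint angle from three skeleton keypoints
# (shoulder, elbow, wrist), one of the parameters a skeletal representation uses.
import numpy as np

def joint_angle_deg(a, b, c) -> float:
    """Angle at joint b (in degrees) formed by points a-b-c, each an (x, y, z) triple."""
    a, b, c = (np.asarray(p, dtype=float) for p in (a, b, c))
    u, v = a - b, c - b
    cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

shoulder, elbow, wrist = (0.0, 1.4, 0.0), (0.0, 1.1, 0.0), (0.25, 1.1, 0.0)
print(joint_angle_deg(shoulder, elbow, wrist))  # 90.0 -> forearm at a right angle
```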

Advantages of using skeletal models:

Algorithms are faster because only key parameters are analyzed.
Pattern matching against a template database is possible.
Using key points allows the detection program to focus on the significant parts of the body.

Appearance-based models
These models no longer use a spatial representation of the body; instead they derive the parameters directly from the images or videos using a template database. Some are based on deformable 2D templates of human body parts, particularly hands. Deformable templates are sets of points on the outline of an object, used as interpolation nodes for approximating the object's outline. One of the simplest interpolation functions is linear, which produces an average shape from point sets, point variability parameters and external deformators. These template-based models are mostly used for hand tracking, but could also be of use for simple gesture classification.
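
To illustrate the linear case described above (a sketch with toy outlines, not any particular tracker), the snippet below computes an average shape from a set of aligned template point sets and then linearly blends it toward one example outline.

```python
# Minimal sketch of the linear deformable-template idea: template outlines are
# stored as aligned point sets, an average shape is computed, and a deformation
# parameter blends the average toward one example outline. Outlines are toy data.
import numpy as np

templates = np.array([
    [[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]],   # outline 1 (e.g. an open hand)
    [[0.1, 0.0], [1.1, 0.1], [1.0, 1.1], [0.0, 0.9]],   # outline 2, slightly deformed
])

average_shape = templates.mean(axis=0)   # mean outline over the template set

def deform(average: np.ndarray, example: np.ndarray, weight: float) -> np.ndarray:
    """Linearly blend the average outline toward one example (weight in [0, 1])."""
    return (1.0 - weight) * average + weight * example

print(deform(average_shape, templates[1], 0.5))
```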

A second approach to gesture detection using appearance-based models uses image sequences as gesture templates. Parameters for this method are either the images themselves or certain features derived from them. Most of the time, only one (monoscopic) or two (stereoscopic) views are used.

Challenges
There are many challenges associated with the accuracy and usefulness of gesture recognition software. For image-based gesture recognition there are limitations on the equipment used and image noise. Images or video may not be under consistent lighting, or in the same location. Items in the background or distinct features of the users may make recognition more difficult.

The variety of implementations for image-based gesture recognition may also cause issues for the viability of the technology for general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial or full) occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy.

In order to capture human gestures by visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition or for capturing movements of the head, facial expressions or gaze direction.

"Gorilla arm"
"Gorilla arm" was a side-effect of vertically oriented touch-screen or light-pen use. In periods of prolonged use, users' arms began to feel fatigue and/or discomfort. This effect contributed to the decline of touch-screen input despite initial popularity in the 1980s.

In order to measure arm fatigue and the gorilla arm side effect, researchers developed a technique called Consumed Endurance.

Source from Wikipedia
