Machine Vision

Bring Your Avatar to Life

3D Sensors for Interactive Human-Machine Interfaces

06.12.2010

Picture an important presentation: how convenient it would be to operate the notebook interactively and contact-free, using nothing but pre-defined gestures. What sounds like a futuristic vision is already possible today. 3D image sensors not only detect the user's hands; if required, they also capture human body poses. This capability enables completely new interfaces.

3D image sensors from PMD Technologies deliver a 2D gray-value image as well as a 3D depth map. With frame rates of up to 100 Hz, the time-of-flight (TOF) sensors detect human body poses. The latest generation of these full-body tracking systems was presented at the PMD Vision Day on November 18, 2010 in Munich, together with the Israeli software company Omek Interactive. It allows the gamer to move freely in front of the PMD camera; controllers or peripheral devices are no longer needed. The PMD camera provides a depth map of the gamer in which every pixel represents a distance value. The tracking software from Omek Interactive uses the camera data to identify different body parts and movements in real time. The gamer's movements are mapped seamlessly onto the virtual character: whatever he does, his avatar does in the same instant.
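
To make the idea concrete, the following minimal Python sketch shows how such a depth map, in which every pixel is a distance value, could be used to isolate a player standing in front of the camera. The frame acquisition itself (the actual PMD SDK calls) is not shown, and the array sizes and threshold values are illustrative assumptions, not figures from the article.

import numpy as np

def segment_player(depth, min_dist=0.5, max_dist=3.0):
    """Boolean mask of pixels that plausibly belong to the player.

    depth is a 2D array of per-pixel distances in metres, as delivered
    by a TOF depth map; min/max define the interaction volume.
    """
    return (depth >= min_dist) & (depth <= max_dist)

# Synthetic example: a 'player' 1.5 m away in front of a wall at 5 m.
depth = np.full((120, 160), 5.0)   # background wall
depth[30:100, 60:100] = 1.5        # player silhouette
mask = segment_player(depth)
print("player pixels:", int(mask.sum()))

A tracker like Omek's then has to assign the masked pixels to individual body parts; the thresholding above only yields the silhouette that such software starts from.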

Interaction with the Hands
However, detecting the full human body pose is not the only way to enable touchless interaction. For a wide range of applications the interaction does not have to be performed with the whole body but with the hands alone. Examples are menu navigation on mobile phones, touchless interaction with a notebook during presentations, or in general the use of interactive control displays where touching the display is difficult due to dirt or even impossible due to the risk of contamination.
The proof of concept for integrating the technology into devices as small as laptops or handhelds has been demonstrated with the PMD[vision] CamBoard. This prototype shows that, using 2D/3D vision, the applications outlined above can be addressed with a webcam-sized system that is fully USB-powered.

The Method's Advantages
Detecting hand gestures with the gray-value and depth data available from the PMD[vision] CamBoard offers several robustness advantages over purely 2D image-based approaches. The depth measurement is independent of ambient illumination, so the 3D data is available in darkness as well as outdoors. In addition, a rough segmentation of the hand can be accomplished from the depth data alone, so no texture information is needed to identify the hand. Hands can therefore be detected even when covered by a glove or across strongly varying skin tones.
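
As an illustration of how far the depth data alone carries, the following sketch segments a hand as the nearest connected region inside an interaction band. It deliberately touches no gray values, which is why gloves or skin tone play no role. SciPy's connected-component labelling is used purely for convenience, and all distances are assumed values.

import numpy as np
from scipy import ndimage

def segment_hand(depth, near=0.2, far=0.8):
    """Boolean mask of the nearest connected blob within [near, far] metres."""
    band = (depth >= near) & (depth <= far)
    labels, n = ndimage.label(band)        # connected components in the band
    if n == 0:
        return np.zeros_like(band)
    # choose the component with the smallest mean distance: the nearest object
    means = ndimage.mean(depth, labels, index=range(1, n + 1))
    return labels == (int(np.argmin(means)) + 1)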

Detecting the Absence
Detecting the hand, however, is only one part of the solution. The absence of a hand performing a certain gesture has to be detected reliably, too; otherwise unintended interactions may occur, because a hand gesture would be recognized although the user did not intend to interact. Since unintended triggering is likely to harm user acceptance severely, this issue must be addressed when designing a human-machine interface. PMD-based 2D/3D sensors can address it as well: if an object has been detected as a potential hand that should trigger an interaction, analyzing metrical features such as the length of the fingers or the width of the palm provides a countercheck of whether the detected object really is a hand.
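
The countercheck itself can be expressed in a few lines. The sketch below assumes that the finger lengths and the palm width have already been measured in metres from the per-pixel 3D coordinates; the accepted ranges are plausible placeholder values, not figures from the article.

# Illustrative ranges for an adult hand, in metres (assumed values).
FINGER_RANGE = (0.04, 0.12)
PALM_RANGE = (0.06, 0.12)

def is_plausible_hand(finger_lengths, palm_width):
    """Reject objects that match a hand's shape but not its dimensions."""
    lo, hi = FINGER_RANGE
    if not all(lo <= length <= hi for length in finger_lengths):
        return False
    return PALM_RANGE[0] <= palm_width <= PALM_RANGE[1]

print(is_plausible_hand([0.07, 0.08, 0.08, 0.07], 0.09))  # True
print(is_plausible_hand([0.20, 0.22, 0.21, 0.20], 0.25))  # False: e.g. a poster of a hand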

Realization in Three Steps
An exemplary human-machine interface is described below. A hand with its palm facing the camera and four outstretched fingers shall be detected as the interacting hand. If this hand gesture is detected, the system shall enable certain interactions (to be defined later); if no such hand is detected, the system shall disable all interactions.
The processing pipeline is composed of the following steps:

  • Segmentation of the hand: Using two distance thresholds that define a minimum and a maximum interaction distance from the camera, and additionally exploiting certain anatomical features of the hand, the hand is segmented.
  • Validation of shape features: By analyzing the shape of the segmented hand, the fingertips, the contour of the palm and the center of the palm are determined. If four fingertips are not detected, the hand is not in a valid interaction pose.
  • Validation of metrical dimensions: Using the fingertip positions and the palm contour from the previous step, the positions of the root finger joints are determined. Exploiting the unique feature of PMD sensors that 3D coordinates are available for every pixel, the 3D length of each detected finger (the distance from the root finger joint to the corresponding fingertip) is computed, and it is checked whether this length lies in a reasonable range for a human finger.

Summarizing: steps 1 and 2 verify that the shape of the detected object matches that of an interacting hand; step 3 verifies that its metrical dimensions do as well. A minimal code sketch of the length check in step 3 is given below.
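
The following sketch illustrates steps 2 and 3, assuming the camera delivers a metric (x, y, z) coordinate for every pixel, as PMD sensors do, and that the fingertip and root-joint pixel positions have already been found by the shape analysis of step 2. The coordinate array and the accepted length range are assumptions for the demo.

import numpy as np

def finger_length_3d(coords, tip_px, root_px):
    """3D Euclidean distance between fingertip and root-joint pixels.

    coords has shape (H, W, 3) holding metric (x, y, z) per pixel.
    """
    return float(np.linalg.norm(coords[tip_px] - coords[root_px]))

def pose_is_valid(coords, tips, roots, lo=0.04, hi=0.12):
    """Step 2: exactly four fingertips; step 3: each finger of human length."""
    if len(tips) != 4:
        return False
    return all(lo <= finger_length_3d(coords, t, r) <= hi
               for t, r in zip(tips, roots))

# Synthetic demo: a flat 'hand' 0.5 m from the camera.
H, W = 120, 160
xs, ys = np.meshgrid(np.linspace(-0.2, 0.2, W), np.linspace(-0.15, 0.15, H))
coords = np.dstack([xs, ys, np.full((H, W), 0.5)])
tips = [(20, 70), (18, 80), (20, 90), (25, 100)]
roots = [(50, 72), (50, 80), (50, 88), (52, 96)]
print(pose_is_valid(coords, tips, roots))  # True for these positions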

Setting Gestures
Based on the capability to reliably detect the presence or absence of an interacting hand, an HMI providing mouse cursor movement, click and drag-and-drop functionality can be designed in a straightforward manner (a sketch of the interaction logic follows the list):

  • Mouse cursor movement: The field of view of the PMD[vision] CamBoard is partitioned into 3 × 3 segments. Depending on the segment in which the interacting hand is detected, a certain mouse movement is triggered; the central segment triggers no movement.
  • Switching between interaction modes: The HMI distinguishes two interaction modes, a click mode and a drag-and-drop mode. Holding the hand still in the central segment of the field of view switches from click mode to drag-and-drop mode or vice versa.
  • Click: If the HMI is in click mode and the interacting hand is closed and opened, a left mouse click is triggered.
  • Drag-and-drop: If the HMI is in drag-and-drop mode, the first closing and opening of the hand presses the left mouse button; the second closing and opening releases it. This effectively emulates a drag-and-drop interaction.
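
The sketch below models this interaction logic. It assumes that for every frame the 3 × 3 segment containing the hand and an open/closed flag are already available from the detection stage; the mouse actions are only printed, since driving the real cursor would need an OS-specific library, and the dwell timer for mode switching is reduced to an explicit call.

CENTER = (1, 1)  # central segment of the 3 x 3 grid: no cursor movement

class GestureHMI:
    def __init__(self):
        self.mode = "click"      # or "drag"
        self.dragging = False
        self.prev_open = True

    def toggle_mode(self):
        # in the real HMI, triggered by holding the hand still in CENTER
        self.mode = "drag" if self.mode == "click" else "click"

    def on_frame(self, segment, hand_open):
        if segment != CENTER:                      # cursor movement
            dy, dx = segment[0] - 1, segment[1] - 1
            print(f"move cursor by ({dx}, {dy})")
        if not self.prev_open and hand_open:       # hand closed, then opened
            if self.mode == "click":
                print("left click")
            elif not self.dragging:
                print("press left button")
                self.dragging = True
            else:
                print("release left button")
                self.dragging = False
        self.prev_open = hand_open

hmi = GestureHMI()
hmi.on_frame((0, 2), True)   # upper-right segment: cursor moves
hmi.on_frame((1, 1), False)  # hand closes in the centre...
hmi.on_frame((1, 1), True)   # ...and opens again: left click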

A 2D/3D system the size of a webcam detects these gestures with PMD sensors. The small form factor and the power supply over USB, without any additional cables, make the technology ready for the consumer market. While similar functionality was already realizable more than a year ago (see Minority Report - Futuristic Interface Technology by 3D Image Processing, INSPECT 6-7/2009), the PMD[vision] CamCube required at the time was far larger and not as easy to use. As the consumer industry is known to be a driving force for faster and lower-cost solutions, it will be exciting to observe how this development changes the cooperation between human and robot in industry.

Contact

PMD Technologies AG

Am Eichenhang 50
57076 Siegen
Germany

+49 (0) 271 238538 800
+49 (0) 271 238538 809
