Developers:
Date of the system's premiere: August 2019
Technology: Cybersecurity, biometric identification
2019: Software release
In mid-August 2019, Google released new machine-learning-based software that lets developers track hand gestures precisely on smartphones. The source code and an end-to-end example scenario are published on GitHub.
Google first presented the hand tracking technology at a computer vision and image recognition conference in June 2019, and soon afterward implemented it in MediaPipe, its cross-platform framework for applied machine learning that handles tasks such as face recognition and object detection.
The technology is built on three AI models working in tandem, with the pipeline centered on the coordinates of 21 hand keypoints. The palm detector, BlazePalm, analyzes a frame and produces a bounding box; the hand landmark model then evaluates the selected region and determines the 21 three-dimensional keypoints of the hand; finally, the gesture recognizer matches the computed keypoint configuration against a set of known gestures.[1]
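The last stage described above can be sketched in a few lines. The landmark indexing below follows MediaPipe's published convention (wrist = 0, fingertips at 4, 8, 12, 16, 20), but the classification logic and gesture names are purely illustrative assumptions, not Google's actual recognizer:

```python
# Sketch of the gesture-recognition stage: map 21 hand keypoints to a
# named gesture. Indices follow MediaPipe's landmark layout; the rules
# and gesture labels here are simplified examples, not the real model.

FINGERTIPS = [8, 12, 16, 20]   # index, middle, ring, pinky tips
PIP_JOINTS = [6, 10, 14, 18]   # the joint two landmarks below each tip

def count_extended_fingers(landmarks):
    """landmarks: list of 21 (x, y) pairs in image coordinates (y grows
    downward). A finger counts as extended when its tip lies above its
    PIP joint, which assumes an upright hand facing the camera."""
    extended = 0
    for tip, pip in zip(FINGERTIPS, PIP_JOINTS):
        if landmarks[tip][1] < landmarks[pip][1]:
            extended += 1
    return extended

def classify_gesture(landmarks):
    """Match the keypoint configuration against a tiny gesture set."""
    n = count_extended_fingers(landmarks)
    if n == 4:
        return "open palm"
    if n == 1:
        return "pointing"
    if n == 0:
        return "fist"
    return "other"

# Synthetic "open palm": every fingertip sits above its PIP joint.
palm = [(0.5, 0.8)] * 21
for tip, pip in zip(FINGERTIPS, PIP_JOINTS):
    palm[tip] = (0.5, 0.2)
    palm[pip] = (0.5, 0.5)
```

A real deployment would feed the landmark model's output into a classifier like this on every frame; the point of the sketch is only that once the 21 points are known, gesture recognition reduces to comparing their relative positions.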
The researchers note that accurately detecting the palm's shape considerably speeds up subsequent gesture recognition by reducing the dimensionality of the data, which makes it possible to run the algorithm on a smartphone rather than in the cloud. Training the algorithm required 30,000 real images with manually annotated coordinates, as well as a high-quality synthetic hand model rendered against different backgrounds and mapped to the corresponding coordinates.
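The dimensionality reduction mentioned above comes from cropping the frame to the detector's bounding box before the landmark model runs. A minimal sketch, with made-up frame and box sizes (the nested list stands in for an image):

```python
# Illustrative only: how a palm bounding box shrinks the data the
# landmark model must process. Frame dimensions and box coordinates
# are invented for the example.

def crop_to_box(frame, box):
    """Return the sub-image inside box = (x0, y0, x1, y1), ends exclusive."""
    x0, y0, x1, y1 = box
    return [row[x0:x1] for row in frame[y0:y1]]

# A 480x640 "frame" of zeros and a hypothetical palm box from the detector.
frame = [[0] * 640 for _ in range(480)]
palm_crop = crop_to_box(frame, (200, 120, 328, 248))  # 128x128 crop

# The landmark model now processes 128*128 = 16,384 pixels instead of
# 480*640 = 307,200: roughly a 19x reduction in input size, cheap enough
# to run on-device rather than in the cloud.
```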
The ability of a computer to perceive the shape and motion of hands can improve user interaction across many areas of technology. Google's researchers hoped that handing this technique to the wider community of researchers and developers would lead to creative uses, stimulate new applications, and open new directions of research.[2]