Microsoft unveils gesture tracking technology: future VR games may no longer need controllers

Beyond the shortcomings of gesture control, one of the most awkward things about virtual reality is that you cannot see your own hands. When you put on a VR headset, you find yourself in a world that looks strikingly real and feel as if you are truly there, yet when you reach out to grab something in front of you, nothing is there, and the illusion is instantly broken.

A Microsoft development team hopes its Handpose gesture tracking technology will help people interact more naturally with virtual environments. The ambition is not limited to VR or AR; it extends to how humans interact with computers and other connected devices in general.

David Sweeney, a design technologist at Microsoft Research in Cambridge, England, told reporters: "Our experience of reality begins with interaction with the physical world, and our hands are the main way we interact. It is a habit so ingrained that we don't even think about it; these movements are pure human intuition."

The Handpose project began in 2014, and its computer-vision work was shown publicly earlier this year. Because the project is still in the research phase, Microsoft invited reporters to experience it on site.

As the reporter moved his hands in the real world, he could see synchronized hands on the screen in front of him and could grab virtual objects created in the Unity engine with real-world grasping motions. Sweeney explains: "This project maps both hands by capturing depth information from the real world." In that respect it is similar to Leap Motion.
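To make the grabbing interaction concrete, here is a purely hypothetical sketch, in Python rather than the project's actual Unity code, of how a grab might be detected once fingertip positions are tracked: an object counts as grabbed when the thumb and index fingertips both sit near its surface and pinch toward each other. The function name and thresholds are assumptions for illustration.

```python
# Hypothetical grab detection, not Handpose's actual logic.
import numpy as np

def is_grabbing(thumb_tip, index_tip, obj_center, obj_radius,
                surface_margin=0.02, pinch_threshold=0.03):
    """All positions are 3-vectors in metres; thresholds are assumed values."""
    # Both fingertips must be close to the object's surface...
    near_obj = (np.linalg.norm(thumb_tip - obj_center) < obj_radius + surface_margin
                and np.linalg.norm(index_tip - obj_center) < obj_radius + surface_margin)
    # ...and pinched toward each other.
    pinching = np.linalg.norm(thumb_tip - index_tip) < pinch_threshold
    return near_obj and pinching

# Example: fingertips pinched on either side of a 5 cm ball at the origin.
print(is_grabbing(np.array([0.05, 0.0, 0.0]),
                  np.array([0.04, 0.01, 0.0]),
                  np.zeros(3), 0.05))  # True
```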

The Kinect sensor captures depth information, and the Handpose software converts that information into a cloud of data points, from which it reconstructs a mesh model of the hand (several models are available to accommodate hands of different sizes). Essentially, each data point is positioned on the surface of the hand model, so the model can reproduce exactly the same movement as your real hand. Machine learning is then used to translate the captured real-world motion into the simulated motion.
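The article does not publish Handpose's algorithm, but the fitting step described above can be illustrated with a minimal sketch: rigidly aligning points on a hand-model surface to an observed depth point cloud with an ICP-style loop. The real system fits an articulated hand mesh with many joint parameters and relies on machine learning; this toy version, which assumes only NumPy and SciPy, shows the basic alignment idea.

```python
# ICP-style rigid alignment of model points to a depth point cloud.
# A minimal sketch of model fitting, not Microsoft's actual method.
import numpy as np
from scipy.spatial import cKDTree

def fit_model_to_depth(model_pts, depth_pts, iters=20):
    """Align model_pts (N,3) to depth_pts (M,3); returns the moved model points."""
    pts = model_pts.copy()
    tree = cKDTree(depth_pts)  # fast nearest-neighbour lookup in the depth cloud
    for _ in range(iters):
        # Match every model point to its nearest observed depth point.
        _, idx = tree.query(pts)
        target = depth_pts[idx]
        # Kabsch algorithm: the optimal rotation and translation for the matches.
        mu_p, mu_t = pts.mean(axis=0), target.mean(axis=0)
        H = (pts - mu_p).T @ (target - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        pts = (pts - mu_p) @ R.T + mu_t
    return pts
```

An articulated fit would optimize joint angles as well as the global pose, which is where the different hand-size models and the learned components come in.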

When the reporter tried the technology, it did make some serious errors, for example when more than one person stood in front of the sensor or when the reporter brought his two hands close together. In normal use, however, every detail of the hand was captured and mapped onto the screen. This points to the central difficulty of gesture tracking: our hands consist of many movable parts, and every small movement has to be rendered as natural motion.

This technology makes VR more immersive. In a demo video, the reporter wears an Oculus Rift headset and uses Handpose to poke a virtual rabbit and squash it like clay, and to press a set of virtual buttons that emulate controller keys.

Compared with haptic controllers such as the Oculus Touch or the HTC Vive wands, the advantage of using your own hands is that every movement is intuitive; even people who have never touched virtual reality can get started easily.

Sweeney said: "Human hands have evolved over millions of years and are extraordinarily subtle. It would be foolish to replace them with a battery-powered device."

Interestingly, Sweeney believes that haptic feedback, the sensation of feeling what you touch, is not needed when you interact with virtual reality, even when the scene contains physically simulated objects like the rabbit in the demo video. Sweeney said: "Psychologically, I can 'perceive' the physical interaction through the virtual interaction alone." It sounds absurd, but the reporter's hands-on experience bore it out.

The Handpose team is proud of the project's progress, speed, and efficiency. The software is designed to use minimal computing resources and runs entirely on the CPU, leaving the graphics card free to render the rich virtual world.

Sweeney could not say when the project will be opened to the public, but he hinted that the software will not run only on VR or AR equipment: it could act as a "low-power controller" that lets your hands interact with any IoT device. Imagine turning the lights on and off with a single finger, perhaps in combination with voice recognition.
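As a purely hypothetical illustration of that low-power controller idea, the sketch below maps a recognized one-finger gesture to a light toggle. The gesture labels and the SmartLight class are invented for the example and do not correspond to any real API.

```python
# Hypothetical mapping from recognized gestures to an IoT action.
class SmartLight:
    """Stand-in for a real connected light; invented for this example."""
    def __init__(self):
        self.on = False

    def toggle(self):
        self.on = not self.on
        print("light on" if self.on else "light off")

def handle_gesture(gesture: str, light: SmartLight) -> None:
    # A real system would debounce and track gesture state over time.
    if gesture == "index_finger_tap":
        light.toggle()

light = SmartLight()
for g in ["open_palm", "index_finger_tap", "fist", "index_finger_tap"]:
    handle_gesture(g, light)  # prints "light on" then "light off"
```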