Huawei: the next disruptive technology may take the form of touch-less, gesture-based interfaces for smartphones.

WorldWide Tech & Science. Francisco De Jesús.


Various manufacturers are looking at different UIs and experiences beyond touch, and Huawei has announced that it is investing heavily in research and development to come up with the next disruptive technology, which may take the form of touch-less, gesture-based interfaces.

In 2011, Huawei invested $3.75 billion in R&D and hired 11,000 new employees. Part of that research will go toward creating new gesture-based applications for its mobile products. This year, Huawei is expected to expand its investment by 20%, to about $4.5 billion.

The end result of all this spending? Three-dimensional UIs and interactions. According to John Rose, general manager for Huawei’s North America R&D: “So imagine instead of touching a smartphone, you can actually have a three-dimensional interaction with it.”

Huawei says that these gestures and capabilities would be introduced gradually, with more complex UIs and interactions arriving over time. For now, tablets are being targeted first.

How does this technology work?

Though current touchscreen technology, such as that made by Synaptics, allows for up to ten points of input, meaning the screen can recognize up to ten fingers at a time, today’s phones and tablets support gestures with only a limited number of fingers. Part of this limitation comes down to the size of the device and its display, despite the growing trend toward larger smartphone screens. Removing the screen from the equation would mean that UIs no longer have to be confined to it: users could draw and gesture what they want to do in the air in front of the device, and their interactions would be captured by the phone’s front-facing camera.
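As a rough illustration of that idea, the sketch below uses OpenCV to watch a front-facing camera feed and report left or right swipes from frame-to-frame motion. The camera index, thresholds, and gesture logic here are illustrative assumptions, not anything Huawei has described.

```python
# A minimal sketch of camera-based swipe detection using frame differencing.
# Assumptions (not from Huawei): OpenCV is installed, camera index 0 is the
# front-facing camera, and the motion thresholds are purely illustrative.
import cv2

cap = cv2.VideoCapture(0)   # assumed front-facing camera
prev_gray = None            # previous grayscale frame
prev_cx = None              # previous horizontal centroid of the moving region

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.GaussianBlur(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)

    if prev_gray is not None:
        # Pixels that changed between frames roughly outline the moving hand.
        diff = cv2.absdiff(prev_gray, gray)
        _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
        m = cv2.moments(mask)
        if m["m00"] > 1e5:  # enough motion to treat it as a hand
            cx = int(m["m10"] / m["m00"])  # x-coordinate of the motion centroid
            if prev_cx is not None and abs(cx - prev_cx) > 40:
                print("swipe right" if cx > prev_cx else "swipe left")
            prev_cx = cx
        else:
            prev_cx = None

    prev_gray = gray
    cv2.imshow("camera", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```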

This means that users have more ‘surface area’ for their commands, and new gestures become possible on new devices. Pinch, zoom, rotate, shift, draw, doodle: all of that can be captured with the phone’s camera, and users are limited only by the camera’s field of view.
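To make the pinch and zoom case concrete, here is a small, hypothetical mapping from two tracked hand positions to a zoom factor. In practice the coordinate pairs would come from a camera-based hand tracker; nothing in this sketch reflects Huawei’s actual implementation.

```python
# An illustrative mapping from two tracked hand/fingertip positions to a
# pinch-zoom factor. The coordinates would come from a camera-based hand
# tracker (hypothetical here), not from the touchscreen.
from math import hypot

def pinch_scale(prev_points, curr_points):
    """Return >1.0 when the two points spread apart (zoom in), <1.0 when they pinch together."""
    (x1, y1), (x2, y2) = prev_points
    (u1, v1), (u2, v2) = curr_points
    prev_dist = hypot(x2 - x1, y2 - y1)
    curr_dist = hypot(u2 - u1, v2 - v1)
    return curr_dist / prev_dist if prev_dist else 1.0

# Two hands moving apart in the camera's field of view reads as a 1.5x zoom.
print(pinch_scale([(100, 200), (220, 200)], [(80, 200), (260, 200)]))  # 1.5
```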

At its most basic, touch-less gesturing would require a camera or some other sensor as the phone’s input, and we’ve already seen early incarnations of gesture-based applications. Microsoft’s Kinect, which lets users control their Xbox 360 gaming experience through a camera accessory, is one such example. On the mobile side, Sony has announced that the Xperia Sola will offer a primitive form of touch-less interaction, and Pantech already has the Vega Android smartphone on the market, which allows for gestures. Additionally, in a recent patent application, Sony proposes placing a camera behind the phone’s display to capture gestures more easily and accurately.