Google Project Tango: the world of infinite possibilities for developers in 3D

What would happen if you built a 3D sensor into a smartphone?

Google is trying to answer that question with Project Tango, a project from its Advanced Technology and Projects (ATAP) group. Project Tango is a new frontier for Google, one that opens the door to developers willing to push beyond the limits of today's devices.

To understand what a 3D sensor on a smartphone could mean today, you have to use your imagination. Imagine holding a device that can understand, and not just see, its surroundings, reconstructing spaces, corners, angles, and surfaces. A device of this kind could build a virtual three-dimensional model of the world around it, letting it treat the environment in creative ways. A whole dimension beyond the usual 2D camera, the implications could be fairly groundbreaking.

“The goal of Project Tango is to give mobile devices a human-scale understanding of space and motion,” says Johnny Lee, leader of Project Tango.

This technology is similar to that of the Israeli company PrimeSense, recently acquired by Apple. PrimeSense produces a 3D sensor designed for mobile devices such as smartphones and tablets.

Project Tango: the smartphone

Although the project is designed to carry Google into the future, from the outset the company hopes to produce a 3D sensor and put it in the hands of users. A simple prototype already exists, but Google states that it is in no way final, nor available for sale.

The prototype of Project Tango is an Android device with a 5-inch display that makes a huge number of 3D measurements per second, combining that data with the tilt and orientation reported by its internal sensors. A 4 MP camera and motion sensors let the device explore and analyze the surrounding environment. The smartphone's internal hardware is the heart of the data processing, essential for returning an immediate result to the user, while the software interprets the incoming data; its APIs will be the framework on which developers can build their own ideas.
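The fusion described above — raw depth measurements combined with tilt and orientation — can be sketched with basic pinhole-camera math. The following Java snippet is only an illustration of the idea, not the Tango API (which had not been published at the time of writing); the class name, intrinsics (FX, FY, CX, CY), and their values are all hypothetical.

```java
// Illustrative sketch only (not the Tango API): back-project a depth pixel
// into a 3D point in the camera frame, then rotate it into a world-aligned
// frame using an orientation matrix from the device's motion sensors.
public class DepthFusionSketch {
    // Hypothetical pinhole intrinsics for an assumed 4 MP depth camera.
    static final double FX = 1800.0, FY = 1800.0, CX = 1152.0, CY = 864.0;

    // Pinhole back-projection: pixel (u, v) with depth z in meters
    // becomes a 3D point (x, y, z) in the camera's coordinate frame.
    static double[] backProject(double u, double v, double z) {
        return new double[] { (u - CX) * z / FX, (v - CY) * z / FY, z };
    }

    // Rotate a camera-frame point into the world frame using a 3x3
    // rotation matrix derived from the device's tilt/orientation sensors.
    static double[] toWorld(double[][] r, double[] p) {
        double[] w = new double[3];
        for (int i = 0; i < 3; i++)
            w[i] = r[i][0] * p[0] + r[i][1] * p[1] + r[i][2] * p[2];
        return w;
    }
}
```

Repeating this for every depth sample, frame after frame, is what lets a device accumulate a 3D model of a room rather than a flat picture of it.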

“As we walk we use visual cues to navigate and understand the world around us. We note the size and shape of objects and rooms, and we learn their position and layout almost effortlessly. This awareness of space and movement is fundamental to the way we interact with our environment and with others. We are physical beings living in a 3D world. However, our mobile devices assume that the physical world ends at the edge of the screen,” adds Lee.

Google has already built 200 devices and is ready to produce more, but it will steer the project on the basis of ideas coming from interested developers. Why is Google doing this? To tease the imagination of creative people, tickle the skills of developers, and gather new ideas around which it can develop supporting software for the device.

This tactic seems to have worked for Google with Glass, so Tango appears to be getting the same treatment.

Developers can write standard Android applications in Java or C/C++, and can also use the Unity game engine.

Developers intrigued by Project Tango have until March 14 to present their ideas, after which Google will estimate how many devices to produce. To submit a suggestion, you must complete the application form and wait for feedback from Mountain View.

Possibilities: unlimited

The goal of Project Tango, in part, is to create a portable machine-vision tool that lets us map 3D space using a mobile device. Put like that, without any prior knowledge of the subject, it may sound a little abstract, but Google has released a video that explains the capabilities of this Android smartphone.

As an example, Google notes that this technology opens the door to many potential applications, such as taking furniture measurements just by pointing the camera from different angles. Applications for this device could also help the visually impaired find their way, and the project could be a first step toward immersive augmented reality games.
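The furniture-measurement example boils down to simple geometry once the device can attach a depth to each pixel: back-project two pixels on opposite edges of the object into 3D points and take the distance between them. The sketch below is a hypothetical illustration of that idea in Java, not any shipped Tango API; the class name and intrinsics are invented for the example.

```java
// Illustrative sketch (not a shipped Tango API): estimate the width of a
// piece of furniture from depth readings at two pixels on its edges.
public class MeasureSketch {
    // Hypothetical intrinsics for the prototype's depth camera.
    static final double FX = 1800.0, FY = 1800.0, CX = 1152.0, CY = 864.0;

    // Back-project pixel (u, v) with depth z (meters) into a 3D point.
    static double[] point(double u, double v, double z) {
        return new double[] { (u - CX) * z / FX, (v - CY) * z / FY, z };
    }

    // Euclidean distance between two back-projected points, in meters.
    static double measure(double[] a, double[] b) {
        double dx = a[0] - b[0], dy = a[1] - b[1], dz = a[2] - b[2];
        return Math.sqrt(dx * dx + dy * dy + dz * dz);
    }
}
```

Two edge pixels 900 pixels apart, both seen at a depth of two meters, would come out to a width of one meter under these assumed intrinsics.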

To carry out this project, Google has collaborated with researchers from George Washington University and the University of Minnesota, as well as the German company Bosch. In the coming months, ATAP intends to release an SDK for building programs on the new hardware.

About Saroj Kar

Saroj is a Staff Writer at SiliconANGLE covering DevOps, social, mobile and gaming news. If you have a story idea or tip, send it to @SiliconAngle on Twitter.