Ocean-going ships typically use depth-sensing techniques to locate underwater objects and avoid running into them, which includes gauging the distance to the sea floor. The principle involves measuring the time a burst of sound directed into the water takes to return after reflecting off an object. This time of flight gives a measure of the object's distance from the sound source, because the speed of sound in water is fairly constant, varying only with the water's density and temperature.
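As a rough illustration of this echo-sounding arithmetic, here is a minimal sketch; the nominal 1,500 m/s figure and the helper function are assumptions for the example, not taken from any particular system.

```python
# Minimal sketch of the echo-sounding calculation described above.
# 1,500 m/s is a typical nominal speed of sound in seawater; real
# systems correct this for density and temperature.

SPEED_OF_SOUND_WATER_M_S = 1500.0  # nominal; varies with density/temperature

def echo_distance_m(round_trip_time_s: float,
                    speed_m_s: float = SPEED_OF_SOUND_WATER_M_S) -> float:
    """Distance to the reflecting object from a round-trip echo time.

    The pulse travels to the object and back, so the one-way
    distance is half the total path length.
    """
    return speed_m_s * round_trip_time_s / 2.0

# Example: an echo returning after 0.2 s implies a sea floor ~150 m down.
print(echo_distance_m(0.2))  # -> 150.0
```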
With the advent of piezoelectric devices, it became possible to use ultrasonic frequencies to measure distance, applying the same time-of-flight principle. As electronic components improved, engineers applied the technique to light waves in place of sound, which yielded greater measuring accuracy as well as the ability to measure smaller distances.
Smartphone manufacturers are using depth-sensing techniques to enable facial detection, recognition, and authentication in their devices. However, this technology has far more potential, as Qualcomm is demonstrating. In collaboration with Himax Technologies, Qualcomm is promoting its Spectra image signal processor technology along with a 3-D depth-sensing camera module for Android systems. Very soon, we will be witnessing the emergence of a depth-sensor ecosystem, complete with firmware and apps.
Himax has expertise in module integration, drivers, sensing, and wafer optics. Qualcomm has combined its Spectra imaging technology with the technology from Himax to create the SLiM depth sensor, suitable for mobile devices. The sensor has ample applications in surveillance, automobiles, virtual reality, and augmented reality. Developing the 3-D sensing solution took more than four years.
The camera module from Qualcomm senses depth in real time, simultaneously generating a 3-D point cloud of data in both indoor and outdoor conditions. Qualcomm expects smartphone manufacturers to begin incorporating the computer-vision camera module in their products in the first quarter of 2018.
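Conceptually, each depth pixel can be back-projected through the camera's intrinsic parameters into 3-D space to form such a point cloud. The following is a minimal sketch with made-up intrinsics and depth values; Qualcomm's actual pipeline is not public.

```python
import numpy as np

# Back-project a depth map into a 3-D point cloud using the standard
# pinhole-camera model. The intrinsics (fx, fy, cx, cy) are illustrative
# values, not those of any real sensor.

def depth_to_point_cloud(depth_m: np.ndarray,
                         fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Return an (N, 3) array of XYZ points, one per valid depth pixel."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading

# Example: a tiny 2x2 depth map with one missing reading.
depth = np.array([[1.0, 1.5], [0.0, 2.0]])  # 0.0 = no return
cloud = depth_to_point_cloud(depth, fx=500.0, fy=500.0, cx=1.0, cy=1.0)
print(cloud.shape)  # (3, 3): three valid points
```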
Using infrared light, the camera module applies the well-known time-of-flight technique, based on the speed of light, to resolve the distance to an object. The camera projects dots of infrared light onto the object, creating a cloud of points, and the sensor reads the time of flight for each, thereby gathering depth information.
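The arithmetic is the same as in the sonar case, only with the speed of light, which makes the round-trip times extremely short. A hedged sketch follows; note that real ToF sensors typically measure the phase shift of a modulated signal rather than timing a raw pulse.

```python
SPEED_OF_LIGHT_M_S = 299_792_458.0

def tof_depth_m(round_trip_time_s: float) -> float:
    """One-way distance from a light pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_time_s / 2.0

# A target ~1.5 m away returns light after only ~10 nanoseconds, so
# per-pixel timing must resolve picoseconds for mm-level depth accuracy.
print(tof_depth_m(10e-9))  # -> ~1.499 m
```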
Depth-sensing approaches are gradually moving into mobile handsets and head-mounted displays. Although mobile platforms may not be able to supply adequate power for room-scale 3-D sensing, they can certainly manage the power required by the sensor and the image signal processor to run the complex software that translates the point cloud into a useful, interactive input.
The sensor packages use active laser illumination in the sub-half-watt range to provide high-quality point clouds at short distances with structured-light solutions, for applications such as facial and gesture recognition. For longer distances, however, such as room-scale sensing over a range of 2-10 meters, the packages need high-power lasers in the 5-W range.
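One way to see why the jump in power is so large: under a simple inverse-square assumption, the emitter power needed for a fixed return signal grows with the square of the range. The reference range and scaling law in the sketch below are illustrative assumptions, not figures from Qualcomm or Himax.

```python
# Back-of-the-envelope sketch: assuming the received signal per pixel
# falls off roughly as 1/d^2 for flood illumination of an extended
# target, emitter power must scale as d^2 to hold signal level fixed.
# The 0.5 W at 3 m reference point is an assumption for illustration.

def required_power_w(range_m: float,
                     ref_power_w: float = 0.5,
                     ref_range_m: float = 3.0) -> float:
    """Emitter power needed at range_m, scaled from a reference point."""
    return ref_power_w * (range_m / ref_range_m) ** 2

for d in (3.0, 10.0):
    print(f"{d:4.1f} m -> {required_power_w(d):.1f} W")
# 3.0 m -> 0.5 W ; 10.0 m -> ~5.6 W, roughly in line with the
# sub-half-watt and ~5-W figures quoted above.
```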
Because the power requirements at longer ranges exceed what an average mobile phone can supply, designers are forced to adopt purely camera-based approaches for applications involving longer-distance image recognition.