
Imaging the Hidden Road Ahead


In confocal non-line-of-sight (NLOS) imaging, laser light is bounced off a wall around a corner or obstacle. Scattered light returns to the detector, and a 3-D image is reconstructed by a computationally intensive algorithm. [Image: Stanford Computational Imaging Lab]

Lidar, a key technology for autonomous vehicles (AVs), measures direct reflections from objects to determine their distance and form an image of the road ahead (see “Lidar for self-driving cars,” OPN, January 2018). But lidar can only image objects in its line of sight. For added safety and efficiency, it would be useful also to see what lies just around the next corner.

Researchers from Stanford University have now published a computationally efficient way to do just that (Nature, doi: 10.1038/nature25489). The research team believes that the work, which uses confocal non-line-of-sight (NLOS) imaging, could find application not only in AVs but also in robotic vision, remote sensing, defense and medical imaging.

Computational demands

In NLOS imaging, laser light is bounced off a wall around a corner or obstacle to illuminate a hidden object. Photons scattered by the object come back via the reverse route and are recorded. In contrast to lidar, however, the reflected photon recorded in an NLOS scheme might have taken any of an infinite number of paths.

That makes determining where the reflections are coming from and reconstructing a 3-D image computationally demanding, both in memory and in processing power. In addition, the flux of scattered light that returns to the detector is low. This means that acquisition times can be long, even in dark environments, and high-power lasers are required to overcome noise from ambient light.

Confocal scheme

The Stanford team used a confocal technique that, instead of illuminating and imaging pairs of points on the wall, illuminates and images the same point. The result is a photon-count histogram with peaks at two times: the time of the direct reflection of the laser pulse from the wall, and the slightly later time at which photons returning from the hidden object bounce off the wall on their way back to the detector. The difference between the two times gives the travel time for light between the hidden object and the wall.
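The travel-time arithmetic above can be sketched in a few lines. This is a minimal illustration, not code from the paper; the peak times and function name are hypothetical.

```python
# Convert the delay between the two histogram peaks into a distance.
C = 299_792_458.0  # speed of light, m/s


def wall_to_object_distance(t_direct_s, t_return_s):
    """Distance between the illuminated wall point and the hidden object.

    t_direct_s: arrival time of the direct wall reflection (seconds)
    t_return_s: arrival time of photons scattered back by the hidden object
    The extra delay covers a round trip wall -> object -> wall,
    so halving it gives the one-way distance.
    """
    return C * (t_return_s - t_direct_s) / 2.0


# Example: a 2 ns extra delay corresponds to roughly 0.3 m
d = wall_to_object_distance(5.0e-9, 7.0e-9)
print(round(d, 3))
```

Repeating this measurement for every scanned wall point yields the dense travel-time data the reconstruction algorithm operates on.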

By scanning and performing the same measurement for a dense array of points across the area of interest, the researchers amass similar travel-time data for each scanned point. This enables them to derive a closed-form solution, which they call a light-cone transform, to reconstruct the image. The algorithm works by resampling the data along the time axis and performing a 3-D convolution operation with an inverse filter in the Fourier domain. Resampling the convolved data along the depth dimension recovers an image of the hidden object.
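The Fourier-domain inverse-filtering step at the heart of the pipeline can be illustrated with a generic 3-D deconvolution. This is a sketch under assumptions, not the paper's light-cone transform: the time- and depth-resampling steps are omitted, `psf` is a stand-in blur kernel rather than the light-cone kernel, and a Wiener-regularized inverse filter is used as a standard way to keep the inversion stable in the presence of noise.

```python
import numpy as np


def fourier_inverse_filter(data, kernel, snr=100.0):
    """Deconvolve 3-D `data` by `kernel` with a Wiener-regularized inverse filter."""
    D = np.fft.fftn(data)
    K = np.fft.fftn(kernel, s=data.shape)
    # conj(K) / (|K|^2 + 1/snr) approximates 1/K while damping
    # frequencies where the kernel response is near zero.
    W = np.conj(K) / (np.abs(K) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifftn(D * W))


# Toy usage: blur a single voxel with a small box kernel, then recover it.
vol = np.zeros((16, 16, 16))
vol[8, 8, 8] = 1.0
psf = np.zeros((16, 16, 16))
psf[:2, :2, :2] = 1.0 / 8.0  # 2x2x2 box blur
blurred = np.real(np.fft.ifftn(np.fft.fftn(vol) * np.fft.fftn(psf)))
recovered = fourier_inverse_filter(blurred, psf)
print(np.unravel_index(np.argmax(recovered), recovered.shape))
```

Because both the convolution and its inverse are applied as elementwise products in the Fourier domain, the cost scales with the FFT rather than with an explicit sum over all candidate light paths, which is the source of the efficiency gain the authors describe.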

According to the team, performing the two resampling steps with independent transformation matrices and executing the convolution operation in the Fourier domain allow the algorithm to do its work on “large-scale datasets in a computationally and memory-efficient way.” Comparing their method to existing back-projection-type reconstructions, the researchers demonstrated a marked improvement in memory and processing requirements. They also performed experiments outdoors under indirect sunlight and found that their confocal technique provided a significant increase in signal and range for retroreflective objects, like road signs.

Commercial application

The team notes that, as many commercial lidar systems already use a confocal scheme, those systems might be able to adopt the new algorithms with minimal hardware modifications. “We believe the computation algorithm is already ready for lidar systems,” said Matthew O’Toole, a postdoc in Stanford’s Computational Imaging Lab and a co-lead author of the article. “The key question is if the current hardware of lidar systems supports this type of imaging.” Further, the proof-of-concept focused on imaging hidden objects such as retroreflective road signs; less bright objects, such as pedestrians wearing non-reflective clothing, would likely be much harder for the system to pick up.

The team has suggested three ways to improve its system in the future to get closer to real-time frame rates. A more powerful laser would help reduce acquisition times—although, for eye safety, the laser may need to operate in the shortwave-infrared regime. For retroreflective objects, the system can perform multiple measurements in parallel with minimal crosstalk, so building an array of detectors and using a diffuse laser source (rather than scanning a single laser as in the current scheme) could enable single-shot imaging. Finally, implementing the algorithm on a graphics processing unit or field-programmable gate array could reduce the computation time even further.

Publish Date: 19 March 2018
