Lidar rangefinders, which are common tools in surveying and in autonomous-vehicle control, among other applications, gauge depth by emitting short bursts of laser light and measuring the time it takes reflected photons to return to a detector.
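The arithmetic behind that measurement is simple: the pulse covers the distance to the target twice, at the speed of light. Here is a minimal sketch of the time-of-flight calculation (the function name and example numbers are illustrative, not taken from the paper):

```python
# Illustrative time-of-flight depth calculation: a pulse travels to
# the target and back, so depth is half the round-trip distance
# covered at the speed of light.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def depth_from_round_trip(round_trip_seconds: float) -> float:
    """Return target depth in meters from a pulse's round-trip time."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A photon detected 20 nanoseconds after the pulse fired implies a
# target roughly 3 meters away.
print(depth_from_round_trip(20e-9))  # ~2.998
```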
In this week’s issue of the journal Science, researchers from MIT’s Research Laboratory of Electronics (RLE) describe a new lidar-like system that can gauge depth when only a single photon is detected from each location. Since a conventional lidar system would require about 100 times as many photons to make depth estimates of similar accuracy under comparable conditions, the new system could yield substantial savings in energy and time — which are at a premium in autonomous vehicles trying to avoid collisions.
The system can also use the same reflected photons to produce images of a quality that a conventional imaging system would require 900 times as much light to match, and it works much more reliably than lidar in bright sunlight, when ambient light can yield misleading readings. All the hardware it requires can already be found in commercial lidar systems; the new system just deploys that hardware in a manner more in tune with the physics of low-light-level imaging and natural scenes.
Count the photons
As Ahmed Kirmani, a graduate student in MIT’s Department of Electrical Engineering and Computer Science and lead author on the new paper, explains, the very idea of forming an image with only a single photon detected at each pixel location is counterintuitive. “The way a camera senses images is through different numbers of detected photons at different pixels,” Kirmani says. “Darker regions would have fewer photons, and therefore accumulate less charge in the detector, while brighter regions would reflect more light and lead to more detected photons and more charge accumulation.”
In a conventional lidar system, the laser fires pulses of light toward a sequence of discrete positions, which collectively form a grid; each location in the grid corresponds to a pixel in the final image. The technique, known as raster scanning, is how old cathode-ray-tube televisions produced images, illuminating one phosphor dot on the screen at a time.
The laser will generally fire a large number of times at each grid position, until the measured intervals between pulse emission and photon detection are consistent enough to rule out the misleading signals produced by stray photons. The MIT researchers’ system, by contrast, fires repeated bursts of light at each position in the grid only until it detects a single reflected photon; then it moves on to the next position.
A highly reflective surface — one that would show up as light rather than dark in a conventional image — should yield a detected photon after fewer bursts than a less-reflective surface would. So the MIT researchers’ system produces an initial, provisional map of the scene based simply on the number of times the laser has to fire to get a photon back.
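If each pulse independently comes back with a detectable photon at a probability proportional to the surface’s reflectivity, the number of pulses before the first detection is geometrically distributed, and its reciprocal gives a rough, noisy reflectivity estimate. Here is a hypothetical sketch of that acquisition loop under this simplified detection model (the names are illustrative; this is not the authors’ code):

```python
import random

def pulses_until_first_photon(reflectivity: float) -> int:
    """Fire pulses at one grid position until the first reflected
    photon is detected; each pulse independently yields a detection
    with probability equal to the surface's (scaled) reflectivity."""
    count = 1
    while random.random() > reflectivity:
        count += 1
    return count

def first_photon_scan(scene):
    """Raster-scan a grid of reflectivities, recording at each pixel
    only how many pulses were needed to get one photon back."""
    return [[pulses_until_first_photon(r) for r in row] for row in scene]

# Bright surfaces answer after few pulses, dark ones after many, so
# 1/count serves as a provisional (noisy) reflectivity map:
scene = [[0.9, 0.1], [0.5, 0.02]]
counts = first_photon_scan(scene)
print([[round(1.0 / c, 2) for c in row] for row in counts])
```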
Filtering out noise
The photon registered by the detector could, however, be a stray detection triggered by background light rather than by the laser. Fortunately, the false readings produced by such ambient light can be characterized statistically; they follow a pattern known in signal processing as “Poisson noise.”
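Concretely, if stray photons trigger detections at a known average rate, the number of background detections in any fixed acquisition window follows the Poisson distribution, whose probabilities are easy to compute. A small sketch, with the background rate chosen purely for illustration:

```python
import math

def poisson_pmf(k: int, rate: float) -> float:
    """Probability of exactly k background detections in a window,
    when stray photons arrive at a known average rate per window."""
    return rate**k * math.exp(-rate) / math.factorial(k)

# With an average of 0.5 stray detections per acquisition window,
# the chance of at least one stray detection in a given window:
print(1.0 - poisson_pmf(0, 0.5))  # ~0.39
```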
Simply filtering out noise according to the Poisson statistics would produce an image that would probably be intelligible to a human observer. But the MIT researchers’ system does something cleverer: It guides the filtering process by assuming that adjacent pixels will, more often than not, have similar reflective properties and will occur at approximately the same depth. That assumption enables the system to filter out noise in a more principled way.
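The paper formulates this as a regularized estimation problem; as a much cruder stand-in that expresses the same neighbor-similarity assumption, one could smooth the provisional map with a median filter, which replaces outlier pixels with values drawn from their neighborhood. An illustrative sketch only, not the authors’ algorithm:

```python
import statistics

def median_filter(depth_map, radius=1):
    """Replace each pixel of a 2-D map with the median of its
    neighborhood; isolated noisy readings get overruled by the
    (presumably similar) surrounding pixels."""
    h, w = len(depth_map), len(depth_map[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            window = [depth_map[ni][nj]
                      for ni in range(max(0, i - radius), min(h, i + radius + 1))
                      for nj in range(max(0, j - radius), min(w, j + radius + 1))]
            out[i][j] = statistics.median(window)
    return out
```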
Kirmani developed the computational imager together with his advisor, Vivek Goyal, a research scientist in RLE, and other members of Goyal’s Signal Transformation and Information Representation Group. Researchers in the Optical and Quantum Communications Group, which is led by Jeffrey Shapiro, the Julius A. Stratton Professor of Electrical Engineering, and senior research scientist Franco Wong, ran the experiments reported in the Science paper, which contrasted the new system’s performance with that of a conventional lidar system.
“They’ve used a very clever set of information-theoretic techniques to extract a lot of information out of just a few photons, which is really quite incredible, and they’ve been able to do it in the presence of a lot of background noise, which is also impressive,” says John Howell, a professor of physics at the University of Rochester. “Another thing that’s really fascinating is that they’re also getting intensity information out of a single photon, which almost doesn’t make sense.”
Howell believes that the technique could be broadly applicable. “There are many situations in which you are light-starved,” he says. “That could mean that you have a light source that’s weak, or it could be that you’re interrogating a biological sample, and too much light could damage it. Our eyes are a very good example of this, but other biological systems are the same. There could also be remote-sensing applications where you may want to look at something, but you don’t want to give away that you’re illuminating that area.”