Algorithm could enable visible-light-based imaging for medical devices, autonomous vehicles

MIT researchers have developed a technique for recovering visual information from light that has scattered because of interactions with the environment — such as passing through human tissue.

The technique could lead to medical-imaging systems that use visible light, which carries much more information than X-rays or ultrasound waves, or to computer vision systems that work in fog or drizzle. The lack of such vision systems has been a major obstacle to the development of self-driving cars.

In experiments, the researchers fired a laser beam through a “mask” — a thick sheet of plastic with slits cut through it in a certain configuration, such as the letter A — and then through a 1.5-centimeter “tissue phantom,” a slab of material designed to mimic the optical properties of human tissue for purposes of calibrating imaging systems. Light scattered by the tissue phantom was then collected by a high-speed camera, which could measure the light’s time of arrival.

From that information, the researchers’ algorithms were able to reconstruct an accurate image of the pattern cut into the mask.

“The reason our eyes are sensitive only in this narrow part of the spectrum is because this is where light and matter interact most,” says Guy Satat, a graduate student at the MIT Media Lab and first author on the new paper. “This is why X-ray is able to go inside the body, because there is very little interaction. That’s why it can’t distinguish between different types of tissue, or see bleeding, or see oxygenated or deoxygenated blood.”

The imaging technique’s potential applications in automotive sensing may be even more compelling than those in medical imaging, however. Many experimental algorithms for guiding autonomous vehicles are highly reliable under good illumination, but they fall apart completely in fog or drizzle; computer vision systems misinterpret the scattered light as having reflected off of objects that don’t exist. The new technique could address that problem.

Satat’s coauthors on the new paper, published today in Scientific Reports, are three other members of the Media Lab’s Camera Culture group: Ramesh Raskar, the group’s leader, Satat’s thesis advisor, and an associate professor of media arts and sciences; Barmak Heshmat, a research scientist; and Dan Raviv, a postdoc.

Expanding circles

Like many of the Camera Culture group’s projects, the new system relies on a pulsed laser that emits ultrashort bursts of light, and a high-speed camera that can distinguish the arrival times of different groups of photons, or light particles. When a light burst reaches a scattering medium, such as a tissue phantom, some photons pass through unmolested; some are only slightly deflected from a straight path; and some bounce around inside the medium for a comparatively long time. The first photons to arrive at the sensor have thus undergone the least scattering; the last to arrive have undergone the most.
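As a rough illustration of why arrival time encodes the degree of scattering, here is a toy Monte Carlo sketch (not from the paper; every parameter is invented for illustration). Each simulated photon undergoes a random number of scattering events, each adding extra path length, so later arrivals correspond to more scattering:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (illustrative values only): each photon scatters a random
# number of times; each scattering event adds extra path length.
n_photons = 100_000
slab_thickness_m = 0.015                                  # 1.5 cm phantom
c = 3e8                                                   # speed of light, m/s
n_events = rng.poisson(lam=5, size=n_photons)             # scattering events
extra_path_m = rng.exponential(0.002, size=n_photons) * n_events

arrival_time_ps = (slab_thickness_m + extra_path_m) / c * 1e12

# Earliest-arriving photons have scattered least; latest have scattered most.
order = np.argsort(arrival_time_ps)
k = n_photons // 100
print("mean scattering events, earliest 1%:", n_events[order[:k]].mean())
print("mean scattering events, latest 1%:  ", n_events[order[-k:]].mean())
```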

Where previous techniques have attempted to reconstruct images using only those first, unscattered photons, the MIT researchers’ technique uses the entire optical signal. Hence its name: all-photons imaging.

The data captured by the camera can be thought of as a movie — a two-dimensional image that changes over time. To get a sense of how all-photons imaging works, suppose that light arrives at the camera from only one point in the visual field. The first photons to reach the camera pass through the scattering medium unimpeded: They show up as just a single illuminated pixel in the first frame of the movie.

The next photons to arrive have undergone slightly more scattering, so in the second frame of the video, they show up as a small circle centered on the single pixel from the first frame. With each successive frame, the circle expands in diameter, until the final frame just shows a general, hazy light.

The problem, of course, is that in practice the camera is registering light from many points in the visual field, whose expanding circles overlap. The job of the researchers’ algorithm is to sort out which photons illuminating which pixels of the image originated where.
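One way to make the expanding-circles picture concrete is a toy forward model, sketched below under invented assumptions (it is not the paper’s actual scattering model): each scene point contributes a ring whose radius grows from frame to frame, and the camera records the superposition of all the rings.

```python
import numpy as np

def expanding_ring(shape, center, radius, width=1.0):
    """Soft ring of given radius centered on a scene point (toy kernel)."""
    yy, xx = np.indices(shape)
    r = np.hypot(yy - center[0], xx - center[1])
    return np.exp(-((r - radius) ** 2) / (2 * width ** 2))

# Toy scene: two illuminated points in a 64x64 field of view.
shape = (64, 64)
scene_points = [(20, 20), (40, 45)]

# The "movie": frame 0 holds the unscattered photons as single pixels;
# later frames hold overlapping circles of growing radius.
n_frames = 30
movie = np.zeros((n_frames,) + shape)
for t in range(n_frames):
    for p in scene_points:
        if t == 0:
            movie[0, p[0], p[1]] += 1.0        # ballistic photons
        else:
            movie[t] += expanding_ring(shape, p, radius=t)

print(movie.shape)  # (30, 64, 64): the time-resolved stack to be unmixed
```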

Cascading probabilities

The first step is to determine how the overall intensity of the image changes in time. This provides an estimate of how much scattering the light has undergone: If the intensity spikes quickly and tails off quickly, the light hasn’t been scattered much. If the intensity increases slowly and tails off slowly, it has.
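A crude stand-in for that first step, assuming the measurements arrive as a stack of frames, is to treat the total-intensity-versus-time curve as a distribution and use its temporal spread as the scattering estimate, something like the sketch below (a simplification, not the published estimator):

```python
import numpy as np

def scattering_level(movie):
    """Proxy for how much scattering occurred, from the time profile.

    movie: array of shape (n_frames, H, W) from the time-resolved camera.
    Returns the temporal standard deviation (in frames) of the total
    intensity curve: small = sharp spike (little scattering),
    large = slow rise and decay (heavy scattering).
    """
    intensity = movie.sum(axis=(1, 2))        # total intensity per frame
    intensity = intensity / intensity.sum()   # normalize to a distribution
    t = np.arange(len(intensity))
    mean_t = (t * intensity).sum()
    return np.sqrt(((t - mean_t) ** 2 * intensity).sum())
```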

On the basis of that estimate, the algorithm considers each pixel of each successive frame and calculates the probability that it corresponds to any given point in the visual field. Then it goes back to the first frame of video and, using the probabilistic model it has just constructed, predicts what the next frame of video will look like. With each successive frame, it compares its prediction to the actual camera measurement and adjusts its model accordingly. Finally, using the final version of the model, it deduces the pattern of light most likely to have produced the sequence of measurements the camera made.
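In spirit this is a prediction-correction loop over the frames. The sketch below is a heavily simplified stand-in, a Richardson-Lucy-style multiplicative update of my own devising rather than the published algorithm: it assumes frame t is roughly the hidden light pattern blurred by a kernel that widens with t, predicts each frame from the current estimate, and rescales the estimate by how far the prediction misses the measurement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def reconstruct(movie, n_iters=50, blur_per_frame=0.5, eps=1e-9):
    """Toy all-frames reconstruction via multiplicative updates.

    movie: (n_frames, H, W) time-resolved measurements.
    Assumes frame t is, roughly, the hidden pattern blurred by a Gaussian
    whose width grows with t, a stand-in for the true scattering model.
    """
    n_frames, h, w = movie.shape
    estimate = np.ones((h, w))                    # flat initial guess

    for _ in range(n_iters):
        correction = np.zeros_like(estimate)
        for t in range(n_frames):
            sigma = blur_per_frame * (t + 1)
            predicted = gaussian_filter(estimate, sigma) + eps
            ratio = movie[t] / predicted          # measured vs predicted frame
            # Back-project the mismatch through the same (symmetric) blur.
            correction += gaussian_filter(ratio, sigma)
        estimate *= correction / n_frames         # multiplicative update
    return estimate
```

The multiplicative form keeps the estimate nonnegative, which suits light intensities; beyond that, the real algorithm’s probabilistic model is surely more sophisticated than this sketch.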

One limitation of the current version of the system is that the light emitter and the camera are on opposite sides of the scattering medium. That limits its applicability for medical imaging, although Satat believes that it should be possible to use fluorescent particles known as fluorophores, which can be injected into the bloodstream and are already used in medical imaging, as a light source. And fog scatters light much less than human tissue does, so reflected light from laser pulses fired into the environment could be good enough for automotive sensing.

“People have been using what is known as time gating, the idea that photons not only have intensity but also time-of-arrival information and that if you gate for a particular time of arrival you get photons with certain specific path lengths and therefore [come] from a certain specific depth in the object,” says Ashok Veeraraghavan, an assistant professor of electrical and computer engineering at Rice University. “This paper is taking that concept one level further and saying that even the photons that arrive at slightly different times contribute some spatial information.”
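For contrast with the all-photons approach, conventional time gating amounts to keeping only the frames inside a narrow arrival-time window and discarding the rest, roughly as in this sketch (illustrative only):

```python
import numpy as np

def time_gated_image(movie: np.ndarray, gate_start: int = 0, gate_width: int = 2) -> np.ndarray:
    """Conventional time gating: keep only frames in a narrow arrival window."""
    return movie[gate_start : gate_start + gate_width].sum(axis=0)

# All-photons imaging, by contrast, feeds the full (n_frames, H, W) stack,
# including the heavily scattered late arrivals, into the reconstruction.
```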

“Looking through scattering media is a problem that’s of large consequence,” he adds. But he cautions that the new paper does not entirely solve it. “There’s maybe one barrier that’s been crossed, but there are maybe three more barriers that need to be crossed before this becomes practical,” he says.

