Foliage-penetrating ladar technology may improve border surveillance

The United States shares 5,525 miles of land border with Canada and 1,989 miles with Mexico. Monitoring these borders, which is the responsibility of U.S. Customs and Border Protection (CBP), is an enormous task. Detecting and responding to illegal activity while facilitating lawful commerce and travel is made more difficult by the expansive, rugged, diverse, and thickly vegetated terrain that spans both often-crossed borders. To help mitigate these surveillance challenges, a group of researchers at MIT Lincoln Laboratory is investigating whether an airborne ladar system capable of imaging objects under a canopy of foliage could aid border security by remotely detecting illegal activities. Their work will be presented at the 16th Annual IEEE Symposium on Technologies for Homeland Security, to be held April 25-26 in Waltham, Massachusetts.

Effective border protection requires timely, actionable information on areas of interest. Leveraging the Laboratory’s long experience in building imaging systems that exploit microchip lasers and Geiger-mode avalanche photodiodes, the research team developed and tested two concepts of operations (CONOPS) for using airborne ladar systems to detect human activity in wooded regions.

“For any new technology to be effectively used by CBP, an emerging sensor must bring with it a sensible deployment architecture and concept of operation,” said John Aldridge, a technical staff member from the Laboratory’s Homeland Protection Systems Group, who has been working with a multidisciplinary, cross-divisional team that includes Marius Albota, Brittany Baker, Daniel Dumanis, Rajan Gurjar, and Lily Lee. The CONOPS that the team focused on were cued examination of a localized area and uncued surveillance of a large area. To demonstrate the approach, the team conducted proof-of-concept experiments with the Laboratory’s Airborne Optical Systems Testbed (AOSTB), a Twin Otter aircraft outfitted with an onboard ladar sensor.

For cued surveillance, the use of an airborne ladar sensor platform (whether a piloted or unpiloted aircraft system) might be prompted by another persistent sensor that indicates the presence of activity in a localized area at or near the border. “The area of coverage for cued surveillance may be in the 1 km² to 10 km² range, and the laboratory has already developed and demonstrated sensor technology that can achieve this coverage in minutes,” Albota said.

Uncued wide-area surveillance sorties might be flown long distances and over timelines of days or weeks to establish typical activity patterns and to discover emerging paths and structures in high-interest regions. “The area coverage required under such a CONOPS may reach as high as 300 to 800 km of border, depending on the Border Patrol Sector and vegetation density,” Aldridge explained, adding, “Although the current AOSTB’s area coverage rate is limited by the aircraft’s airspeed, the sensor can image such a region in a matter of hours in a single sortie.”

As a start to their field tests assessing the CONOPS, the team flew data collection runs over several local sites identified as representative of the northern U.S. border environment. The sites contained a variety of low-growing brush, thin ground vegetation, very tall coniferous trees, and leafy deciduous trees. For the tests, the team positioned vehicles, tents, and other camp equipment in the woods to serve as the targets of interest. “We made 40 passes at an altitude of 7,500 feet to allow for a spatial resolution of about 25 centimeters,” Dumanis said. “In between each pass, we moved the concealed items so that we could perform post-process analysis for change and motion detection,” Baker added.

In this post-processing stage, the team members enhanced the data captured during the flights so that human analysts could then inspect the ladar imagery. They digitally removed ground-height data to reveal the three-dimensional ladar point cloud above ground and then digitally thresholded the height (erased 3-D points above a certain height) to eliminate the foliage cover. The resulting images gave analysts Gurjar and Lee a starting point for approximating the locations of both the planted objects and objects that were already on scene.
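
The ground-removal and height-thresholding steps can be pictured with a short sketch. The Python/NumPy snippet below is an illustration only: it assumes the ladar returns arrive as an N×3 array of x, y, z coordinates, and the grid spacing and height cutoff are placeholder values rather than the team’s actual processing parameters.

```python
import numpy as np

def remove_ground_and_canopy(points, cell_size=1.0, max_height=4.0):
    """Illustrative ground removal and height thresholding for a ladar point cloud.

    points     : (N, 3) array of x, y, z returns in meters (assumed layout)
    cell_size  : horizontal grid spacing used to estimate local ground height
    max_height : returns higher than this above local ground are discarded,
                 erasing most of the overhead foliage

    Parameter values are placeholders, not the team's settings.
    """
    xy = points[:, :2]
    z = points[:, 2]

    # Bin points into a horizontal grid and take the lowest return in each
    # cell as a crude local ground estimate.
    ij = np.floor((xy - xy.min(axis=0)) / cell_size).astype(int)
    ncols = ij[:, 1].max() + 1
    cell_id = ij[:, 0] * ncols + ij[:, 1]

    ground = np.full(cell_id.max() + 1, np.inf)
    np.minimum.at(ground, cell_id, z)

    # Height above local ground for every point.
    height = z - ground[cell_id]

    # Keep points between the ground and the height cutoff; this reveals
    # under-canopy structure (vehicles, tents) while removing the canopy.
    keep = (height > 0.2) & (height < max_height)
    out = points[keep].copy()
    out[:, 2] = height[keep]  # report height above ground, not altitude
    return out
```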

Searching through vast quantities of ladar data to spot areas for careful inspection is a labor-intensive task, even for experienced analysts who can recognize subtle cues that point to the possible presence of objects in the imagery. For the ladar data to be efficiently mined, an automated method of identifying areas of interest is needed. “One of the ways to alert analysts to potential targets is to track changes in the 3-D temporal data,” Lee explained. “Changes caused by vehicle movements or alterations in a customary scene can indicate uncharacteristic activity.”

To begin a change detection approach to the discovery of potential targets of interest, the research team registered the before and after ladar data and then subtracted the before data from the after dataset. This process allowed some improvement in the visual identification of vehicles that appeared where there had been none before; however, even a skilled human analyst would find it difficult to spot the small changes that signaled the presence of a vehicle.
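
As a rough illustration of this differencing step, the sketch below voxelizes two already-registered point clouds and reports voxels that are occupied in the “after” collection but not in the “before” one. The function name, voxel size, and the assumption that registration has already been performed are all illustrative; this is not the team’s actual pipeline.

```python
import numpy as np

def occupancy_difference(before, after, voxel=0.5, origin=None):
    """Difference two co-registered ladar point clouds on a voxel grid.

    before, after : (N, 3) arrays of x, y, z points, already registered into
                    the same coordinate frame (registration is assumed, not shown)
    voxel         : voxel edge length in meters (illustrative value)

    Returns the centers of voxels occupied in 'after' but not in 'before',
    i.e., candidate "new object" locations such as a newly parked vehicle.
    """
    if origin is None:
        origin = np.minimum(before.min(axis=0), after.min(axis=0))

    def voxel_set(pts):
        # Map each point to integer voxel indices and collect the unique cells.
        idx = np.floor((pts - origin) / voxel).astype(int)
        return set(map(tuple, idx))

    # Voxels occupied after the change but not before: candidate new objects.
    appeared = voxel_set(after) - voxel_set(before)
    if not appeared:
        return np.empty((0, 3))

    # Return voxel centers in scene coordinates for analyst inspection.
    return (np.array(sorted(appeared)) + 0.5) * voxel + origin
```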

A change detection approach, therefore, must compensate for the challenge posed by clutter in the ladar data. This clutter arises from the nature of ladar collection in a densely foliated environment. As light travels through gaps in the foliage, it bounces off the surfaces of leaves, the ground, or human-made objects, and the returned light is collected by the ladar sensor to form the 3-D point cloud. Because the motion of the flying platform causes each ladar scan to pass through a different configuration of gaps between leaves, different parts of the canopy and shrubbery are sensed on each scan. “Much of the clutter in our change detection output is from the different levels of canopy detected from different ladar scans,” explained Gurjar.

To make the ladar change detection data easier for analysts to search, the team looked to automated object detection, a well-established field in computer vision that has been applied to images and radar data. Because ladar data are three-dimensional and have unique noise characteristics, the team had to augment the established automated detection approach with a sum of absolute differences (SAD) technique that accounts for the height differences used to construct 3-D ladar imagery. Trials of the SAD technique applied to simulated vehicles in a foliated environment demonstrated that the approach yielded high detection rates and has potential as an automated method for reducing the huge amount of ladar data analysts would have to scrutinize to discover objects of interest.
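
A simple way to picture a SAD-based search is to slide a height-map template over a gridded ladar height image and flag windows whose summed absolute height difference falls below a threshold. The sketch below does exactly that; the function name, the use of a 2-D height image, and the threshold are assumptions for illustration and do not reproduce the team’s formulation.

```python
import numpy as np

def sad_detect(height_map, template, threshold):
    """Score sliding windows of a gridded height image against a target template
    using the sum of absolute differences (SAD).

    height_map : 2-D array of heights above ground (e.g., gridded change-detection output)
    template   : 2-D array of expected heights for a target such as a vehicle
    threshold  : SAD score below which a window is declared a detection
                 (all inputs here are illustrative assumptions)

    Returns a list of (row, col, score) for windows whose SAD falls below the threshold.
    """
    th, tw = template.shape
    H, W = height_map.shape
    hits = []
    for r in range(H - th + 1):
        for c in range(W - tw + 1):
            window = height_map[r:r + th, c:c + tw]
            score = np.abs(window - template).sum()
            if score < threshold:
                hits.append((r, c, score))
    return hits
```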

“Looking forward, we hope to improve the capabilities of automated 3-D change detection to be more robust to natural temporal changes in foliage, expand the number of automatically detected object classes, and extend automated detection capability to full 3-D point clouds,” said Lee, with Aldridge adding that they are also interested in exploring alternative aircraft for hosting the ladar system.

In its strategic plan “Vision and Strategy 2020,” the CBP has expressed the need to apply advanced technology solutions for border management. Continued development of Lincoln Laboratory’s automated approach to using a low-cost ladar system for surveillance of foliated regions may in the future offer another tool that the Department of Homeland Security’s CBP can deploy to monitor the growing volume of land border activity.

