
 Figure 4. Left: schematic showing the source-sensor configuration for passive ranging by wavefront curvature. Right: cross-correlation functions for the outputs of sensors 1,2 and sensors 2,3. The time lags corresponding to the peaks in the two cross-correlograms provide estimates of the time delays τ1,2 and τ2,3. These time delays are substituted into the passive ranging equation to calculate the range (R). c, Speed of sound traveling in the underwater medium; d, intersensor separation distance. From Ferguson and Cleary (2001).
Measuring the differential arrival times of the signal wavefront at adjacent pairs of sensors enables estimation of the source range from the middle array and the source bearing with respect to the longitudinal axis of the wide-aperture array. Measuring a time delay involves cross-correlating the receiver outputs. The time delay corresponds to the time lag at which the cross-correlation function attains its maximum value.
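The ranging step can be made concrete with a short sketch. The snippet below picks the time delays off the cross-correlation peaks and then applies the standard three-sensor wavefront-curvature approximation for range and bearing. The array geometry (three equally spaced collinear sensors with spacing d, bearing measured from broadside), the variable names, and the sampling details are illustrative assumptions; the exact form of the ranging equation used by Ferguson and Cleary (2001) may differ in detail.

```python
import numpy as np

def time_delay(a, b, fs):
    """Lag (s) at which the cross-correlation of a and b peaks; positive when a arrives later."""
    xc = np.correlate(a, b, mode="full")
    lag = np.argmax(xc) - (len(b) - 1)   # lag in samples
    return lag / fs

def range_and_bearing(tau12, tau23, d, c=1500.0):
    """Three-sensor wavefront-curvature approximation.
    tau12, tau23: time delays (s) for sensor pairs 1,2 and 2,3;
    d: intersensor spacing (m); c: assumed speed of sound in water (m/s)."""
    sin_theta = c * (tau12 + tau23) / (2.0 * d)          # bearing term from the delay sum
    theta = np.arcsin(np.clip(sin_theta, -1.0, 1.0))
    # The delay difference measures the wavefront curvature; it shrinks toward zero
    # as the source recedes, so a near-zero difference means the range is unreliable.
    R = d**2 * np.cos(theta)**2 / (c * (tau12 - tau23))
    return R, theta
```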
Figure 4, right, shows the cross-correlation functions for sensor pairs 1,2 and 2,3. In practice, arrays of sensors (rather than the single sensors shown here) are used to provide array gain. It is the beamformed outputs that are cross-correlated to improve the estimates of the time delays. Each array samples the underwater sound field for 20 s, then the complex weights are calculated so that an adaptive beamformer maximizes the array gain, suppresses side lobes, and automatically steers nulls in the directions of interference. The process is repeated every 20 s because the relative contributions of the signal, noise, and interference components to the underwater sound field can vary over a period of minutes. In summary, beamforming and prefiltering suppress extraneous peaks, thereby highlighting the peak associated with a contact (Ferguson, 1993b).
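The article does not say which adaptive beamformer was used. As one common choice, the sketch below forms minimum-variance distortionless-response (Capon) weights from a sample covariance matrix estimated over a 20-s block, which preserves the look direction while automatically placing nulls on interferers. The uniform-line-array steering vector, the diagonal loading, and all names are illustrative assumptions rather than details from the original system.

```python
import numpy as np

def steering_vector(num_sensors, d, wavelength, theta):
    """Narrowband plane-wave steering vector for a uniform line array; theta from broadside."""
    n = np.arange(num_sensors)
    return np.exp(-2j * np.pi * n * d * np.sin(theta) / wavelength)

def mvdr_weights(snapshots, a):
    """MVDR/Capon weights from one block of complex snapshots (num_sensors x num_snapshots)."""
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]            # sample covariance over ~20 s
    R += 1e-3 * (np.trace(R).real / R.shape[0]) * np.eye(R.shape[0])   # diagonal loading for robustness
    Rinv_a = np.linalg.solve(R, a)
    return Rinv_a / (a.conj() @ Rinv_a)   # unit response in the look direction, nulls on interference

# Recompute the weights for each 20-s block, then cross-correlate the beamformed
# outputs (w.conj() @ snapshots) of the two arrays to estimate the time delays.
```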
Battlefield Acoustics
Extracting Tactical Information with a Microphone
In 1995, sonar system research support to the Royal Australian Navy Oberon Submarine Squadron was concluded, which ushered in a new research and development program for the Australian Army on the use of acoustics on the battlefield. Battlefield acoustics had gone out of fashion and become dormant after the invention of radar in the 1930s. The Australian Army's idea was that the signal-processing techniques developed for the hydrophones of a submarine could be applied to microphones deployed on the battlefield. The goal was “low-cost intelligent acoustic-sensing nodes operating on shoestring power budgets for years at a time in potentially hostile environments without hope of human intervention” (Hill et al., 2004).
The sensing of sound on the battlefield makes sense because
• acoustic sensors are passive;
• sound propagation is not limited by line of sight;
• the false-alarm rate is negligible due to smart signal processing;
• sensors can operate unattended, with only minimal tactical information (what, when, where) being communicated to a central monitoring facility;
• sensors are lightweight, low cost, compact, and robust;
• acoustic systems can cue other systems such as cameras, radars, and weapons;
• acoustic signatures of air and ground vehicles enable rapid classification of the type of air or ground vehicle as well as of weapon fire; and
• military activities are inherently noisy.
The submarine approach to wide-area surveillance requires arrays with large numbers of closely spaced sensors (submarines bristle with sensors) and the central processing of the acoustic sound field information on board the submarine. In contrast, wide-area surveillance of the battlefield is achieved by dispersing acoustic sensor nodes throughout the surveillance area, then networking them and using decentralized data fusion to compile the situational awareness picture. On the battlefield, only a minimal number of sensors (often, just one) is required for an acoustic-sensing node to extract the tactical information (position, speed, range at the closest point of approach to the sensor) of a contact and to classify it. For example, in 2014, the Acoustical Society of America (ASA) Technical Committee on Signal Processing in Acoustics posed an international student challenge problem (available at bit.ly/1rjN3AG) where the students were given a 30-s sound file of a truck traveling past a microphone (Ferguson and Culver, 2014). By processing the sound file, the students were required to extract the tactical information so that they could respond to the following questions (one possible processing approach is sketched after the list).
• What is the speedometer reading?
• What is the tachometer reading?
• How many cylinders does the engine have?
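A minimal sketch of one way to answer these questions is shown below, assuming a four-stroke engine and using the standard Doppler relations for a moving source and a stationary microphone. The approach and recede frequencies of an engine harmonic, and the cylinder-firing frequency, would have to be read off a spectrogram of the actual recording; the numbers used here are placeholders, not answers to the challenge problem.

```python
C_AIR = 343.0  # nominal speed of sound in air (m/s); an assumed value

def speed_from_doppler(f_approach, f_recede, c=C_AIR):
    """Vehicle speed (m/s) from the up- and down-shifted frequencies of one engine
    harmonic observed well before and well after the closest point of approach."""
    return c * (f_approach - f_recede) / (f_approach + f_recede)

def rpm_from_firing_rate(f_firing, num_cylinders):
    """Crankshaft speed (rev/min) from the cylinder-firing frequency, assuming a
    four-stroke engine in which each cylinder fires once every two revolutions."""
    return 120.0 * f_firing / num_cylinders

# Illustrative placeholder numbers only:
v = speed_from_doppler(f_approach=105.0, f_recede=95.0)       # ~17 m/s, i.e., ~62 km/h on the speedometer
rpm = rpm_from_firing_rate(f_firing=100.0, num_cylinders=6)   # 2,000 rev/min on the tachometer
```

The cylinder count itself would come from the spectrogram as well, for example from the ratio of the firing frequency to the crankshaft rotation frequency visible in the harmonic spacing.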