### Abstract

Feedforward networks having a one-to-one correspondence between input and output units are readily trained using backpropagation to perform auto-associative mappings. A novelty filter is obtained by subtracting the network output from the input vector. The presentation of a 'familiar' pattern then tends to evoke a null response, while any anomalous component is enhanced. This principle motivates the design of an Adaptive Novelty Filter (ANF) that enhances the detectability of weak signals added to a statistically stationary or slowly varying noise background and serves as a pre-processor to any device that performs signal detection, estimation, or classification. The ability of the ANF to enhance the detectability of weak signals in a wideband ocean acoustic background was measured by comparing the signal-to-noise ratios at the outputs of two matched-filter detectors, one of which received the time series directly while the other received the output of the ANF. The resulting Detectability Enhancement Ratio (DER) was found to increase with the number of hidden units over the first several thousand iterations of the learning algorithm. Subsequent devolution of the network pushes the noise power lower, but the DER likewise drops off. We explore the causes of this phenomenon by studying the internal behavior of the auto-associative network as it learns to reconstruct the input vectors as linear combinations of intrinsic basis vectors, each of which is defined by the weights of the connections fanning out from a single hidden unit to the output layer.
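The novelty-filter principle summarized above (train an auto-associator on the background alone, then subtract its output from the input) can be sketched with a small linear auto-associative network. Everything below is an illustrative assumption, not the paper's experimental setup: the 16-dimensional rank-4 synthetic "background", the number of hidden units, and the plain gradient-descent training loop standing in for backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical background: 16-dimensional vectors confined to a rank-4
# subspace, standing in for a stationary noise process.
n, d, k = 2000, 16, 4
basis = rng.normal(size=(k, d))
X = rng.normal(size=(n, k)) @ basis          # 'familiar' training patterns

# Linear auto-associator with h hidden units, trained by gradient descent
# on the mean squared reconstruction error (a linear stand-in for backprop).
h, lr = 4, 2e-3
W1 = rng.normal(scale=0.1, size=(d, h))      # input -> hidden weights
W2 = rng.normal(scale=0.1, size=(h, d))      # hidden -> output weights
for _ in range(5000):
    Y = X @ W1 @ W2                          # network output
    E = Y - X                                # reconstruction error
    W2 -= lr * (X @ W1).T @ E / n
    W1 -= lr * X.T @ (E @ W2.T) / n

def novelty(x):
    """Novelty filter: input minus the network's reconstruction."""
    return x - x @ W1 @ W2

# A familiar pattern is largely cancelled; an anomalous added component
# (mostly outside the learned subspace) passes through enhanced.
familiar = rng.normal(size=k) @ basis
signal = rng.normal(size=d)
res_fam = np.linalg.norm(novelty(familiar))
res_mix = np.linalg.norm(novelty(familiar + signal))
print(res_fam, res_mix)
```

Because the hidden layer has fewer units than the input, the network can only reconstruct what lies in the subspace spanned by the hidden-to-output weight vectors; the residual exposes whatever falls outside it, which is the behavior the abstract's intrinsic-basis-vector analysis examines.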

| Original language | English |
|---|---|
| Pages (from-to) | 219-236 |
| Number of pages | 18 |
| Journal | *Neurocomputing* |
| Volume | 6 |
| Issue number | 2 |
| DOIs | https://doi.org/10.1016/0925-2312(94)90056-6 |
| Publication status | Published - 1994 Apr 1 |
| Externally published | Yes |

### Keywords

- adaptive novelty filter
- auto-associative memory
- backpropagation
- detection
- neural network

### ASJC Scopus subject areas

- Artificial Intelligence
- Cellular and Molecular Neuroscience

### Cite this

Ko, Hanseok, & Baran, R. H. (1994). Signal detectability enhancement with auto-associative backpropagation networks. *Neurocomputing*, *6*(2), 219-236. https://doi.org/10.1016/0925-2312(94)90056-6

Research output: Contribution to journal › Article

TY - JOUR

T1 - Signal detectability enhancement with auto-associative backpropagation networks

AU - Ko, Hanseok

AU - Baran, R. H.

PY - 1994/4/1

Y1 - 1994/4/1

N2 - Feedforward networks having a one-to-one correspondence between input and output units are readily trained using backpropagation to perform auto-associative mappings. A novelty filter is obtained by subtracting the network output from the input vector. Then the presentation of a 'familiar' pattern tends to evoke a null response but any anomalous component is enhanced. This principle motivates the design of an Adaptive Novelty Filter (ANF) to enhance the detectability of weak signals added to a statistically stationary or slowly-varying noise background and to serve as a pre-processor to any device which performs signal detection, estimation, or classification. The ability of the ANF to enhance the detectability of weak signals in wideband ocean acoustic background was measured by comparing the signal-to-noise ratios out of two matched filter detectors one of which received the time series directly while the other received the output of the ANF. The resulting Detectability Enhancement Ratio (DER) was found to increase with the number of hidden units for the first several thousand iterations of the learning algorithm. Subsequent devolution of the network pushes the noise power lower but the DER likewise drops off. We explore the causes of this phenomenon by studying the internal behavior of the auto-associative network as it learns to reconstruct the input vectors as linear combinations of intrinsic basis vectors each of which is defined by the weights of connections fanning out from a single hidden unit to the output layer.

AB - Feedforward networks having a one-to-one correspondence between input and output units are readily trained using backpropagation to perform auto-associative mappings. A novelty filter is obtained by subtracting the network output from the input vector. Then the presentation of a 'familiar' pattern tends to evoke a null response but any anomalous component is enhanced. This principle motivates the design of an Adaptive Novelty Filter (ANF) to enhance the detectability of weak signals added to a statistically stationary or slowly-varying noise background and to serve as a pre-processor to any device which performs signal detection, estimation, or classification. The ability of the ANF to enhance the detectability of weak signals in wideband ocean acoustic background was measured by comparing the signal-to-noise ratios out of two matched filter detectors one of which received the time series directly while the other received the output of the ANF. The resulting Detectability Enhancement Ratio (DER) was found to increase with the number of hidden units for the first several thousand iterations of the learning algorithm. Subsequent devolution of the network pushes the noise power lower but the DER likewise drops off. We explore the causes of this phenomenon by studying the internal behavior of the auto-associative network as it learns to reconstruct the input vectors as linear combinations of intrinsic basis vectors each of which is defined by the weights of connections fanning out from a single hidden unit to the output layer.

KW - adaptive novelty filter

KW - auto-associative memory

KW - backpropagation

KW - detection

KW - neural network

UR - http://www.scopus.com/inward/record.url?scp=0028416795&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=0028416795&partnerID=8YFLogxK

U2 - 10.1016/0925-2312(94)90056-6

DO - 10.1016/0925-2312(94)90056-6

M3 - Article

VL - 6

SP - 219

EP - 236

JO - Neurocomputing

JF - Neurocomputing

SN - 0925-2312

IS - 2

ER -