Signal detectability enhancement with auto-associative backpropagation networks

Hanseok Ko, R. H. Baran

Research output: Contribution to journal › Article

4 Citations (Scopus)

Abstract

Feedforward networks having a one-to-one correspondence between input and output units are readily trained using backpropagation to perform auto-associative mappings. A novelty filter is obtained by subtracting the network output from the input vector. The presentation of a 'familiar' pattern then tends to evoke a null response, but any anomalous component is enhanced. This principle motivates the design of an Adaptive Novelty Filter (ANF) to enhance the detectability of weak signals added to a statistically stationary or slowly-varying noise background and to serve as a pre-processor to any device that performs signal detection, estimation, or classification. The ability of the ANF to enhance the detectability of weak signals in a wideband ocean acoustic background was measured by comparing the signal-to-noise ratios out of two matched filter detectors, one of which received the time series directly while the other received the output of the ANF. The resulting Detectability Enhancement Ratio (DER) was found to increase with the number of hidden units for the first several thousand iterations of the learning algorithm. Subsequent devolution of the network pushes the noise power lower, but the DER likewise drops off. We explore the causes of this phenomenon by studying the internal behavior of the auto-associative network as it learns to reconstruct the input vectors as linear combinations of intrinsic basis vectors, each of which is defined by the weights of connections fanning out from a single hidden unit to the output layer.
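
To make the ANF idea concrete, the following is a minimal Python sketch of an auto-associative network used as a novelty filter, followed by the matched-filter comparison that defines the DER. It assumes a small linear network, a synthetic structured-noise background standing in for the ocean acoustic data, a simple deflection-style output SNR, and an arbitrary signal template; the layer sizes, learning rate, and the make_background helper are illustrative choices, not the authors' configuration.

import numpy as np

rng = np.random.default_rng(0)

N = 32       # frame length (input/output dimension); illustrative
H = 8        # number of hidden units; illustrative
K = 4        # number of fixed background 'modes'; illustrative
eta = 0.05   # learning rate; illustrative

# Fixed structure of the 'familiar' background (a stand-in for the
# statistically stationary ocean acoustic noise of the paper).
modes = rng.normal(size=(K, N)) / np.sqrt(N)

def make_background():
    # Random combination of the fixed modes plus a small white component.
    return modes.T @ rng.normal(size=K) + 0.1 * rng.normal(size=N)

# Auto-associative network: one hidden layer, linear units, no biases.
W1 = rng.normal(scale=0.1, size=(H, N))   # input  -> hidden
W2 = rng.normal(scale=0.1, size=(N, H))   # hidden -> output; its columns
                                          # play the role of the intrinsic
                                          # basis vectors fanning out from
                                          # each hidden unit

def reconstruct(x):
    h = W1 @ x
    return h, W2 @ h

def train_step(x):
    # One backpropagation step on the auto-associative target y ~ x.
    global W1, W2
    h, y = reconstruct(x)
    err = y - x                          # reconstruction error
    grad_W2 = np.outer(err, h)           # d(0.5*||err||^2)/dW2
    grad_W1 = np.outer(W2.T @ err, x)    # d(0.5*||err||^2)/dW1
    W2 -= eta * grad_W2
    W1 -= eta * grad_W1

# Train on background-only frames so that 'familiar' patterns evoke a
# near-null novelty response.
for _ in range(5000):
    train_step(make_background())

def novelty_filter(x):
    # ANF output: input minus the network's reconstruction.
    _, y = reconstruct(x)
    return x - y

# Matched-filter comparison: one detector sees the raw frames, the other
# sees the ANF output. The signal template is arbitrary and illustrative.
template = np.sin(2 * np.pi * 4 * np.arange(N) / N)
template /= np.linalg.norm(template)

def output_snr(signal_frames, noise_frames):
    # Deflection-style SNR of the replica-correlator statistic.
    s = np.array([template @ f for f in signal_frames])
    n = np.array([template @ f for f in noise_frames])
    return (s.mean() - n.mean()) ** 2 / n.var()

amp = 0.3    # weak-signal amplitude; illustrative
noise_frames  = [make_background() for _ in range(2000)]
signal_frames = [amp * template + make_background() for _ in range(2000)]

snr_direct = output_snr(signal_frames, noise_frames)
snr_anf = output_snr([novelty_filter(f) for f in signal_frames],
                     [novelty_filter(f) for f in noise_frames])

# Detectability Enhancement Ratio: ratio of the two matched-filter SNRs.
print("DER =", snr_anf / snr_direct)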

Original language: English
Pages (from-to): 219-236
Number of pages: 18
Journal: Neurocomputing
Volume: 6
Issue number: 2
DOIs: 10.1016/0925-2312(94)90056-6
Publication status: Published - 1994 Apr 1
Externally published: Yes

Keywords

  • adaptive novelty filter
  • auto-associative memory
  • backpropagation
  • detection
  • neural network

ASJC Scopus subject areas

  • Artificial Intelligence
  • Cellular and Molecular Neuroscience

Cite this

Ko, H., & Baran, R. H. (1994). Signal detectability enhancement with auto-associative backpropagation networks. Neurocomputing, 6(2), 219-236. https://doi.org/10.1016/0925-2312(94)90056-6
