Three-layer networks with equal numbers of input and output units were trained by standard backpropagation to reproduce vectors of independent, identically distributed random variables. Data compression was achieved by giving the hidden layer fewer units than the input or output layers. The trained nets were transparent to training inputs and translucent to training inputs with anomalies added to individual elements: the reconstructed output vector closely resembled the unperturbed input. Subtracting the output vector from the input therefore yields a novelty filter, since the anomalies are dramatically enhanced in the difference vector.
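The setup can be sketched in a few lines of numpy: a bottleneck autoencoder trained by batch backpropagation, followed by the input-minus-output difference vector used as a novelty filter. The layer sizes, learning rate, epoch count, and training-set size below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the abstract): 8 input/output units,
# 3 hidden units -- the bottleneck that forces compression.
n_in, n_hid, n_samples = 8, 3, 20

# Training set: vectors of i.i.d. uniform random variables.
X = rng.uniform(0.0, 1.0, (n_samples, n_in))

W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_in)); b2 = np.zeros(n_in)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    H = sigmoid(X @ W1 + b1)   # hidden (bottleneck) activations
    Y = H @ W2 + b2            # linear output layer
    return H, Y

def mse(Y, X):
    return float(np.mean((Y - X) ** 2))

initial_mse = mse(forward(X)[1], X)

# Standard batch backpropagation on the squared reconstruction error.
lr = 0.5
for _ in range(5000):
    H, Y = forward(X)
    E = (Y - X) / n_samples             # dLoss/dY for mean squared error
    dH = (E @ W2.T) * H * (1.0 - H)     # backprop through the sigmoid
    W2 -= lr * (H.T @ E); b2 -= lr * E.sum(axis=0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)

final_mse = mse(forward(X)[1], X)

# Novelty filter: perturb one element of a training vector and subtract
# the network's output from the input.
x = X[0].copy()
x[2] += 0.5                             # the injected anomaly
diff = x - forward(x[None, :])[1][0]    # the difference vector
print(f"training MSE {initial_mse:.4f} -> {final_mse:.4f}")
print("difference vector:", np.round(diff, 3))
```

Because the trained net approximately reproduces training inputs, the difference vector is small on clean elements, so the injected anomaly tends to dominate it.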