This paper presents a biologically plausible mechanism for back-propagating network output error to earlier processing layers of a particular multi-layer neural network. The mechanism is demonstrated in a network designed to mimic familiarity discrimination as performed by the perirhinal cortex of the temporal lobe. In the algorithm, the network's error during an initial classification period regulates, via an inhibitory circuit, the frequency of neuronal activity in a subsequent memorising period, such that this frequency is proportional to the error. Synaptic weights are then modified according to activity-dependent Hebbian rules, such as may be used in the brain, with the magnitude of each modification depending on the frequency of the activity. Hence, the magnitude of weight modification is proportional to the network error.
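The scheme described above can be sketched in a few lines of code. The following is a minimal illustrative sketch, not the paper's actual model: the network architecture, the familiarity signal, the learning rate, and all function names are assumptions introduced here only to make the error-to-frequency-to-weight-change chain concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-layer setup; sizes and constants are illustrative only.
n_inputs, n_hidden = 8, 4
W = rng.normal(scale=0.1, size=(n_hidden, n_inputs))

def classify(x, W):
    """Classification period: compute hidden-layer activity and a scalar
    output (here simply the summed activity, as a stand-in for the
    network's familiarity signal)."""
    h = np.tanh(W @ x)
    return h, np.sum(h)

def memorise(x, W, error, eta=0.05):
    """Memorising period: the inhibitory circuit is modelled abstractly by
    scaling the firing frequency in proportion to the classification
    error; the Hebbian weight change (pre x post activity) is therefore
    likewise proportional to the error."""
    h, _ = classify(x, W)
    frequency = error * h                     # activity scaled by the error
    return W + eta * np.outer(frequency, x)   # Hebbian update

# One classify-then-memorise cycle on a random input pattern.
x = rng.normal(size=n_inputs)
target = 1.0                                  # assumed desired output
_, output = classify(x, W)
error = target - output
W_new = memorise(x, W, error)
```

Because the weight change is linear in the error signal, doubling the error doubles the size of the update, which is the proportionality the paragraph above describes.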