On the basis of detailed analysis of reaction times and neurophysiological data from choice tasks, it has been proposed that the brain implements an optimal statistical test during simple perceptual decisions. Recent work has shown how this optimal test can be implemented in biologically plausible models of decision networks, but that analysis was restricted to highly simplified localist models, in which abstract units describe the activity of whole cell assemblies rather than individual neurons. This paper derives the optimal parameters for a decision-network model composed of individual neurons, in which the alternatives are represented by distributed patterns of neuronal activity. It is also shown how the optimal weights in the decision network can be learnt via iterative rules that use only information accessible to individual synapses. Simulations demonstrate that the network with the optimal synaptic weights achieves better performance, and reproduces fundamental behavioural regularities observed in choice tasks (Hick's law and the relationship between error rate and decision time) more closely, than a network with synaptic weights set according to a standard Hebb rule.
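In the sequential-sampling literature, the "optimal statistical test" for simple perceptual decisions is typically the sequential probability ratio test (SPRT). As an illustration only (the Gaussian hypotheses, function name, and threshold below are assumptions for this sketch, not details taken from the paper), a two-alternative SPRT can be written as:

```python
def sprt(samples, mu0=0.0, mu1=1.0, sigma=1.0, threshold=3.0):
    """Two-alternative sequential probability ratio test (illustrative sketch).

    Accumulates the log-likelihood ratio log[p(x|H1)/p(x|H0)] for Gaussian
    observations and stops as soon as it crosses +threshold (choose H1)
    or -threshold (choose H0). Returns (choice, samples_used).
    """
    llr = 0.0
    for n, x in enumerate(samples, start=1):
        # Log-likelihood ratio contribution of one N(mu, sigma^2) observation
        llr += (x * (mu1 - mu0) - 0.5 * (mu1**2 - mu0**2)) / sigma**2
        if llr >= threshold:
            return "H1", n
        if llr <= -threshold:
            return "H0", n
    return None, len(samples)  # undecided within the evidence budget

# A fixed, illustrative evidence stream favouring H1 (values near mean 1)
evidence = [1.5, 0.5, 1.0, 2.0, 1.0, 0.7, 1.3]
choice, steps = sprt(evidence)  # → ("H1", 4)
```

Raising the threshold lowers the error rate but lengthens decisions; this trade-off is the relationship between error rate and decision time that the simulations examine.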