For computer graphics to accurately represent real-world environments, physically based illumination models are essential. However, typical global illumination solutions may take many seconds, or even minutes, to render a single frame, which precludes their use in any interactive system. In this paper we present Rendering on Demand, a selective physically-based parallel rendering system that enables high-fidelity computer graphics imagery to be rendered at close to interactive rates. By exploiting knowledge of the human visual system, we substantially reduce computation by rendering in high quality only those areas of the scene that are perceptually important. The rest of the scene is rendered at a significantly lower quality without the viewer being aware of the difference. This is validated through psychophysical experimentation.
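The core idea of selective rendering, allocating effort according to perceptual importance, can be illustrated with a minimal sketch. The function and parameter names below (`sample_budget`, `min_spp`, `max_spp`) are illustrative assumptions, not the paper's implementation: a saliency-style importance map in [0, 1] scales the per-pixel sample count, so perceptually important regions receive many more samples than the rest of the scene.

```python
import numpy as np

# Hypothetical sketch (not the paper's actual system): per-pixel sample
# budgets are scaled by a perceptual importance map, so salient regions
# receive far more samples than peripheral ones.

def sample_budget(importance, min_spp=1, max_spp=64):
    """Map an importance map in [0, 1] to integer samples per pixel."""
    importance = np.clip(importance, 0.0, 1.0)
    return (min_spp + importance * (max_spp - min_spp)).round().astype(int)

# Example: a 4x4 importance map with one perceptually important region.
imp = np.zeros((4, 4))
imp[1:3, 1:3] = 1.0  # assumed salient centre region
spp = sample_budget(imp)
print(spp.sum(), "total samples vs", 64 * imp.size, "for uniform high quality")
# -> 268 total samples vs 1024 for uniform high quality
```

With only a quarter of the pixels marked important, the total sample count drops to roughly a quarter of the uniform high-quality budget, which is the kind of saving that makes near-interactive rates plausible.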