The developers and users of real-time graphics applications, such as games and virtual reality, demand ever more realistic computer-generated images. Despite the availability of modern graphics hardware, such real-time high-fidelity graphics are still not feasible on a single PC. Research on visual perception has shown that the perceived quality of rendered graphics depends not only on the fidelity of the generated imagery but also on the characteristics of visual attention and the limitations of the human visual system. The findings of this research have been used to define perceptually driven rendering criteria with the aim of reducing rendering times. Furthermore, in reality there are strong crossmodal interactions between auditory and visual stimuli: a number of studies have shown that stimuli reaching the various senses are not, in general, processed independently. In this paper we investigate whether auditory stimuli, and more specifically sound effects with abrupt onsets, affect a viewer's perceived quality of rendered images while watching computer-generated animations. We show how the rendering of animations can potentially be accelerated by directing the viewer's attention towards the source of a sound and selectively rendering only the sound-emitting object at high quality. For this purpose, a renderer was implemented which selectively renders the sound-emitting objects and the surrounding pixels at high quality while the rest of the scene is rendered at a significantly lower quality. A psychophysical experiment with 120 participants revealed a significant effect of sound effects on the perceived rendering quality. Our results show that audio stimuli, and in particular sound effects, can be exploited when rendering animations to significantly reduce rendering time without any loss in the user's perception of the delivered quality.
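The selective-rendering idea described above can be illustrated with a minimal sketch. This is not the paper's implementation; it simply assumes a per-pixel sample budget that is high within a hypothetical attention radius around the projected screen position of the sound-emitting object, and low everywhere else (all names and parameters here are illustrative assumptions):

```python
import numpy as np

def sample_budget(width, height, source_xy, radius,
                  hi_samples=16, lo_samples=1):
    """Return a (height, width) array of per-pixel sample counts.

    Pixels within `radius` of the projected sound source `source_xy`
    (hypothetical screen coordinates) get the high-quality budget;
    the rest of the frame gets the low-quality budget.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    dist = np.hypot(xs - source_xy[0], ys - source_xy[1])
    return np.where(dist <= radius, hi_samples, lo_samples)

# Example: a 640x480 frame with the sound source at the centre.
budget = sample_budget(640, 480, source_xy=(320, 240), radius=100)
```

A renderer could then loop over the frame and trace `budget[y, x]` samples per pixel, concentrating the rendering effort on the region where the sound is expected to draw the viewer's attention.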