The perceived quality of realistic computer graphics imagery depends on the physical accuracy of the rendered frames as well as on the capabilities of the human visual system. Fully detailed, high-fidelity frames may still take many minutes, even hours, to render on today's computers. The human eye is physically incapable of capturing a whole scene in full detail: humans sense image detail only in a 2-degree foveal region, relying on rapid eye movements, or saccades, to jump between points of interest. The human brain then reassembles these glimpses into a coherent, but inevitably imperfect, visual perception of the environment. In the process, humans quite literally lose sight of unimportant details. This thesis demonstrates how properties of the human visual system, in particular Change Blindness and Inattentional Blindness, can be exploited to accelerate the rendering of animated sequences by applying a priori knowledge of a viewer's task focus. Through several controlled psychophysical experiments, this thesis shows that human subjects consistently fail to notice degradations in the quality of image details unrelated to their assigned task, even when those details fall under the viewer's gaze. Building on these observations, the thesis presents a perceptual rendering framework that combines predetermined task maps with spatiotemporal contrast sensitivity to guide a progressive animation system taking full advantage of image-based rendering techniques. The framework is demonstrated with a Radiance ray-tracing implementation that completes its work in a fraction of the normally required time, with few noticeable artefacts for viewers performing the task.
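
The idea of combining a task map with a contrast-sensitivity estimate to steer rendering effort can be illustrated with a minimal sketch. This is not the thesis's actual implementation; the function name, the max-blend of the two cues, and the sample counts are all illustrative assumptions.

```python
import numpy as np

def sample_budget(task_map, csf_sensitivity, base_samples=64, min_samples=1):
    """Illustrative per-pixel ray budget (hypothetical, not the thesis's code).

    task_map:        array in [0, 1]; 1 marks pixels relevant to the viewer's task.
    csf_sensitivity: array in [0, 1]; predicted spatiotemporal contrast sensitivity.

    Pixels important to either cue keep the full budget; pixels unlikely to be
    attended or perceived are rendered with fewer rays.
    """
    # Either cue alone should be enough to preserve quality at a pixel.
    importance = np.maximum(task_map, csf_sensitivity)
    budget = min_samples + importance * (base_samples - min_samples)
    return np.round(budget).astype(int)

# Toy 2x2 frame: only the top-left pixel is task-relevant.
task = np.array([[1.0, 0.0], [0.0, 0.0]])
csf = np.array([[0.2, 0.4], [0.0, 0.0]])
print(sample_budget(task, csf))
```

Under this toy allocation, the task-relevant pixel receives the full 64 samples while unattended, low-sensitivity pixels drop to the minimum, which is the kind of selective effort reduction the framework exploits.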