The quality of the graphics displayed in motion simulators can play a significant role in improving a user's training experience in such devices. However, computing high-fidelity graphics with traditional rendering approaches takes a substantial amount of time, precluding their use in such an interactive environment. This paper investigates how the human visual system's handling of motion can be exploited to drive a selective rendering system. Such a selective renderer computes perceptually important parts of a scene in high quality and the remainder of the scene at a lower quality, and thus at a much reduced computational cost, without the user being aware of this quality difference. In this study we concentrate on translational motion and show that, even for this less dramatic form of motion, a viewer's perception of a scene can be significantly affected.
A study was conducted involving 120 subjects across 8 conditions. An additional `button-press' study of 26 subjects was also carried out. The results of both studies show that viewers could not notice a decrease in rendering quality when subjected to motion.