Virtual Reality (VR) can be a powerful tool for the reconstruction of archaeological sites. Accurate reconstructions of sites that are no longer intact can provide insights into the lives and civilisations of earlier cultures. However, because VR applications must render the scene in real time, delivering many frames of animation and large quantities of audio to the user every second, little processing power remains for computing physically accurate sound and lighting. In this paper we present a method for pre-rendering the lighting and sound information and then presenting the results to the user in real time. These results, generated using a physically accurate modelling technique, provide many additional visual and auditory cues, improving the user's sense of presence within the environment and the perceived realism of the reconstruction.
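The bake-then-play-back idea above can be sketched in miniature. The following is a hypothetical illustration, not the paper's implementation: an expensive offline pass fills a lightmap (here a toy direct-lighting loop standing in for a physically accurate global-illumination and acoustics solver), after which the real-time loop only performs cheap lookups. The function names, the inverse-square falloff, and the point-light tuples are all assumptions made for the sketch.

```python
def precompute_lightmap(width, height, lights):
    # Offline pass: accumulate inverse-square falloff from each point
    # light at every texel. In a real system this would be the slow,
    # physically accurate pre-render; here it is a toy stand-in that
    # only illustrates the bake-then-look-up split.
    lightmap = [[0.0] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            for lx, ly, intensity in lights:
                d2 = (x - lx) ** 2 + (y - ly) ** 2 + 1.0  # +1 avoids divide-by-zero
                lightmap[y][x] += intensity / d2
    return lightmap

def sample_lightmap(lightmap, u, v):
    # Real-time pass: a constant-time nearest-texel lookup into the
    # baked map, with a cost independent of how expensive the offline
    # bake was.
    h, w = len(lightmap), len(lightmap[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return lightmap[y][x]

# Bake once, offline...
baked = precompute_lightmap(64, 64, [(10, 10, 100.0), (50, 40, 50.0)])
# ...then sample cheaply every frame at interactive rates.
brightness = sample_lightmap(baked, 0.5, 0.5)
```

The same split applies to the audio path: sound propagation is solved offline per listener region, and the stored impulse responses are merely played back during the interactive walkthrough.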