The advanced intonation model for speech synthesis described here has a three-level architecture. An initial abstract characterisation, designed to represent intonation at the level of cognitive percept, is rewritten to an intermediate representation which, though speaker-independent, accurately reflects physical pitch contours. At this stage the contours lack the variability we associate with natural speech. This representation is then further rewritten to produce an actual physical contour (now including variability and other 'natural' phenomena such as micro-intonation). One or two examples are given for stages one and two of the process, and some indication is given of how we tackle stage three.
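The three-level architecture can be pictured as a pipeline of two rewriting steps. The sketch below is purely illustrative: the function names, accent labels, and F0 target values are hypothetical stand-ins, not the model's actual rule set, and the 'variability' and 'micro-intonation' terms are crude toy perturbations.

```python
import random

def abstract_to_intermediate(marks):
    """Stage 1 -> 2: rewrite abstract, cognitive-level accent marks into an
    idealised, speaker-independent pitch contour (no natural variability yet).

    `marks` is a list of (time_s, accent_label) pairs; each label maps to a
    target F0 in Hz. Labels and values are toy examples, not the model's."""
    targets = {"H*": 180.0, "L*": 110.0, "H%": 200.0, "L%": 90.0}
    return [(t, targets[label]) for t, label in marks]

def intermediate_to_physical(contour, jitter_hz=3.0, seed=0):
    """Stage 2 -> 3: rewrite the idealised contour into a physical contour by
    adding variability (random jitter) and a crude micro-intonation term."""
    rng = random.Random(seed)
    physical = []
    for t, f0 in contour:
        micro = 2.0 if int(t * 10) % 2 else -2.0  # toy segmental perturbation
        physical.append((t, f0 + rng.uniform(-jitter_hz, jitter_hz) + micro))
    return physical

# Usage: an abstract characterisation of a short utterance is rewritten
# twice, yielding first the idealised and then the physical contour.
marks = [(0.0, "H*"), (0.4, "L*"), (0.8, "L%")]
ideal = abstract_to_intermediate(marks)
phys = intermediate_to_physical(ideal)
```

The point of the separation is that the intermediate contour is speaker-independent and deterministic, while all the 'natural' phenomena are confined to the final rewriting step.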