As the gap between processor and memory speeds has widened, cache memory has become an essential component in allowing a processor to operate at peak performance. However, conventional cache hardware takes no account of the characteristics of specific application programs and may incur a performance penalty by optimising for the average case. Such cache systems scale badly as reference complexity increases and often behave non-deterministically. This problem is further exacerbated as hardware devices are added to produce incremental increases in performance. Predicting the behaviour of these caches is a difficult problem, which can lead to their abandonment in some application areas.
We propose, and demonstrate an implementation of, a novel approach to cache architecture, called a partitioned cache. This system exposes the cache to the programmer through specialised access and management instructions and enables its segregation into protected sub-regions. By using automated, compiler-based algorithms to analyse application source code and configure the cache, references to different data objects and instruction streams can be protected from each other to prevent interference. In addition to studying the application of this cache system in the context of C-style languages, we investigate the differing demands presented by object-oriented languages such as Java. This work is further extended by an investigation into adaptive hardware that controls the cache where compiler- or programmer-based configuration cannot be used.
Although this approach requires alterations to the instruction set and architecture of the host processor, it can yield significant benefits in performance, device size and determinism. These features are especially useful in the field of real-time programming, where cache non-determinism is a limiting factor. Additionally, the small size, and the resulting low thermal and power profiles, of the cache are valuable in embedded devices such as smart cards and PDA systems.