Multithreaded architectures have been developed as a way to hide the latencies of memory access, communication, and long pipelines. Caches, in turn, have been developed to hide memory latencies and to reduce memory bandwidth requirements. Caches do not work well in multithreaded environments, however, because threads unintentionally evict each other's data and instructions. To enable effective use of caches in a multithreaded environment (giving high execution speed even in the presence of high memory latencies), we propose a cache architecture in which the cache can be divided into partitions. Each thread is assigned a set of partitions, which are used to cache a view of its data structures or part of its instruction stream. Partition assignment is fully automated in the compiler. With our compiler and architecture, all forms of inter-thread cache interference are eliminated, and predictable execution of multithreaded programs is achieved with moderately sized caches.
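The interference problem and the effect of partitioning can be illustrated with a minimal sketch (an illustrative model only, not the proposed architecture; all names and parameters are hypothetical): a direct-mapped cache is simulated twice over the same interleaved two-thread access trace, once shared and once with each thread confined to its own half of the cache lines.

```python
NUM_LINES = 8  # total cache lines in the toy model

def simulate(accesses, partition_of=None):
    """Count misses for an interleaved access trace.

    accesses: list of (thread_id, address) pairs.
    partition_of: optional map thread_id -> (base_line, num_lines);
    when None, the whole cache is shared by all threads.
    """
    cache = {}   # line index -> (thread_id, address) currently resident
    misses = 0
    for tid, addr in accesses:
        if partition_of is None:
            line = addr % NUM_LINES        # shared: index over all lines
        else:
            base, size = partition_of[tid]
            line = base + addr % size      # partitioned: index within own partition
        if cache.get(line) != (tid, addr):
            misses += 1                    # miss: fill the line
            cache[line] = (tid, addr)
    return misses

# Two threads each repeatedly touch one address; both addresses map to
# line 0 of the shared cache, so the threads evict each other every time.
trace = [(0, 0), (1, 8)] * 10

shared_misses = simulate(trace)
partitioned_misses = simulate(trace, partition_of={0: (0, 4), 1: (4, 4)})
print(shared_misses, partitioned_misses)  # prints "20 2"
```

In the shared configuration every access misses (the threads thrash line 0), while with per-thread partitions each thread misses only on its first touch; this is the cross-thread eviction behavior the proposed partitioned cache is designed to eliminate.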