Hardware-Software Trade-Offs in a Direct Rambus Implementation of the RAMpage Memory Hierarchy

Proc. ASPLOS-VIII Eighth International Conference on Architectural Support for Programming Languages and Operating Systems, San Jose, October 1998, pp. 105-114.

Copyright ACM 1998. Copying for personal and educational use is acceptable. See ACM's copyright policy for full information.

Philip Machanick, Pierre Salverda and Lance Pompe


The RAMpage memory hierarchy is an alternative to the traditional division between cache and main memory: main memory is moved up a level into SRAM, and DRAM is used as a paging device. The idea behind RAMpage is to reduce hardware complexity, even at the cost of software complexity, with a view to allowing more flexible memory system design. This paper investigates some issues in choosing between RAMpage and a conventional cache architecture, to illustrate the trade-offs that can be made in placing memory system complexity in hardware or in software. Performance results in this paper are based on a simple Rambus DRAM model with the performance characteristics of Direct Rambus, which should be available in 1999. The paper explores the conditions under which it becomes feasible to perform a context switch on a miss in the RAMpage model, and the conditions under which RAMpage is a win over a conventional cache architecture: as the CPU-DRAM speed gap grows, RAMpage becomes more viable.
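
The feasibility condition for switching on a miss can be sketched with a back-of-the-envelope calculation. The following C program is illustrative only and is not taken from the paper: the cycle counts and the simple two-switch overhead model are assumptions. It shows the break-even point the paper explores, namely that a context switch on a miss only pays off when the switch overhead is small relative to the DRAM miss service time, which is why a growing CPU-DRAM speed gap favours RAMpage.

/* Illustrative sketch only (not from the paper): break-even condition for
   taking a context switch on a miss from SRAM main memory to DRAM in the
   RAMpage model. All cycle counts are hypothetical placeholders; the
   paper's simulations measure these, this merely shows the trade-off. */
#include <stdio.h>

int main(void)
{
    double dram_service_cycles   = 2000.0; /* assumed time to service a miss from DRAM */
    double context_switch_cycles = 500.0;  /* assumed cost of one switch to another process */

    /* Simplification: switching away and later back costs two switches;
       switching pays off only when that overhead is below the stall it hides.
       As the CPU-DRAM speed gap grows, dram_service_cycles grows, so
       switching on a miss becomes more attractive. */
    if (2.0 * context_switch_cycles < dram_service_cycles)
        printf("Switch on miss: up to %.0f cycles of stall can be hidden\n",
               dram_service_cycles - 2.0 * context_switch_cycles);
    else
        printf("Spin on miss: switch overhead exceeds the stall it would hide\n");
    return 0;
}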
