The RAMpage memory hierarchy addresses the growing concern about the memory wall: the possibility that the CPU-DRAM speed gap will ultimately limit the benefits of rapid improvement in CPU speed. As CPU speed improves relative to DRAM, reducing references to DRAM becomes an increasingly desirable goal. As the cost of a DRAM reference grows, it makes sense to consider options such as pinning crucial parts of the operating system in at least the lowest-level cache, and taking context switches on references to DRAM. These factors combine to make it attractive to treat DRAM as a paging device, while moving the main memory up a level to the lowest level of SRAM. The RAMpage hierarchy accordingly relegates DRAM to the role of a first-level paging device. Results presented here are for a preliminary simulation of the RAMpage hierarchy, and show that, if current memory system and CPU trends continue, the RAMpage strategy will become increasingly viable. Even with current miss costs, and without implementing all features favourable to the RAMpage hierarchy, simulations show run times up to 25% faster than those of a conventional hierarchy. Furthermore, RAMpage scales better than the conventional hierarchy as simulated, in that its performance degrades less as DRAM reference costs increase.
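The scaling argument can be illustrated with the classic average-memory-access-time (AMAT) recurrence. The sketch below is not the simulator used for the results above; all latencies and miss rates in it are invented assumptions chosen only to show the qualitative effect: if DRAM sits behind a paging-style level with a much lower fault rate, the expected cost of a reference grows more slowly as DRAM latency increases.

```python
def amat(hit_time, miss_rate, miss_penalty):
    """Classic AMAT recurrence: hit cost plus expected miss cost."""
    return hit_time + miss_rate * miss_penalty

# All parameters below (latencies in cycles, miss/fault rates) are
# hypothetical values for illustration, not figures from the paper.

def conventional(dram_cost):
    # L1 -> L2 (SRAM) -> DRAM: every L2 miss pays the DRAM latency.
    l2 = amat(10, 0.02, dram_cost)
    return amat(1, 0.05, l2)

def rampage_style(dram_cost):
    # Main memory in SRAM at the L2 level, with DRAM as a paging
    # device: only the (rarer) page faults pay the DRAM latency.
    sram_main = amat(10, 0.002, dram_cost)
    return amat(1, 0.05, sram_main)

if __name__ == "__main__":
    for dram_cost in (100, 500, 1000):
        c = conventional(dram_cost)
        r = rampage_style(dram_cost)
        print(f"DRAM cost {dram_cost:4d} cycles: "
              f"conventional AMAT {c:5.2f}, RAMpage-style AMAT {r:5.2f}")
```

Under these assumed parameters, both hierarchies slow down as the DRAM reference cost rises, but the RAMpage-style configuration degrades less steeply, which is the qualitative behaviour the simulations report.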