Last modified: Tue Feb 24 22:21:34 CST
We will study 5.1-5.2 closely, as well as the 1st, 2nd, and 7th techniques in 5.3 (pp. 390-397 and 405-411). We'll skip 5.4-5.6, and study 5.7 closely. Skim 5.8, and use 5.9-5.12, particularly Figure 5.53, to cement your understanding of the material in 5.1-5.3 and 5.7. You can read 5.13 for fun. We'll emphasize cache a bit more than virtual memory, since we also study VM in the Operating Systems course.
I will present some varying points of view that are missing or not emphasized in the text.
Here is a schematic and simplified, rather than temporally precise, sketch of the historical development of the memory hierarchy. It is important to understand the history, because the terminology and even to some extent the organizational structure are the result of a sequence of steps, each taken without realizing what would happen later.
Historically, the memory hierarchy grew out of a ``physical'' array of random-access storage called ``main memory.'' In the early days, main memory was also called ``core memory,'' because it was built with little magnetic donuts, called ``cores.'' Now, it is often called ``RAM'' (Random Access Memory), although other memory elements also have random access without bearing that name.

The word ``physical'' is peculiar in computer systems and architecture. In general, it just refers to something that's implemented at a lower level than something else, which is called ``virtual.'' But, there's really no fixed boundary between the physical and the virtual. In the case of memory, ``physical'' usually means that the geometrical location of a bit in the storage medium is determined by its address. Notice that this sort of physicalness is a property of the addressing method, rather than the storage medium.
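To make the last point concrete, here is a small illustrative sketch (my own, not from the text): in a ``physical'' addressing scheme, the address alone determines where a bit lives in the storage medium. The 16x16 array dimensions below are hypothetical, chosen just to keep the example small.

```python
# Toy model of physical addressing: the high address bits select a row and
# the low bits select a column of a fixed 16x16 storage array, so the
# geometric location of a bit is a pure function of its address.

ROWS, COLS = 16, 16  # hypothetical array dimensions

def physical_location(address):
    """Map a linear address to fixed (row, column) coordinates."""
    assert 0 <= address < ROWS * COLS
    return address // COLS, address % COLS  # geometry determined by address

# Every access to address 37 touches the same spot in the medium:
print(physical_location(37))  # (2, 5)
```

Note that the mapping lives in the address decoder, not in the storage cells themselves, which is exactly the sense in which physicalness is a property of the addressing method rather than the medium.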
Architecture, OS, and compiler people almost always view main memory as the object manipulated by programs (database people focus on disks). But, there was essentially always a hierarchy growing in two directions: