Red Hat Linux 8.0: The Official Red Hat Linux System Administration Primer
Chapter 4. Physical and Virtual Memory
First, we should introduce a new concept: virtual address space. As the term implies, the virtual address space is the program's address space — how much memory the program would require if it needed all the memory at once. But there is an important distinction; the word "virtual" means that this is the total number of uniquely-addressable memory locations required by the application, and not the amount of physical memory that must be dedicated to the application at any given time.
In the case of our example application, its virtual address space is 15000 bytes.
In order to implement virtual memory, it is necessary for the computer system to have special memory management hardware. This hardware is often known as an MMU (Memory Management Unit). Without an MMU, when the CPU accesses RAM, the actual RAM locations never change — memory address 123 is always the same physical location within RAM.
However, with an MMU, memory addresses go through a translation step prior to each memory access. For example, this means that memory address 123 might be directed to physical address 82043. As it turns out, the overhead of individually tracking the virtual to physical translations for billions of bytes of memory would be too much. Instead, the MMU divides RAM into pages — contiguous sections of memory that are handled by the MMU as single entities.
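The translation step can be sketched in a few lines. The page table contents and the 4096-byte page size below are illustrative assumptions, not a real MMU's state:

```python
# Toy MMU translation: split a virtual address into a page number and an
# offset, look the page up in a (hypothetical) page table, then rebuild
# the physical address from the physical frame number.
PAGE_SIZE = 4096
page_table = {0: 20}                 # virtual page 0 -> physical frame 20

def translate(vaddr):
    page, offset = divmod(vaddr, PAGE_SIZE)
    frame = page_table[page]         # a missing entry would mean a page fault
    return frame * PAGE_SIZE + offset

print(translate(123))                # 20 * 4096 + 123 = 82043, as in the text
```

Note how every address within virtual page 0 lands somewhere inside physical frame 20; the MMU tracks whole pages, never individual bytes.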
Tip: Each operating system has its own page size; in Linux (for the x86 architecture), each page is 4096 bytes long.
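You can confirm the page size on a running system; the Python standard library exposes the same value that the shell command getconf PAGE_SIZE reports:

```python
# Query the kernel's page size (typically 4096 on x86 Linux; other
# architectures may use a different size)
import mmap
import os

print(mmap.PAGESIZE)                 # page size as seen by the mmap module
print(os.sysconf("SC_PAGE_SIZE"))    # the same value via POSIX sysconf
```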
Keeping track of these pages and their address translations might sound like an unnecessary and confusing additional step, but it is, in fact, crucial to implementing virtual memory. To see why, consider the following scenario.
Taking our hypothetical application with the 15000 byte virtual address space, assume that the application's first instruction accesses data stored at address 12374. However, also assume that our computer only has 12288 bytes of physical RAM. What happens when the CPU attempts to access address 12374?
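Using the 4096-byte page size from the tip above, the arithmetic of this example works out as follows (all numbers are taken from the text):

```python
# Which page does the access land in, and how many pages of physical RAM
# are there to hold it? (4096-byte pages assumed, per the tip above.)
PAGE_SIZE = 4096
addr = 12374                           # address the CPU tries to access
ram_bytes = 12288                      # total physical RAM in the example

page_number = addr // PAGE_SIZE        # page containing the address
offset = addr % PAGE_SIZE              # position within that page
ram_pages = ram_bytes // PAGE_SIZE     # physical pages available

print(page_number, offset, ram_pages)  # -> 3 86 3
print(page_number >= ram_pages)        # -> True: in this simplified picture,
                                       #    RAM holds only pages 0, 1, and 2
```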
What happens is known as a page fault. Next, let us examine what takes place during one.
First, the CPU presents the desired address (12374) to the MMU. However, the MMU has no translation for this address. So, it interrupts the CPU, and causes software known as a page fault handler to be executed. The page fault handler then determines what must be done to resolve this page fault. It can:
Find where the desired page resides on disk and read it in (this is normally the case if the page fault is for a page of code)
Determine that the desired page is already in RAM (but not allocated to the current process) and direct the MMU to point to it
Point to a special page containing nothing but zeros and later allocate a page only if the page is ever written to (this is called a copy-on-write page)
Get it from somewhere else (more on this later)
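The four options above can be sketched as a decision routine. Everything here (the dictionaries, the shared-page table, the frame strings) is a made-up illustration; a real kernel's page fault handler is far more involved:

```python
# A toy page fault handler mirroring the four options above.
# All structures and names are illustrative, not kernel APIs.
ZERO_PAGE = "zero-filled page"
shared_resident = {7: "frame 42"}      # pages already in RAM (another owner)

def handle_page_fault(proc, page):
    if page in proc["on_disk"]:        # 1. read the page in from disk
        frame = f"frame loaded from {proc['on_disk'][page]}"
    elif page in shared_resident:      # 2. already in RAM; just map it
        frame = shared_resident[page]
    elif page in proc["swapped"]:      # 4. get it from somewhere else: swap
        frame = f"frame loaded from swap slot {proc['swapped'][page]}"
    else:                              # 3. zero page, copy-on-write later
        frame = ZERO_PAGE
    proc["page_table"][page] = frame   # install the MMU translation
    return frame

proc = {"on_disk": {0: "/bin/app"}, "swapped": {5: 12}, "page_table": {}}
handle_page_fault(proc, 0)             # faulting in a page of code
handle_page_fault(proc, 3)             # first touch of an unused page
print(proc["page_table"])
```

The order of the checks is illustrative; what matters is that each path ends by installing a translation so the faulting instruction can be restarted.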
While the first three actions are relatively straightforward, the last one is not. For that, we need to cover some additional topics.
The group of physical memory pages currently dedicated to a specific process is known as the working set for that process. The number of pages in the working set can grow and shrink, depending on the overall availability of pages on a system-wide basis.
The Linux kernel keeps a list of all the pages that are actively being used. This list is known as the active list. As pages become less actively used, they eventually move to another list known as the inactive list.
The working set will grow as a process page faults (as those faults are handled, the newly-resident pages are added to the active list). The working set will shrink as fewer and fewer free pages exist; pages on the inactive list are removed from the process's working set. The operating system will shrink processes' working sets by:
Writing modified pages to the system swap space and putting the page in the swap cache[1]
Marking unmodified pages as being available (there is no need to write these pages out to disk as they have not changed)
In other words, the Linux memory management subsystem selects the least-recently used pages (via the inactive list) to be removed from process working sets.
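The interplay between the two lists can be modeled with a small sketch. The page names, the dirty set, and the one-page-at-a-time moves are illustrative simplifications of the real kernel lists:

```python
from collections import deque

active = deque(["A", "B", "C"])   # most recently used page on the left
inactive = deque()                # aging pages, candidates for reclaim
dirty = {"B"}                     # pages modified since they were read in

def age_one_page():
    """Move the least recently used active page onto the inactive list."""
    inactive.append(active.pop())

def reclaim_one_page():
    """Free the oldest inactive page, swapping it out first if modified."""
    page = inactive.popleft()
    if page in dirty:
        print(f"writing {page} to swap")  # modified pages must be written out
    return page                           # clean pages are simply marked free

age_one_page()                    # "C" ages onto the inactive list
print(reclaim_one_page())         # "C" is clean, so it is freed immediately
age_one_page()                    # "B" ages onto the inactive list
print(reclaim_one_page())         # "B" is dirty, so it is written out first
```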
While swapping (writing modified pages out to the system swap space) is a normal part of a Red Hat Linux system's operation, it is possible for a system to experience too much swapping. The reason to be wary of excessive swapping is that the following situation can easily occur, over and over again:
Pages from a process are swapped
The process becomes runnable and attempts to access a swapped page
The page is faulted back into memory
A short time later, the page is swapped out again
If this sequence of events is widespread, it is known as thrashing and is normally indicative of insufficient RAM for the present workload. Thrashing is extremely detrimental to system performance, as the CPU and I/O loads that can be generated in such a situation can quickly outweigh the load imposed by the system's real work. In extreme cases, the system may actually do no useful work, spending all its resources moving pages to and from swap.
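One way to spot excessive swapping is to watch the pswpin and pswpout counters (pages swapped in and out since boot) that newer Linux kernels expose in /proc/vmstat; counters that climb rapidly between samples are a classic sign of thrashing. A minimal parser sketch, with sample text made up for illustration:

```python
# Pull the swap-in/swap-out page counters out of /proc/vmstat-style text.
# Rapid growth of these counters between samples suggests thrashing.
def parse_swap_counters(vmstat_text):
    counters = {}
    for line in vmstat_text.splitlines():
        key, _, value = line.partition(" ")
        if key in ("pswpin", "pswpout"):
            counters[key] = int(value)
    return counters

# Sample text standing in for the contents of /proc/vmstat
sample = "nr_free_pages 1000\npswpin 52\npswpout 4800\n"
print(parse_swap_counters(sample))   # {'pswpin': 52, 'pswpout': 4800}
```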
[1] Under Red Hat Linux the system swap space is normally a dedicated swap partition, though swap files can also be configured and used.