Week 4 (7/16-7/22)
This week's content was very interesting; the labs were particularly helpful in simulating the methods. I learned several memory management methods used by operating systems: paging, Translation Lookaside Buffers (TLBs), multi-level paging, and swapping.
Paging splits virtual memory into fixed-size blocks called pages. It avoids external fragmentation and simplifies memory allocation, since any free physical frame can hold any page. Each virtual address has a virtual page number (VPN) and an offset, and the page table translates the VPN into a physical frame number (PFN). The downside is that every memory access now needs an extra page-table lookup, and a linear page table has one entry per virtual page, which can add up to a lot of memory per process.
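To make the VPN/offset split concrete, here is a minimal Python sketch of a linear page-table lookup. The 32-bit address space, 4 KiB pages, and the toy page table are just assumptions for illustration, not anything from the labs.

```python
# Minimal sketch of linear page-table translation.
# Assumes 4 KiB pages (12-bit offset); the "page table" is just a
# Python list indexed by VPN, with None meaning "not mapped".

PAGE_SIZE = 4096            # 4 KiB pages
OFFSET_BITS = 12            # log2(PAGE_SIZE)

def translate(vaddr, page_table):
    vpn = vaddr >> OFFSET_BITS          # virtual page number
    offset = vaddr & (PAGE_SIZE - 1)    # byte offset within the page
    pfn = page_table[vpn]               # physical frame number
    if pfn is None:
        raise Exception("fault: page not mapped")
    return (pfn << OFFSET_BITS) | offset

# Toy page table: VPN 0 -> PFN 3, VPN 1 -> PFN 7, everything else unmapped.
page_table = [3, 7] + [None] * 1022
print(hex(translate(0x0004, page_table)))   # -> 0x3004
print(hex(translate(0x1ABC, page_table)))   # -> 0x7abc
```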
To speed things up, a Translation Lookaside Buffer (TLB) caches recent VPN-to-PFN translations. This works because programs tend to reuse the same memory areas over and over, a phenomenon known as locality. On a TLB hit the translation is served straight from the cache and the page table isn't touched at all, so the average memory access time drops significantly; only a miss has to fall back to the page-table lookup.
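Here is a rough sketch of that idea, where a small dictionary stands in for the hardware TLB and repeated accesses to the same page turn misses into hits. The TLB size, the FIFO eviction when it fills, and the toy access trace are all just assumptions for illustration.

```python
# Sketch of a TLB in front of the page table, same 4 KiB-page layout as above.
from collections import OrderedDict

OFFSET_BITS = 12
TLB_SIZE = 4

tlb = OrderedDict()          # VPN -> PFN
hits = misses = 0

def translate_with_tlb(vaddr, page_table):
    global hits, misses
    vpn = vaddr >> OFFSET_BITS
    offset = vaddr & ((1 << OFFSET_BITS) - 1)
    if vpn in tlb:                     # TLB hit: skip the page table entirely
        hits += 1
        pfn = tlb[vpn]
    else:                              # TLB miss: walk the page table, then cache
        misses += 1
        pfn = page_table[vpn]
        if len(tlb) >= TLB_SIZE:
            tlb.popitem(last=False)    # evict the oldest cached translation
        tlb[vpn] = pfn
    return (pfn << OFFSET_BITS) | offset

# Repeated accesses to the same pages (locality) turn misses into hits.
page_table = {0: 3, 1: 7}
for addr in [0x0000, 0x0004, 0x0008, 0x1000, 0x1004]:
    translate_with_tlb(addr, page_table)
print(f"hits={hits} misses={misses}")   # hits=3 misses=2
```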
Multi-level paging solves the issue of large page tables. Instead of one big linear table, the page table is broken into page-sized chunks, and a page directory records which chunks actually exist; chunks for unmapped regions of the address space are never allocated. This saves a lot of memory for sparse address spaces, at the cost of an extra level of lookup when the TLB misses.
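A quick sketch of a two-level lookup, assuming a 32-bit address split as 10-bit directory index, 10-bit table index, and 12-bit offset; the page directory and its single inner table below are made up for illustration.

```python
# Sketch of two-level address translation.
# Directory slots for unused regions stay None, so those inner tables
# are never allocated.

OFFSET_BITS = 12
INDEX_BITS = 10

def translate_two_level(vaddr, page_directory):
    pd_index = vaddr >> (OFFSET_BITS + INDEX_BITS)
    pt_index = (vaddr >> OFFSET_BITS) & ((1 << INDEX_BITS) - 1)
    offset = vaddr & ((1 << OFFSET_BITS) - 1)

    page_table = page_directory[pd_index]     # first level: directory entry
    if page_table is None:
        raise Exception("fault: no page table for this region")
    pfn = page_table[pt_index]                # second level: actual mapping
    if pfn is None:
        raise Exception("fault: page not mapped")
    return (pfn << OFFSET_BITS) | offset

# Only one inner table is allocated; the other 1023 directory slots cost nothing.
inner = [None] * 1024
inner[5] = 42                                 # VPN 5 -> PFN 42
page_directory = [inner] + [None] * 1023
print(hex(translate_two_level(0x5ABC, page_directory)))   # -> 0x2aabc
```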
Swapping happens when there isn't enough physical memory (RAM). Less-used pages are temporarily moved from RAM to disk, and when a needed page isn't in memory a page fault occurs, prompting the OS to bring the page back from disk. A replacement policy such as Least Recently Used (LRU), First-In-First-Out (FIFO), Random, or Optimal (Belady's policy) decides which page to evict. I personally struggled more with Belady's policy, which evicts the page whose next use is furthest in the future and therefore requires looking ahead in the reference string, so I had to practice that specific method a bit more to get it right. These methods help manage memory efficiently, balancing speed and resource usage.
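Below is a small Python sketch that compares FIFO, LRU, and Belady's Optimal on one reference string by counting page faults; the trace and the three-frame memory are placeholders for illustration, not data from the labs.

```python
# Count page faults for FIFO, LRU, and Belady's Optimal (OPT)
# over a fixed-size set of physical frames.

def count_faults(trace, frames, policy):
    memory, faults = [], 0
    for i, page in enumerate(trace):
        if page in memory:
            if policy == "LRU":                 # refresh recency on a hit
                memory.remove(page)
                memory.append(page)
            continue
        faults += 1
        if len(memory) < frames:                # free frame available
            memory.append(page)
            continue
        if policy in ("FIFO", "LRU"):
            victim = memory[0]                  # oldest arrival / least recently used
        else:                                   # OPT (Belady): evict the page whose
            future = trace[i + 1:]              # next use is furthest in the future
            victim = max(memory, key=lambda p:
                         future.index(p) if p in future else float("inf"))
        memory.remove(victim)
        memory.append(page)
    return faults

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
for policy in ("FIFO", "LRU", "OPT"):
    print(policy, count_faults(trace, frames=3, policy=policy))
```

Tracing a reference string by hand and checking the result against a little simulator like this is a good way to verify the look-ahead step in Belady's policy.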