Tuesday, July 29, 2025

Week 5

 Week 5 (7/23-7/29)

This week, I learned in OS that concurrency means letting different parts of a program make progress “together.” I also learned that threads are lightweight units of execution inside a process; they share the process's memory, but each has its own stack and registers. For example, two threads updating a shared counter can cause problems if they both execute counter = counter + 1 at the same time.

I also learned that a race condition happens when threads access shared data without coordination, and the result depends on timing. I learned that a critical section is the code that touches shared data, and that a mutex lock ensures only one thread enters at a time. I learned that locks rely on atomic instructions like test‑and‑set to flip a flag in one step without interference.

I also learned the basics of the pthreads API: pthread_create starts a thread, pthread_join waits for it to finish, pthread_mutex_lock and pthread_mutex_unlock guard critical sections, and pthread_cond_wait and pthread_cond_signal let threads sleep and wake up based on conditions.

I also learned that I can make a simple counter thread-safe by putting a mutex inside a struct and locking it around the increment and get operations. Coarse-grained locks are easy to write but slow down all threads, while fine-grained locks let more threads work in parallel but are harder to code.

I finally learned that concurrency helps in real programs like browsers, where one thread loads images while I type in another. I learned that balancing safety and speed means choosing the right locking level for your data structures.

Monday, July 21, 2025

Week 4

 Week 4 (7/16-7/22)

This week's content was very interesting; the labs were particularly helpful in simulating the methods. I learned several memory management methods used by operating systems: paging, Translation Lookaside Buffers (TLBs), multi-level paging, and swapping.

Paging splits virtual memory into fixed-size blocks called pages. It avoids external fragmentation and simplifies memory allocation. Each virtual address has a virtual page number (VPN) and an offset. The page table translates VPNs into physical frame numbers (PFNs). However, every memory access now requires an extra page-table lookup, and the tables themselves can be large, which makes plain paging less efficient.

To speed things up, a Translation Lookaside Buffer (TLB) caches recent VPN to PFN translations. This cache exploits the fact that programs often reuse the same memory areas repeatedly, a phenomenon known as locality. Using a TLB significantly reduces the average memory access time.

Multi-level paging solves the issue of large page tables. It breaks the page table into smaller chunks, storing only necessary parts. A page directory points to these smaller page tables. While this saves memory space, it makes address translation slightly more complex.

Swapping happens when there's not enough physical memory (RAM). Less-used pages are temporarily moved from RAM to disk. When a needed page isn't in memory, a page fault occurs, prompting the OS to bring the page back from disk. Different policies, such as Least Recently Used (LRU), First-In-First-Out (FIFO), Random, or Optimal (Belady’s policy), determine which pages to swap out. I personally struggled more with Belady's policy, so I had to practice that specific method a bit more to get it right. These methods help manage memory efficiently, balancing speed and resource usage.

Saturday, July 12, 2025

Week 3

 Week 3 (7/9-7/15)

What did I learn this week in CST 334?

I learned several concepts this week. The book and lectures teach how computers use virtual memory to let each program act like it has its own private memory, even though all programs share the same physical RAM. This is done through a system of address translation, where virtual addresses (used by programs) are converted into physical addresses (used by the hardware). One method is base-and-bounds, where each process is assigned a contiguous block of memory, and hardware checks ensure it stays within its limits. A more flexible method is segmentation, which breaks memory into parts like code, stack, and heap, each with its own base and size, allowing for better protection and organization.

The book, lecture videos, and assignment for this week include simulations showing how virtual addresses are split and mapped to physical memory, making concepts like translation clearer. They also explain address spaces, which are the ranges of addresses a process can use. Each process has its own address space to prevent interference.

To manage memory efficiently, the operating system uses free space management techniques. These include first fit (choosing the first block big enough), best fit (choosing the smallest block that fits), and worst fit (choosing the largest block to avoid leaving tiny gaps). These strategies help reduce wasted memory and limit fragmentation, which happens when free memory is split into unusable pieces.

The chapters and assignments also discuss how malloc() and free() in C are used to request and release memory manually, and the risks of doing so incorrectly, such as memory leaks. Overall, this week's content builds a strong foundation for understanding how the OS manages memory safely, efficiently, and fairly among processes.


Monday, July 7, 2025

Week 2

 Week 2 (7/2-7/8)

What did I learn this week in CST 334:

The major topics I learned this week in OS are processes, limited direct execution, CPU scheduling, MLFQ, and the C process API. I learned that limited direct execution is used to ensure efficient multitasking. It lets user programs run directly on the CPU, but in a safe way using user mode. The OS uses traps and interrupts to regain control when needed, such as during I/O or when a timer expires. This allows the OS to safely switch between processes, an operation known as context switching.

I also learned that processes are created and managed by the OS using tools like fork() (to create a new process), exec() (to replace the process code), and wait() (to wait for a child process to finish). Each process has its own state and memory.

I also studied different ways the OS schedules which process runs next. CPU scheduling policies determine the order in which processes are executed. First-Come, First-Served (FCFS) is simple but can lead to poor performance. Shortest Job First (SJF) minimizes average turnaround time if job lengths are known. Shortest Time to Completion First (STCF) is a preemptive version of SJF. Round Robin (RR) assigns each process a short time slice in turn, improving response time but potentially increasing turnaround time.

Finally, I learned about Multi-Level Feedback Queue (MLFQ), which adjusts process priorities over time. Short or interactive jobs stay at a higher priority, while longer CPU-bound jobs move lower. To prevent starvation, MLFQ boosts all jobs back to high priority from time to time.

Week 8

 Week 8 (8/13-8/15) This week I want to make a reflection of all the topics and challenges faced throughout these 8 weeks, since in my last ...