Sunday, January 25, 2026

Week 3

 Week 3 (1/21-1/27)

This week's content was very interesting to me. Brute force, the Traveling Salesman Problem (TSP), the Knapsack Problem, divide and conquer, the Master Theorem, and the depth-first search (DFS) and breadth-first search (BFS) algorithms were the main concepts for this week. I personally found DFS and BFS the most interesting, not just because of how their traversals unfold on a graph but also because of the logic behind them and how they behave in different situations. DFS goes as deep as it can from a vertex and backtracks when it gets stuck, while BFS visits all of a vertex's neighbors before moving deeper. TSP and the Master Theorem were both easy for me to understand because their time efficiency is very clear, especially in real-world cases.
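To make the contrast concrete for myself, here is a small C sketch of both traversals; the 5-vertex graph and the adjacency-matrix representation are my own example, not from the lecture.

#include <stdio.h>

#define N 5

int adj[N][N];       /* adjacency matrix */
int visited[N];

/* DFS: go as deep as possible from v, backtrack when stuck. */
void dfs(int v)
{
    visited[v] = 1;
    printf("%d ", v);
    for (int w = 0; w < N; w++)
        if (adj[v][w] && !visited[w])
            dfs(w);                      /* the recursion does the backtracking */
}

/* BFS: visit all neighbors of a vertex before going any deeper. */
void bfs(int start)
{
    int queue[N], head = 0, tail = 0;
    int seen[N] = {0};
    seen[start] = 1;
    queue[tail++] = start;
    while (head < tail) {
        int v = queue[head++];
        printf("%d ", v);
        for (int w = 0; w < N; w++)
            if (adj[v][w] && !seen[w]) {
                seen[w] = 1;
                queue[tail++] = w;       /* enqueue neighbors level by level */
            }
    }
}

int main(void)
{
    /* edges of a small undirected example graph: 0-1, 0-2, 1-3, 2-4 */
    int edges[][2] = {{0,1},{0,2},{1,3},{2,4}};
    for (int i = 0; i < 4; i++) {
        adj[edges[i][0]][edges[i][1]] = 1;
        adj[edges[i][1]][edges[i][0]] = 1;
    }
    printf("DFS from 0: "); dfs(0); printf("\n");
    printf("BFS from 0: "); bfs(0); printf("\n");
    return 0;
}

From vertex 0, DFS prints 0 1 3 2 4 (it dives down the 0-1-3 branch before backtracking), while BFS prints 0 1 2 3 4 (level by level).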

I am actually looking forward to next week's content and to applying these algorithms in more advanced settings.

Sunday, January 18, 2026

Week 2

 Week 2 ( 1/14-1/20)

This week's content was very math-oriented. Out of all the topics (recursive and nonrecursive algorithms, use cases of Theta notation, recurrence relations and backward substitution, and the brute-force approach), the one I had the most trouble with was backward substitution. It is good practice to look at different examples to see this technique from different perspectives. I attended office hours, which helped me quite a lot. I compare this technique to proof by induction in math classes: writing out the steps to identify a pattern that eventually leads to the general formula, using the recurrence relation and the initial condition. Below is the example I was having trouble with, along with how each step helped me identify the general pattern for i.

M(n) = M(n-1) + 2 // recurrence relation


As an initial condition, we know that if the input number n is 1, there is no multiplication. We can express it as follows.

M(1) = 0 // initial condition


We can solve the recurrence relation using backward substitution as follows:

M(n) = M(n-1) + 2              // replace “M(n-1)” with “M(n-2) + 2”

     = [M(n-2) + 2] + 2

     = M(n-2) + 2 + 2          // replace “M(n-2)” with “M(n-3) + 2”

     = [M(n-3) + 2] + 2 + 2

     = M(n-3) + 2 + 2 + 2

     = M(n-i) + 2*i            // for a general number i, we get this pattern

...

     = M(1) + 2*(n-1)          // substitute i = n-1, so M(n-i) = M(1); the initial condition gives M(1) = 0

     = 0 + 2*(n-1)

     = 2*(n-1) ∈ Θ(n)
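To convince myself the closed form is right, I wrote a tiny C check that unrolls the recurrence directly and compares it with 2*(n-1); the loop bound of 10 is arbitrary.

#include <stdio.h>

int main(void)
{
    for (int n = 1; n <= 10; n++) {
        int m = 0;                      /* M(1) = 0 */
        for (int k = 2; k <= n; k++)
            m = m + 2;                  /* M(k) = M(k-1) + 2 */
        printf("n=%2d  recurrence=%2d  closed form=%2d\n", n, m, 2 * (n - 1));
    }
    return 0;
}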


Sunday, January 11, 2026

Week 1

 Week 1: Algorithms

This week, I learned about pseudocode, GCD, graphs, trees, and the algorithm analysis framework. The book has been a great resource for understanding the content before watching the lectures. What I had the most trouble with was the idea of the basic operation. I thought there was only one basic operation, the most frequently executed operation in the innermost loop, used to measure the efficiency of the algorithm, but then I learned that there is not always just one. Sometimes a few operations in the same innermost loop depend on each other, and together they determine the work needed for the loop to execute a full cycle.

As an example from the algorithm analysis framework, consider the pseudocode Display_2D_Stars:

1. Algorithm Display_2D_Stars (n)

2. // Input: A positive integer number n

3. // Output: Display the * symbol in two dimensions.

4. i ← 0             

5. while ( i < n ) do

6.     j ← 0

7.     while ( j < n ) do

8.         write '*'  

9.         j ← j + 1

10.     write '\n'

11.     i ← i + 1    

12. return 0

Lines 7, 8, and 9 are part of the same innermost loop; lines 8 and 9 only execute when the comparison on line 7 succeeds, and their cost differs little from that of the < comparison. Therefore, the operations on lines 7, 8, and 9 (<, write, and +) together make up the basic operation.
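As a sanity check, here is a plain C translation of the pseudocode (my own translation; putchar stands in for the write operation):

#include <stdio.h>

void display_2d_stars(int n)
{
    int i = 0;
    while (i < n) {          /* outer loop: one row per iteration */
        int j = 0;
        while (j < n) {      /* inner loop: lines 7-9 of the pseudocode */
            putchar('*');    /* write '*' */
            j = j + 1;
        }
        putchar('\n');       /* end of the row */
        i = i + 1;
    }
}

int main(void)
{
    display_2d_stars(4);     /* prints a 4 x 4 square of stars */
    return 0;
}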

Tuesday, August 12, 2025

Week 8

 Week 8 (8/13-8/15)

This week I want to reflect on all the topics and challenges faced throughout these 8 weeks, since in my last journal entry I already talked about this week's content: persistence. Before starting this class, I was apprehensive, especially when I learned we would be working with C. I had always heard that C is low-level, unforgiving, and complex compared to the higher-level languages I've used before. Since I have been leaning toward career paths like cybersecurity or data science, I was more comfortable with analysis than with raw systems programming.

One of the most challenging topics for me was CPU scheduling, particularly the Round Robin algorithm. Implementing time-slicing logic while maintaining correct queue order required a precise understanding of process states and context switching overhead. I iterated through several incorrect solutions before finally deriving a correct implementation after a teammate shared his approach via a recorded walkthrough.

Collaboration and teamwork were essential throughout the course. Each of my team members had strengths in different OS concepts, and knowledge sharing allowed us to fill each other’s gaps. For example, I created a video explaining FIFO, LRU, and Belady’s Anomaly in caching policies, which I had mastered after repeated simulation exercises. In return, a classmate helped me understand segmentation and memory protection mechanisms.

Persistence mechanisms were also challenging, particularly concepts like hard drive rotation and inode-based indexing. Understanding the relationship between rotational delay, seek time, and data transfer rates, along with inode-based metadata and pointer structures, required deep reading. I struggled partly due to limited study time while balancing my internship workload, where I was finalizing deliverables for my project.

For our final project, my team explored Triangulating Python Performance Issues with SCALENE. While I understood SCALENE’s ability to profile CPU, memory, and GPU usage simultaneously, the hardware test bench—an 8-core 4674 MHz AMD Ryzen 7 with 32 GB RAM and an NVIDIA GeForce RTX 2070 running Linux—required me to contextualize performance data in terms of underlying hardware capabilities.

Overall, this course strengthened my understanding of OS fundamentals—process scheduling, memory management, file systems, and performance analysis—while reinforcing the importance of collaborative problem-solving. Thanks to Dr. Ogden for making this class super interesting and for being one of the best professors I've had so far in the program.

Week 7

 Week 7 (8/6-8/12)

This week, we covered persistence. From I/O Devices, I learned that devices can be block (e.g., hard drives, storing fixed-size blocks with random access) or character (e.g., keyboards, handling byte streams). The OS interacts with them via registers—status, command, and data—using polling, interrupts, or Direct Memory Access (DMA). Device drivers isolate OS code from hardware specifics.
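As a rough sketch of what polling looks like in code, here is a made-up C fragment; the register layout, the BUSY bit, and the command value are hypothetical and only meant to illustrate the status/command/data pattern, not real hardware.

#include <stdint.h>

#define STATUS_BUSY 0x01u

/* Hypothetical memory-mapped device registers (status, command, data). */
struct device_regs {
    volatile uint32_t status;
    volatile uint32_t command;
    volatile uint32_t data;
};

/* Write one word to the device using polling (programmed I/O). */
static void device_write_word(struct device_regs *dev, uint32_t word)
{
    while (dev->status & STATUS_BUSY)
        ;                       /* poll: spin until the device is idle */
    dev->data = word;           /* place the data in the data register */
    dev->command = 1;           /* issue the hypothetical WRITE command */
    while (dev->status & STATUS_BUSY)
        ;                       /* wait for the device to finish */
}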

From Hard Drives, I discovered that performance depends on rotational delay, seek time, and transfer time. For example, reading 2 MB with a 4 ms rotational delay, 5 ms seek time, and 100 MB/s transfer rate takes 29 ms. Sequential workloads are much faster than random ones, and I/O scheduling (e.g., SSTF, elevator) reduces unnecessary head movement.
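The 29 ms figure is just the sum of the three components, which a few lines of C can confirm:

#include <stdio.h>

int main(void)
{
    double seek_ms       = 5.0;
    double rotation_ms   = 4.0;
    double size_mb       = 2.0;
    double rate_mb_per_s = 100.0;

    double transfer_ms = size_mb / rate_mb_per_s * 1000.0;   /* 20 ms */
    double total_ms    = seek_ms + rotation_ms + transfer_ms;

    printf("total I/O time = %.0f ms\n", total_ms);           /* prints 29 ms */
    return 0;
}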

From Files and Directories, I learned that files are linear byte arrays with metadata in inodes, while directories map names to inode numbers. Hard links point multiple names to the same inode; symbolic links are files containing paths to targets. Mounting attaches a filesystem to a directory in the system tree, unifying access.
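A small POSIX C example helped me see the difference between the two kinds of links; the file names are placeholders and error checking is omitted for brevity.

#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    int fd = open("original.txt", O_CREAT | O_WRONLY, 0644);
    if (fd >= 0) close(fd);

    link("original.txt", "hardlink.txt");      /* new name, same inode */
    symlink("original.txt", "symlink.txt");    /* new file that stores a path */

    struct stat a, b, c;
    stat("original.txt", &a);
    stat("hardlink.txt", &b);
    lstat("symlink.txt", &c);                  /* lstat: inspect the link itself */

    printf("original inode: %lu\n", (unsigned long)a.st_ino);
    printf("hardlink inode: %lu (same)\n", (unsigned long)b.st_ino);
    printf("symlink  inode: %lu (different)\n", (unsigned long)c.st_ino);
    return 0;
}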

From File System Implementation: Data, I saw that a filesystem uses disk blocks for data, inodes, bitmaps, and a superblock (global info). Inodes store file type, size, and pointers—direct, indirect, or double indirect—to data blocks, enabling large files. Directories are just files containing (name, inode) entries.
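To get a feel for how indirect pointers enable large files, here is a back-of-the-envelope calculation; the 4 KB block size, 4-byte pointers, and 12 direct pointers are assumptions for illustration, not necessarily the exact parameters from the reading.

#include <stdio.h>

int main(void)
{
    long block = 4096;                     /* bytes per data block (assumed) */
    long ptrs_per_block = block / 4;       /* 4-byte block pointers -> 1024 */
    long direct = 12;                      /* direct pointers in the inode  */

    long max_blocks = direct                            /* direct          */
                    + ptrs_per_block                    /* single indirect */
                    + ptrs_per_block * ptrs_per_block;  /* double indirect */

    printf("max file size ~= %.2f GB\n",
           (double)max_blocks * block / (1024.0 * 1024.0 * 1024.0));
    return 0;
}

With those assumptions the double-indirect level alone contributes about a million data blocks, which is why it dominates the total (roughly 4 GB).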

From File System Implementation: Access, I learned how to traverse the directory tree to access /foo/bar: starting at the root inode, reading directory blocks to find each component, then retrieving the file’s inode and data blocks. The VSFS simulator showed how operations like mkdir(), creat(), link(), and unlink() change inodes and bitmaps. For example, creating a hard link adds another directory entry to the same inode without duplicating data.

Monday, August 4, 2025

Week 6

 Week 6 (7/30-8/5)

In this week’s readings, PA5, lectures, and quizzes, I learned how condition variables and semaphores are used to manage synchronization in multi-threaded programs. As an example of this, I learned that a condition variable lets threads wait efficiently for some condition to become true, like waiting for a buffer to be non-empty before reading. The Anderson/Dahlin method showed how to design shared objects safely by first building basic object-oriented code and then adding one lock, condition variables, and wait/signal logic in a step-by-step way. This helped me understand how to write safer and more precise threading code, such as in a simple producer-consumer buffer.
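Here is a minimal sketch of that kind of bounded buffer, with one lock and two condition variables; the struct layout and names are my own, following the general pattern rather than the exact code from the course.

#include <pthread.h>

#define CAPACITY 8

typedef struct {
    int items[CAPACITY];
    int count, head, tail;
    pthread_mutex_t lock;
    pthread_cond_t not_full;    /* signaled when space becomes available   */
    pthread_cond_t not_empty;   /* signaled when an item becomes available */
} bounded_buffer;

void bb_init(bounded_buffer *b)
{
    b->count = b->head = b->tail = 0;
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->not_full, NULL);
    pthread_cond_init(&b->not_empty, NULL);
}

void bb_put(bounded_buffer *b, int item)
{
    pthread_mutex_lock(&b->lock);
    while (b->count == CAPACITY)             /* wait until there is room */
        pthread_cond_wait(&b->not_full, &b->lock);
    b->items[b->tail] = item;
    b->tail = (b->tail + 1) % CAPACITY;
    b->count++;
    pthread_cond_signal(&b->not_empty);      /* wake one waiting consumer */
    pthread_mutex_unlock(&b->lock);
}

int bb_get(bounded_buffer *b)
{
    pthread_mutex_lock(&b->lock);
    while (b->count == 0)                    /* wait until there is data */
        pthread_cond_wait(&b->not_empty, &b->lock);
    int item = b->items[b->head];
    b->head = (b->head + 1) % CAPACITY;
    b->count--;
    pthread_cond_signal(&b->not_full);       /* wake one waiting producer */
    pthread_mutex_unlock(&b->lock);
    return item;
}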

I also learned how semaphores can be used for the same types of problems, like enforcing mutual exclusion or synchronizing multiple threads at a barrier. Unlike condition variables, semaphores are more powerful but also more complex—they combine locking and signaling in one tool. For example, to solve the rendezvous problem (where threads must wait for each other), I saw how to use two semaphores to make sure both threads complete the first step before moving on. In the synchronization barrier problem, I learned that we can count how many threads have arrived and use a turnstile pattern (wait then signal) to let all threads through. These examples made abstract ideas more concrete for me.
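For the rendezvous problem specifically, a sketch with two semaphores (both starting at 0) looks roughly like this; the thread bodies are placeholders.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

sem_t a_arrived, b_arrived;   /* both initialized to 0 */

void *thread_a(void *arg)
{
    printf("A: step 1\n");
    sem_post(&a_arrived);     /* tell B that A finished step 1 */
    sem_wait(&b_arrived);     /* wait until B has finished step 1 */
    printf("A: step 2\n");
    return NULL;
}

void *thread_b(void *arg)
{
    printf("B: step 1\n");
    sem_post(&b_arrived);     /* tell A that B finished step 1 */
    sem_wait(&a_arrived);     /* wait until A has finished step 1 */
    printf("B: step 2\n");
    return NULL;
}

int main(void)
{
    pthread_t ta, tb;
    sem_init(&a_arrived, 0, 0);
    sem_init(&b_arrived, 0, 0);
    pthread_create(&ta, NULL, thread_a, NULL);
    pthread_create(&tb, NULL, thread_b, NULL);
    pthread_join(ta, NULL);
    pthread_join(tb, NULL);
    return 0;
}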

This week's concepts helped me understand the overall idea of concurrency, especially how to coordinate and manage access to shared resources safely and correctly, which is a key challenge in modern operating systems.

Tuesday, July 29, 2025

Week 5

 Week 5 (7/23-7/29)

This week, I learned in OS that concurrency means letting different parts of a program make progress “together.” I also learned that threads are light‑weight units of execution inside a process, which share memory but each has its own stack and registers. For example, two threads updating a shared counter can cause problems if they both do counter = counter + 1 at the same time.
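A quick way to see the problem is to run two threads that both increment a shared counter with no lock; on most runs the final value comes out short of 2,000,000 because increments are lost when the read-modify-write steps interleave (the iteration count is arbitrary).

#include <pthread.h>
#include <stdio.h>

static long counter = 0;

static void *worker(void *arg)
{
    for (int i = 0; i < 1000000; i++)
        counter = counter + 1;   /* not atomic: load, add, store */
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld (expected 2000000)\n", counter);
    return 0;
}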

I also learned that a race condition happens when threads access shared data without coordination, and the result depends on timing. I learned that a critical section is the code that touches shared data, and that a mutex lock ensures only one thread enters at a time. I learned that locks rely on atomic instructions like test‑and‑set to flip a flag in one step without interference.
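Here is a sketch of a spinlock built on that idea, using C11's atomic_flag as a stand-in for the hardware test-and-set instruction (my own illustration, not code from the course).

#include <stdatomic.h>

typedef struct {
    atomic_flag held;   /* clear = unlocked, set = locked */
} spinlock_t;

static void spin_lock(spinlock_t *l)
{
    /* atomically set the flag and get its old value;
     * keep spinning while another thread already holds the lock */
    while (atomic_flag_test_and_set(&l->held))
        ;
}

static void spin_unlock(spinlock_t *l)
{
    atomic_flag_clear(&l->held);   /* release: flip the flag back */
}

A lock would be declared as spinlock_t lock = { ATOMIC_FLAG_INIT }; so it starts out unlocked.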

Another thing I learned was the pthreads API: pthread_create lets me start a thread, pthread_join waits for it to finish, pthread_mutex_lock and pthread_mutex_unlock guard critical sections, and pthread_cond_wait and pthread_cond_signal let threads sleep and wake up based on conditions.

I also learned that I can make a simple counter safe by putting a mutex inside a struct and locking it around the increment and get operations. Coarse‑grained locks are easy to write but slow down all threads, while fine‑grained locks let more threads work in parallel but are harder to code.
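A coarse-grained version of that counter might look like the sketch below; the names are mine, and a fine-grained variant would split the count across several locks instead of one.

#include <pthread.h>

typedef struct {
    long value;
    pthread_mutex_t lock;
} counter_t;

void counter_init(counter_t *c)
{
    c->value = 0;
    pthread_mutex_init(&c->lock, NULL);
}

void counter_increment(counter_t *c)
{
    pthread_mutex_lock(&c->lock);    /* enter the critical section */
    c->value++;
    pthread_mutex_unlock(&c->lock);
}

long counter_get(counter_t *c)
{
    pthread_mutex_lock(&c->lock);
    long v = c->value;
    pthread_mutex_unlock(&c->lock);
    return v;
}

Replacing the bare counter = counter + 1 in the earlier race example with counter_increment makes the lost updates go away, at the cost of serializing the threads on one lock.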

Finally, I learned that concurrency helps in real programs like browsers, where one thread loads images while I type in another, and that balancing safety and speed means choosing the right locking granularity for your data structures.
