Sunday, February 8, 2026

Week 5

 Week 5 (2/4-2/10)

I made a little summary of what I learned from this week's content. I am hoping that this summary helps me study for the final.

QuickSort:

QuickSort is a divide-and-conquer sort. It picks a pivot, then partitions the list so numbers smaller than the pivot go left and numbers bigger go right. After the first partition, QuickSort recursively sorts the left and right parts.

Partitioning (i and j pointers):

You keep moving i from the left and j from the right: i moves from the left until it finds a value > pivot (too big, it belongs on the right), and j moves from the right until it finds a value < pivot (too small, it belongs on the left). Then you swap those two.

When i and j cross, you swap the pivot into its final spot. 
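To help it stick, here is a small Python sketch of this scheme (my own function names; I'm assuming the pivot is the first element of the subarray):

def partition(a, lo, hi):
    pivot = a[lo]                        # pivot is the first element of the subarray
    i, j = lo + 1, hi
    while True:
        while i <= j and a[i] < pivot:   # advance i past values smaller than the pivot
            i += 1
        while i <= j and a[j] > pivot:   # move j past values bigger than the pivot
            j -= 1
        if i >= j:                       # i and j crossed
            break
        a[i], a[j] = a[j], a[i]          # swap the two out-of-place values
        i += 1
        j -= 1
    a[lo], a[j] = a[j], a[lo]            # swap the pivot into its final spot
    return j

def quicksort(a, lo=0, hi=None):
    if hi is None:
        hi = len(a) - 1
    if lo < hi:
        p = partition(a, lo, hi)
        quicksort(a, lo, p - 1)          # recursively sort the left part
        quicksort(a, p + 1, hi)          # recursively sort the right part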

Median-of-Three Partitioning:

This improves QuickSort when the data is already sorted or reverse-sorted (cases that can make QuickSort slow). It chooses the pivot as the median of (first, middle, last), then partitions using that pivot. Also, we can’t use median-of-three when a subarray is size 3 or less.
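Here is one common way to pick that pivot, sketched in Python (a hypothetical helper of mine; the partition above would then swap the median to the front of the subarray before scanning):

def median_of_three(a, lo, hi):
    mid = (lo + hi) // 2
    # order the three candidates so a[mid] ends up holding their median
    if a[mid] < a[lo]:
        a[lo], a[mid] = a[mid], a[lo]
    if a[hi] < a[lo]:
        a[lo], a[hi] = a[hi], a[lo]
    if a[hi] < a[mid]:
        a[mid], a[hi] = a[hi], a[mid]
    return mid                 # index of the median of (first, middle, last)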

Binary Tree Traversals:

Traversal is just “the order you visit nodes”:

Preorder: root, left, right

Inorder: left, root, right

Postorder: left, right, root
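A tiny Python sketch helps me keep the three orders straight (Node is just a minimal class I made up):

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node):            # root, left, right
    if node:
        print(node.value)
        preorder(node.left)
        preorder(node.right)

def inorder(node):             # left, root, right
    if node:
        inorder(node.left)
        print(node.value)
        inorder(node.right)

def postorder(node):           # left, right, root
    if node:
        postorder(node.left)
        postorder(node.right)
        print(node.value)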

Binary Search:

Binary search works on a sorted list. You check the middle value, then go to the left half or the right half depending on whether the target is smaller or bigger. Each step cuts the search space in half, so it’s fast: O(log n) time.
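In Python, my sketch of it looks like this (assuming a is already sorted in ascending order):

def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid               # found it
        elif a[mid] < target:
            lo = mid + 1             # target is bigger: search the right half
        else:
            hi = mid - 1             # target is smaller: search the left half
    return -1                        # not in the list

For example, binary_search([1, 3, 5, 7, 9], 7) returns 3 after looking at only two middle values (5, then 7).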

DAG and Topological Sorting:

A DAG is a directed graph with no directed cycles.

A topological order is a list of nodes where every arrow goes from earlier to later. If there’s a cycle, you cannot do a topological sort.

Kahn’s Algorithm:

It finds nodes with in-degree 0 (no incoming arrows), removes one (using alphabetical order when there are ties), decrements the in-degree of each node the removed one points to, and repeats until every node has been output.
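Here is my Python sketch of it (edges given as (u, v) pairs meaning u points to v; I use a min-heap so ties come out alphabetically, and I assume every node appears in at least one edge):

import heapq
from collections import defaultdict

def kahn_topological_sort(edges):
    graph = defaultdict(list)
    indegree = defaultdict(int)
    nodes = set()
    for u, v in edges:
        graph[u].append(v)
        indegree[v] += 1
        nodes.update((u, v))
    ready = [n for n in nodes if indegree[n] == 0]   # no incoming arrows
    heapq.heapify(ready)                             # alphabetical tie-breaking
    order = []
    while ready:
        u = heapq.heappop(ready)                     # remove one ready node
        order.append(u)
        for v in graph[u]:
            indegree[v] -= 1                         # its outgoing arrows disappear
            if indegree[v] == 0:
                heapq.heappush(ready, v)
    if len(order) != len(nodes):
        raise ValueError("cycle detected: no topological order exists")
    return order

For example, kahn_topological_sort([("A", "C"), ("B", "C"), ("C", "D")]) returns ['A', 'B', 'C', 'D'].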

Monday, February 2, 2026

Week 4

 Week 4 (1/28-2/3)

Besides studying mainly for the midterm, this week's content focused on merge sort. Merge sort is a clear divide-and-conquer method: split the array, sort each half, then merge them back in order. It runs in O(n log n) time, is stable, and works well for large data. The tradeoff is that it needs extra memory for merging, so it’s often used when predictable performance matters most. As always, office hours helped me clarify all my questions and get a wider view of the theory and the logic. Looking forward to the next half of the class.
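A short Python sketch of the idea (my own names; the <= in the merge is what keeps it stable):

def merge_sort(a):
    if len(a) <= 1:                      # 0 or 1 items: already sorted
        return a
    mid = len(a) // 2
    left = merge_sort(a[:mid])           # sort each half...
    right = merge_sort(a[mid:])
    return merge(left, right)            # ...then merge them back in order

def merge(left, right):
    merged, i, j = [], 0, 0              # the extra memory merge sort needs
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:          # <= keeps equal items in order (stable)
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])              # one of these two is already empty
    merged.extend(right[j:])
    return merged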

Sunday, January 25, 2026

Week 3

 Week 3 (1/21-1/27)

This week's content was very interesting to me. Brute Force, TSP, Knapsack, Divide & Conquer, the Master Theorem, and the Depth-First Search and Breadth-First Search algorithms were the main concepts for this week. I personally found DFS and BFS the most interesting, not just because of the development of their graphs but also because of the logic behind them and their usefulness in different situations. DFS goes as deep as it can from a node (vertex) and backtracks when it gets stuck. Meanwhile, BFS identifies all of a node's neighbors first before moving on. TSP and the Master Theorem were both easy for me to understand because their time efficiency is very clear, especially in real-world cases.
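To compare them side by side, here is a little Python sketch (graph is a dict mapping each node to its neighbor list; these are my own minimal versions):

from collections import deque

def dfs(graph, start, visited=None):
    if visited is None:
        visited = set()
    visited.add(start)
    for neighbor in graph.get(start, []):
        if neighbor not in visited:
            dfs(graph, neighbor, visited)    # go deep first, backtrack when stuck
    return visited

def bfs(graph, start):
    visited = {start}
    queue = deque([start])
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for neighbor in graph.get(node, []): # identify every neighbor first
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append(neighbor)
    return order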

I am actually looking forward to next week's content and to applying these algorithms in more advanced settings.

Sunday, January 18, 2026

Week 2

 Week 2 ( 1/14-1/20)

This week's content was very math-oriented. Out of all the topics (recursive and nonrecursive algorithms, use cases of Theta notation, recurrence relations, backward substitution, and the brute-force approach), the one I had the most trouble with was backward substitution. It is good practice to look at different examples to see this method from different perspectives. I attended office hours, which helped me quite a lot. I compare this method to proof by induction in math classes: you show the steps to identify a pattern that eventually leads to the general formula, which, together with the initial condition, solves the recurrence relation. Below is the example I was having trouble with, along with how each step helped me identify the general number i.

M(n) = M(n-1) + 2 // recurrence relation


As an initial condition, we know that if the input number n is 1, there is no multiplication. We can express it as follows.

M(1) = 0 // initial condition


We can solve the recurrence relation using backward substitution as below.

M(n) = M(n-1) + 2         // replace “M(n-1)” with “M(n-2) + 2”

     = [M(n-2) + 2] + 2

     = M(n-2) + 2 + 2     // replace “M(n-2)” with “M(n-3) + 2”

     = [M(n-3) + 2] + 2 + 2

     = M(n-3) + 2 + 2 + 2

     = M(n-i) + 2*i       // for a general number i, we will get this

...

     = M(1) + 2*(n-1)     // substitute i = n-1 so that n-i = 1; the initial condition gives M(1) = 0

     = 0 + 2*(n-1)

     = 2*(n-1)            // so M(n) is linear, i.e., Θ(n)
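To convince myself the closed form is right, I checked it with a tiny Python script (my own quick test, not from the book):

def M(n):
    # the recurrence above: M(n) = M(n-1) + 2 with M(1) = 0
    return 0 if n == 1 else M(n - 1) + 2

# compare against the closed form 2*(n-1) for small n
assert all(M(n) == 2 * (n - 1) for n in range(1, 20))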


Sunday, January 11, 2026

Week 1

 Week 1: Algorithms

This week, I learned pseudocode, GCD, graphs, trees, and the algorithm analysis framework. The book has been a great resource for understanding the content before watching the lectures. What I had the most trouble with was the basic operation. I thought there was only one basic operation: the most-executed operation in the innermost loop, used to measure the efficiency of the algorithm. But then I learned that there is not always just one. Sometimes a few operations in the same innermost loop depend on each other, and together they determine the work needed for the loop to execute its full cycle.

As an example from the algorithm analysis framework, consider the pseudocode Display_2D_Stars:

1. Algorithm Display_2D_Stars (n)

2. // Input: A positive integer number n

3. // Output: Display the * symbol in two dimensions.

4. i ← 0             

5. while ( i < n ) do

6.     j ← 0

7.     while ( j < n ) do

8.         write '*'  

9.         j ← j + 1

10.     write '\n'

11.     i ← i + 1    

12. return 0

Lines 7, 8, and 9 are part of the same innermost loop; lines 8 and 9 depend on the execution of line 7, and their execution cost differs little from that of the < operation. Therefore, all operations on lines 7, 8, and 9 (<, write, and +) are part of the basic operations.
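To see the count concretely, I rewrote the pseudocode in Python with a counter for the innermost loop body (my own translation, not part of the assignment):

def display_2d_stars(n):
    count = 0
    i = 0
    while i < n:
        j = 0
        while j < n:             # pseudocode line 7: the < comparison
            print('*', end='')   # line 8: the write
            j += 1               # line 9: the +
            count += 1
        print()                  # newline after each row
        i += 1
    return count

print(display_2d_stars(3))       # prints a 3x3 block of stars, then 9 (= n*n)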

Tuesday, August 12, 2025

Week 8

 Week 8 (8/13-8/15)

This week I want to reflect on all the topics and challenges faced throughout these 8 weeks, since in my last journal I already talked about this week's content: persistence. Before starting this class, I was apprehensive, especially when I learned we would be working with C. I had always heard that C is low-level, unforgiving, and complex compared to the higher-level languages I’ve used before. Since I have been leaning toward career paths like cybersecurity or data science, I was more comfortable with analysis than raw systems programming.

One of the most challenging topics for me was CPU scheduling, particularly the Round Robin algorithm. Implementing time-slicing logic while maintaining correct queue order required a precise understanding of process states and context switching overhead. I iterated through several incorrect solutions before finally deriving a correct implementation after a teammate shared his approach via a recorded walkthrough.

Collaboration and teamwork were essential throughout the course. Each of my team members had strengths in different OS concepts, and knowledge sharing allowed us to fill each other’s gaps. For example, I created a video explaining FIFO, LRU, and Belady’s Anomaly in caching policies, which I had mastered after repeated simulation exercises. In return, a classmate helped me understand segmentation and memory protection mechanisms.

Persistence mechanisms were also challenging, particularly concepts like hard drive rotation and inode-based indexing. Understanding the relationship between rotational delay, seek time, and data transfer rates, along with inode-based metadata and pointer structures, required deep reading. I struggled partly due to limited study time while balancing my internship workload, where I was finalizing deliverables for my project.

For our final project, my team explored Triangulating Python Performance Issues with SCALENE. While I understood SCALENE’s ability to profile CPU, memory, and GPU usage simultaneously, the hardware test bench—an 8-core 4674 MHz AMD Ryzen 7 with 32 GB RAM and an NVIDIA GeForce RTX 2070 running Linux—required me to contextualize performance data in terms of underlying hardware capabilities.

Overall, this course strengthened my understanding of OS fundamentals—process scheduling, memory management, file systems, and performance analysis—while reinforcing the importance of collaborative problem-solving. Thanks to Dr. Ogden for making this class super interesting and for being one of the best professors I've had so far in the program.

Week 7

 Week 7 (8/6-8/12)

This week, we covered persistence. From I/O Devices, I learned that devices can be block (e.g., hard drives, storing fixed-size blocks with random access) or character (e.g., keyboards, handling byte streams). The OS interacts with them via registers—status, command, and data—using polling, interrupts, or Direct Memory Access (DMA). Device drivers isolate OS code from hardware specifics.

From Hard Drives, I discovered that performance depends on rotational delay, seek time, and transfer time. For example, reading 2 MB with a 4 ms rotational delay, 5 ms seek time, and 100 MB/s transfer rate takes 29 ms. Sequential workloads are much faster than random ones, and I/O scheduling (e.g., SSTF, elevator) reduces unnecessary head movement.
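That arithmetic is easy to check with a few lines of Python (my own sanity check):

def io_time_ms(size_mb, rotational_ms, seek_ms, rate_mb_per_s):
    transfer_ms = size_mb / rate_mb_per_s * 1000   # time to move the data itself
    return seek_ms + rotational_ms + transfer_ms   # total = seek + rotation + transfer

print(io_time_ms(2, 4, 5, 100))   # 29.0 ms, matching the example above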

From Files and Directories, I learned that files are linear byte arrays with metadata in inodes, while directories map names to inode numbers. Hard links point multiple names to the same inode; symbolic links are files containing paths to targets. Mounting attaches a filesystem to a directory in the system tree, unifying access.
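I tried the difference out with Python's os module (assuming a POSIX system; the file names are made up):

import os

with open("foo", "w") as f:
    f.write("hello")

os.link("foo", "bar")       # hard link: a second name for the same inode
os.symlink("foo", "baz")    # symbolic link: a file whose content is the path "foo"

print(os.stat("foo").st_ino == os.stat("bar").st_ino)   # True: same inode
print(os.readlink("baz"))                               # "foo": just a path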

From File System Implementation: Data, I saw that a filesystem uses disk blocks for data, inodes, bitmaps, and a superblock (global info). Inodes store file type, size, and pointers—direct, indirect, or double indirect—to data blocks, enabling large files. Directories are just files containing (name, inode) entries.

From File System Implementation: Access, I learned how to traverse the directory tree to access /foo/bar: starting at the root inode, reading directory blocks to find each component, then retrieving the file’s inode and data blocks. The VSFS simulator showed how operations like mkdir(), creat(), link(), and unlink() change inodes and bitmaps. For example, creating a hard link adds another directory entry to the same inode without duplicating data.
