Saturday, July 12, 2025

Week 3

 Week 3 (7/9-7/15)

What did I learn this week in CST 334?

I learned several memory-management concepts this week. The book and lectures teach how computers use virtual memory to let each program act as if it has its own private memory, even though all programs share the same physical RAM. This works through address translation, where virtual addresses (used by programs) are converted into physical addresses (used by the hardware). One method is base-and-bounds, where each process is assigned a contiguous block of memory and hardware checks ensure it stays within its limits. A more flexible method is segmentation, which breaks memory into parts like code, stack, and heap, each with its own base and size, allowing for better protection and organization.
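The base-and-bounds idea can be sketched in a few lines of C. This is just a toy model of what the hardware does, with made-up register values, not real MMU code:

```c
#include <assert.h>
#include <stdint.h>

/* Base-and-bounds translation: every virtual address is checked
 * against the bounds register, then offset by the base register.
 * Returns 1 and writes the physical address on success, or 0 when
 * the hardware would instead raise an out-of-bounds fault. */
typedef struct {
    uint32_t base;   /* where the process's memory starts in physical RAM */
    uint32_t bounds; /* size of the process's address space */
} mmu_t;

int translate(const mmu_t *mmu, uint32_t vaddr, uint32_t *paddr) {
    if (vaddr >= mmu->bounds)
        return 0;                 /* out of bounds: fault */
    *paddr = mmu->base + vaddr;   /* relocate by the base register */
    return 1;
}
```

With a base of 0x8000 and bounds of 0x1000, virtual address 0x0FFF maps to physical 0x8FFF, while 0x1000 faults.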

The book, lecture videos, and assignment for this week include simulations showing how virtual addresses are split and mapped to physical memory, making concepts like translation clearer. They also explain address spaces, the ranges of addresses a process can use. Each process has its own address space to prevent interference. To manage memory efficiently, the operating system uses free-space management techniques: first fit (choosing the first block big enough), best fit (choosing the smallest block that fits), and worst fit (choosing the largest block to avoid leaving tiny gaps). These strategies help reduce wasted memory and prevent fragmentation, which occurs when free memory is split into unusable pieces. The chapters and assignments also discuss how malloc() and free() in C are used to request and release memory manually, and the risks of doing so incorrectly, such as memory leaks. Overall, this week's content builds a strong foundation for understanding how the OS manages memory safely, efficiently, and fairly among processes.
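The three fit strategies can be compared with a small sketch that picks a block from a list of free-block sizes. This is a simplification of my own (a real allocator would track addresses and split blocks), but it shows how the policies differ:

```c
#include <assert.h>
#include <stddef.h>

/* Pick a free block for a request using one of the three policies.
 * Returns the index of the chosen block, or -1 if nothing fits. */
typedef enum { FIRST_FIT, BEST_FIT, WORST_FIT } policy_t;

int pick_block(const size_t *sizes, int n, size_t request, policy_t p) {
    int chosen = -1;
    for (int i = 0; i < n; i++) {
        if (sizes[i] < request) continue;     /* too small, skip */
        if (p == FIRST_FIT) return i;         /* first block that fits */
        if (chosen == -1 ||
            (p == BEST_FIT  && sizes[i] < sizes[chosen]) ||  /* smallest fit */
            (p == WORST_FIT && sizes[i] > sizes[chosen]))    /* largest fit */
            chosen = i;
    }
    return chosen;
}
```

For a free list of sizes {10, 30, 20} and a request of 15: first fit takes the 30 (first one big enough), best fit takes the 20, and worst fit takes the 30.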


Monday, July 7, 2025

Week 2

 Week 2 (7/2-7/8)

What did I learn this week in CST 334?

The major topics I learned this week in OS are processes, limited direct execution, CPU scheduling, MLFQ, and the C process API. I learned that limited direct execution is used to ensure efficient multitasking. It lets user programs run directly on the CPU, but in a safe way using user mode. The OS uses traps and interrupts to regain control when needed, such as during I/O or when a timer expires. This allows the OS to safely switch between processes, an operation known as context switching.

I also learned that processes are created and managed by the OS using tools like fork() (to create a new process), exec() (to replace the process code), and wait() (to wait for a child process to finish). Each process has its own state and memory.
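The fork/exec/wait pattern looks roughly like this in C. Here I have the child replace itself with `echo` just as a placeholder command; any program would work:

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

/* Classic fork/exec/wait: the parent creates a child, the child
 * replaces its code with /bin/echo, and the parent waits for it.
 * Returns the child's exit status, or -1 on error. */
int run_echo_child(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                    /* fork failed */
    if (pid == 0) {                   /* child: replace our code */
        char *argv[] = { "echo", "hello from child", NULL };
        execvp("echo", argv);
        _exit(127);                   /* only reached if exec fails */
    }
    int status;                       /* parent: wait for the child */
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The child prints its message and exits with status 0, which the parent sees through waitpid().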

I also studied different ways the OS schedules which process runs next. CPU scheduling policies determine the order in which processes are executed. First-Come, First-Served (FCFS) is simple but can lead to poor performance. Shortest Job First (SJF) minimizes average turnaround time if job lengths are known. Shortest Time to Completion First (STCF) is a preemptive version of SJF. Round Robin (RR) assigns each process a short time slice in turn, improving response time but potentially increasing turnaround time.
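The turnaround-time difference between FCFS and SJF is easy to see with a little arithmetic. This sketch assumes all jobs arrive at time 0 and run to completion in the given order (FCFS if given in arrival order, SJF if sorted shortest-first):

```c
#include <assert.h>

/* Average turnaround time for jobs that all arrive at t=0 and run
 * to completion in the order given. Turnaround = completion time,
 * since arrival time is 0 for every job. */
double avg_turnaround(const int *lengths, int n) {
    int t = 0, total = 0;
    for (int i = 0; i < n; i++) {
        t += lengths[i];   /* job i finishes at time t */
        total += t;
    }
    return (double)total / n;
}
```

For jobs of length 100, 10, and 10 arriving in that order, FCFS gives an average turnaround of (100 + 110 + 120) / 3 = 110, while SJF (running the two short jobs first) gives (10 + 20 + 120) / 3 = 50, which is why SJF minimizes average turnaround when job lengths are known.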

Finally, I learned about Multi-Level Feedback Queue (MLFQ), which adjusts process priorities over time. Short or interactive jobs stay at a higher priority, while longer CPU-bound jobs move lower. To prevent starvation, MLFQ boosts all jobs back to high priority from time to time.
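The two MLFQ rules I found most important, demotion and the periodic boost, can be summarized in a tiny sketch (three levels is my own arbitrary choice here):

```c
#include <assert.h>

/* Toy MLFQ rules: a job that uses its whole time slice drops one
 * level; the periodic boost resets every job to the top queue.
 * Level 0 is the highest priority, NUM_LEVELS-1 the lowest. */
#define NUM_LEVELS 3

int demote(int level) {
    /* used a full slice: move down one queue, bottoming out at the last */
    return (level < NUM_LEVELS - 1) ? level + 1 : level;
}

void boost_all(int *levels, int n) {
    for (int i = 0; i < n; i++)
        levels[i] = 0;   /* everyone back to the top, preventing starvation */
}
```

Interactive jobs that yield before their slice expires are never demoted, so they stay near level 0, while CPU-bound jobs sink until the next boost lifts them back up.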

Monday, June 30, 2025

Week 1

 Week 1 (5/25-6/1)

What did I learn in the first week at CST334: Operating Systems?

This week, I read the required textbook chapter and learned a lot about low-level languages. The chapter also included a brief overview of the history of operating systems, tracing their origins over time. C is a new language for me—my experience has been mostly in object-oriented programming (OOP), so working without objects and instead using structs feels like rewiring everything I know. Fortunately, learning C is directly helping me in my internship, where I’m learning about binary analysis and symbolic execution using angr. These techniques allow users to analyze and manipulate binary code, explore execution paths, and identify vulnerabilities. Working with this powerful Python-based framework is especially useful when analyzing compiled C code without access to the source.

Learning about WSL, Docker, and Ubuntu has also been extremely helpful in understanding the tech stack used in my internship, along with the many tools Linux provides. The software engineers I work with recommend Linux for its security, stability, and seamless development workflow. The buffer overflow exercises in Lab 1 and the memory allocation practice in PA1 were especially effective for deepening my understanding of memory. I also saw how temporary fixes can sometimes work, but using ```malloc``` is a better long-term solution for managing memory.
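A small example of the kind of fix I mean: instead of copying input into a fixed-size stack buffer (the pattern the buffer overflow lab exploits), allocate exactly as much heap memory as the data needs. The function name here is just my own illustration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Copy a string into heap memory sized to fit it, instead of into
 * a fixed-size stack buffer that a long input could overflow.
 * The caller is responsible for calling free() on the result. */
char *dup_on_heap(const char *src) {
    size_t len = strlen(src) + 1;    /* +1 for the '\0' terminator */
    char *dst = malloc(len);
    if (dst == NULL)
        return NULL;                 /* allocation failed */
    memcpy(dst, src, len);
    return dst;
}
```

The trade-off is that heap memory must be freed explicitly, which is exactly where the memory-leak risks from the readings come in.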

Perhaps the most valuable takeaway this week was learning how to use VS Code (a Windows IDE) to write and debug code that will be deployed on a Linux server—all within a Windows environment using WSL and Ubuntu. At first, I downloaded gcc.exe through MinGW-w64, thinking I needed it to compile C in VS Code using the C/C++ extension. However, my professor explained that this could interfere with compilation inside Docker and why it's important to avoid that. I've since disabled the extension to prevent VS Code from suggesting Windows-specific headers like ```#include <windows.h>```, which would cause compilation issues on Linux. So far, I've really enjoyed this class and look forward to learning more.


Thursday, June 19, 2025

Week 8

 Week 8 (6/19-6/20)

Last week for Intro to Database Systems, what?😧

Briefly summarize what you consider to be the three (3) most important things you learned in this course

1. Queries, especially joins: how to merge data from different tables and use subselects. I also learned how to perform CRUD operations (create, read, update, delete) using SELECT, INSERT, UPDATE, and DELETE.

2. Reverse and forward engineering: being able to generate an ER diagram from an existing database, or to design an ER diagram and generate the database from it, are such great features. After I learned this, I saved so much time, especially on tests. I used to design ER diagrams and wireframes in Lucidchart, which feels much less efficient now that I've used this MySQL Workbench feature.

3. Using the JDBC API to create a web app was probably the most useful real-world project, because we got to use Spring Boot (an open-source framework), MySQL, and MongoDB. I did something similar before using SQLite and Tomcat, but Spring was more compact to work with. I also loved how we used the same lab to compare the functionality of MySQL and MongoDB and were able to see their differences (pros and cons).


Tuesday, June 17, 2025

Week 7

 Week 7 (6/11-6/17)

Comparison between MongoDB and MySQL. 

What are some similarities? 

MongoDB and MySQL both support indexing, sharding, replication, and unique keys to help manage and access data efficiently. Each one also allows users to select specific fields in a query (called projection) and provides an “explain” tool to see how a query runs.

What are some differences? 

MySQL is a relational database that uses SQL, supports joins, and provides full transaction support, ideal for structured data and complex queries. MongoDB is a NoSQL database that stores flexible, JSON-like documents. It has only limited join support and, until version 4.0, lacked multi-document transactions, but it is easier to scale and better for unstructured or changing data.

When would you choose one over the other?  

MySQL is the best choice when the data has a strict structure (tables with clear columns), when transactions or joins are needed, or when strong consistency is required. MongoDB is the better choice when flexibility and scalability matter: when data is nested (like JSON), when easy scaling across servers is needed, or when speed is crucial.





Tuesday, June 10, 2025

Week 6

 Week 6 (6/4-6/10)

Summary of this week's learning:

For Lab 17, I learned to open a JDBC connection to the course DB, turn off auto-commit, use a PreparedStatement to insert a new student (with tot_cred = 0), run executeUpdate(), and print how many rows changed; then use another PreparedStatement to select the id and name for that department and print each row. On success, call commit(); on error, call rollback(); and finally close the connection.

For labs 18 and 19, I learned how to use reverse engineering and forward engineering to create a schema and an ER diagram. I learned how to use the model in MySQL Workbench, along with some other features, to establish relationships between tables, and I conducted a peer review with two other classmates, where I was able to see different ER diagrams and designs and determine whether they met the requirements. For Lab 19, I collaborated with my group to create a web application using Spring Boot, a Java-based framework, building on the concepts introduced in Lab 18 (Prescription). I worked on ControllerPrescriptionCreate and filled out the forms with both an existing drug ("lisinopril") and a non-existent one ("funny pills").


Tuesday, June 3, 2025

Week 5

Week 5 (5/28-6/3)

The website "Use The Index, Luke" has a page on "slow indexes": https://use-the-index-luke.com/sql/anatomy/slow-indexes

If indexes are supposed to speed up query performance, what does the author mean by a slow index?

Based on the site, a “slow index” isn’t a broken index; it’s simply when using an index ends up doing more work than expected, sometimes even more than scanning the entire table. After the database finds the starting point in the B-tree, it may need to traverse a chain of leaf nodes to gather all matching entries (for instance, when looking up a value “23” that appears in multiple locations). If many rows share the same key, the database follows the leaf-node chain and may read dozens or hundreds of index pages just to collect all pointers to matching rows.

Once those index entries are found, the database typically retrieves each matching row from the table one by one. If those rows are scattered across different data pages, each lookup can trigger a random I/O. For example, if your query matches 200 rows, that could mean 200 separate reads to fetch the actual data, resulting in a slower process than a simple full-table scan. In short, the B-tree lookup itself is fast, but the combined cost of following many leaf pointers plus fetching each row individually can make an “indexed” query slower than you’d expect.
