Lecture 1: Introduction to Algorithms + First Problem (Stable Matching)
- Explains what an algorithm is and why we study algorithms (efficiency, correctness, problem-solving mindset).
- Introduces the TA–course matching problem as a motivating example.
- Defines “unstable pairs” and uses the idea of stability to show what a “good” matching should prevent.
Lecture 2: Stable Matching — Moving Toward an Algorithm
- Builds the stable matching model more formally using preference lists for both sides.
- Develops the proposal-based process (courses propose to TAs) as a concrete algorithmic approach.
- Uses examples to illustrate how assignments change until stability is reached (no unstable pair remains).
Lecture 3: Gale–Shapley Deferred Acceptance + Correctness Ideas
- Presents the Gale–Shapley (deferred acceptance) algorithm explicitly in pseudocode form.
- Discusses why the algorithm terminates and why the output is stable (proof ideas/structure).
- Explores “deceit/manipulation” scenarios (e.g., “Evil Alice”) to highlight strategic behavior concerns.
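The deferred-acceptance loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the lecture's own code; the dictionary-based preference lists and the convention that the first side proposes are assumptions.

```python
from collections import deque

def gale_shapley(proposer_prefs, receiver_prefs):
    """Deferred acceptance: free proposers propose in preference order;
    each receiver tentatively holds the best offer seen so far."""
    # rank[r][p] = position of proposer p in receiver r's list (lower = better)
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in receiver_prefs.items()}
    next_choice = {p: 0 for p in proposer_prefs}  # next index each p proposes to
    match = {}                                    # receiver -> proposer
    free = deque(proposer_prefs)
    while free:
        p = free.popleft()
        r = proposer_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                          # r was unmatched: accept
        elif rank[r][p] < rank[r][match[r]]:
            free.append(match[r])                 # r trades up; old partner freed
            match[r] = p
        else:
            free.append(p)                        # r rejects p; p stays free
    return match
```

Each proposer proposes to each receiver at most once, so the loop runs at most n² times, which is the termination argument sketched in lecture.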
Lecture 4: Running Time Analysis + Asymptotic Growth (Big-O Thinking)
- Compares empirical analysis vs mathematical models vs asymptotic analysis of algorithms.
- Introduces asymptotic order-of-growth and Big-O style reasoning (ignore constants/lower-order terms).
- Uses loop-based examples (e.g., searching for pairs/triples with sum zero) to show how n, n², n³ running times arise.
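The pair/triple examples can be written as plain nested loops; a minimal sketch (the function names are illustrative) that makes the quadratic and cubic operation counts visible:

```python
def count_zero_pairs(a):
    """Two nested loops: ~n^2/2 pair checks, hence quadratic growth."""
    n = len(a)
    return sum(1 for i in range(n)
                 for j in range(i + 1, n)
                 if a[i] + a[j] == 0)

def count_zero_triples(a):
    """Three nested loops: ~n^3/6 triple checks, hence cubic growth."""
    n = len(a)
    return sum(1 for i in range(n)
                 for j in range(i + 1, n)
                 for k in range(j + 1, n)
                 if a[i] + a[j] + a[k] == 0)
```

Doubling n roughly quadruples the first count's work and multiplies the second's by eight, which is exactly the order-of-growth behavior Big-O notation captures while ignoring constants.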
Lecture 5: Graph Basics + Representations
- Introduces graphs through applications (transportation, communication, etc.) and core terminology (nodes/edges).
- Compares adjacency matrix vs adjacency list (space/time tradeoffs).
- Covers cycles and trees as fundamental graph structures you’ll build on later.
Lecture 6: Breadth-First Search (BFS)
- Defines BFS as exploring a graph “one layer at a time” from a start node (L0, L1, L2, …).
- Presents the queue-based BFS procedure that tracks explored status, distance, and parent pointers.
- Emphasizes BFS outputs (shortest-path distances in unweighted graphs and a BFS tree).
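The queue-based procedure above can be sketched compactly; this is an illustrative version (adjacency-list dicts assumed) that returns both BFS outputs mentioned: distances and parent pointers.

```python
from collections import deque

def bfs(adj, s):
    """Layer-by-layer traversal from s. Returns shortest-path distances
    (in edge count) and parent pointers forming the BFS tree."""
    dist, parent = {s: 0}, {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:          # first visit = shortest distance
                dist[v] = dist[u] + 1
                parent[v] = u
                q.append(v)
    return dist, parent
```

Marking a node on first discovery is what guarantees layer L_k contains exactly the nodes at distance k in an unweighted graph.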
Lecture 7: Connected Components + Bipartite Testing with BFS
- Uses graph traversal to find all nodes reachable from a start node and extend this to connected components.
- Introduces bipartite graphs and explains how BFS levels support bipartite testing.
- Connects bipartiteness to cycle structure (why odd cycles break bipartiteness; level-based reasoning).
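The level-based reasoning above translates directly into a BFS 2-coloring test; a minimal sketch (undirected adjacency-list dict assumed):

```python
from collections import deque

def is_bipartite(adj):
    """BFS 2-coloring: colors alternate by layer. An edge joining two
    same-colored nodes witnesses an odd cycle, so the graph is not bipartite."""
    color = {}
    for s in adj:                      # loop over starts to cover all components
        if s in color:
            continue
        color[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in color:
                    color[v] = 1 - color[u]
                    q.append(v)
                elif color[v] == color[u]:
                    return False       # same-layer edge => odd cycle
    return True
```

An even cycle alternates colors cleanly; a triangle (odd cycle) forces two adjacent nodes into the same color, which is the failure the function detects.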
Lecture 8: Depth-First Search (DFS) + Timestamps
- Introduces DFS as exploring “deep first” with recursive (or stack-based) traversal.
- Presents the standard DFS framework with colors (WHITE/GRAY/BLACK) and discovery/finish times.
- Builds the DFS forest concept that supports later tools like edge classification and ordering.
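The color-and-timestamp framework can be sketched as follows (an illustrative recursive version; the global clock increments once at discovery and once at finish):

```python
def dfs(adj):
    """DFS forest with WHITE/GRAY/BLACK coloring and discovery/finish times."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {u: WHITE for u in adj}
    disc, fin = {}, {}
    time = 0

    def visit(u):
        nonlocal time
        color[u] = GRAY                # discovered but not finished
        time += 1
        disc[u] = time
        for v in adj[u]:
            if color[v] == WHITE:
                visit(v)               # tree edge u -> v
        color[u] = BLACK               # all descendants explored
        time += 1
        fin[u] = time

    for u in adj:                      # restart from unvisited roots
        if color[u] == WHITE:
            visit(u)
    return disc, fin
```

The nesting of [disc, fin] intervals (ancestors' intervals contain descendants') is what later edge-classification and ordering arguments rely on.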
Lecture 9: Strongly Connected Components (SCCs) + Topological Sorting Setup
- Defines strong connectivity and strong components in directed graphs.
- Gives an SCC algorithm based on DFS finishing times and the reversed graph (DFS, reverse edges, DFS in decreasing finish time order).
- Connects DFS structure to DAG concepts and topological sorting ideas (ordering by dependencies).
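The three-phase recipe (DFS, reverse edges, DFS in decreasing finish time) is Kosaraju-style SCC computation; a compact sketch under the assumption of dict-of-lists adjacency:

```python
def sccs(adj):
    """Strongly connected components: postorder DFS, reverse the graph,
    then collect components in decreasing finish-time order."""
    order, seen = [], set()

    def dfs1(u):
        seen.add(u)
        for v in adj[u]:
            if v not in seen:
                dfs1(v)
        order.append(u)                 # postorder = increasing finish time

    for u in adj:
        if u not in seen:
            dfs1(u)

    radj = {u: [] for u in adj}         # reversed graph
    for u in adj:
        for v in adj[u]:
            radj[v].append(u)

    comps, seen = [], set()
    for u in reversed(order):           # decreasing finish time
        if u not in seen:
            comp, stack = [], [u]
            seen.add(u)
            while stack:                # iterative DFS in the reversed graph
                x = stack.pop()
                comp.append(x)
                for y in radj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
    return comps
```

Each second-phase DFS is trapped inside one strong component: in the reversed graph it cannot escape to components with later finish times.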
Lecture 10: DFS Edge Classification + Topological Ordering + Divide-and-Conquer (Merge Sort)
- Practices classifying DFS edges (back/forward/cross) to interpret structure in directed graphs.
- Works through topological ordering examples and what it means for a valid order to exist.
- Introduces divide-and-conquer and applies it to merge sort (divide, recurse, merge/combine).
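The divide/recurse/combine pattern can be sketched with merge sort itself (an illustrative version; the O(n) merge at each of the O(log n) levels is what yields n log n overall):

```python
def merge_sort(a):
    """Divide the list, recurse on each half, then merge two sorted halves."""
    if len(a) <= 1:
        return a                                  # base case: already sorted
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):       # merge step: linear time
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]          # append the leftover tail
```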
Lecture 11: Divide-and-Conquer (Counting Inversions & Closest Pair)
- Defines an inversion as a pair of indices (i, j) with i < j but A[i] > A[j]; the goal is to count all inversions efficiently.
- Uses a divide-and-conquer strategy (split, recursively count, then combine) to count “split inversions” during the merge step.
- Also applies divide-and-conquer to the closest-pair-of-points problem using a “strip” check (only a constant number of nearby candidates per point).
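The split-inversion counting during the merge can be sketched as a small modification of merge sort (an illustrative version; when an element of the right half is placed first, it forms an inversion with every element still remaining in the left half):

```python
def sort_and_count(a):
    """Return (sorted copy of a, number of inversions), in O(n log n)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, c_left = sort_and_count(a[:mid])
    right, c_right = sort_and_count(a[mid:])
    merged, i, j, splits = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
            splits += len(left) - i    # right[j] inverts with all remaining left
    merged += left[i:] + right[j:]
    return merged, c_left + c_right + splits
```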
Lecture 12: Sorting Lower Bounds + Master Theorem (Statement & Examples)
- Models comparison sorting with a decision/comparison tree where each leaf corresponds to one input ordering.
- Proves any deterministic comparison-based sorting needs Ω(n log n) comparisons (since there are n! orderings and the tree must have at least n! reachable leaves).
- Introduces the Master Theorem statement for recurrences of the form T(n) = aT(n/b) + f(n), with its 3 main cases.
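For reference, the three cases can be stated compactly by comparing f(n) to the "watershed" function n^(log_b a); this is the standard textbook formulation (with ε > 0 and the usual Case-3 regularity condition):

```latex
T(n) = a\,T\!\left(\tfrac{n}{b}\right) + f(n), \qquad a \ge 1,\ b > 1.

\begin{cases}
f(n) = O\!\left(n^{\log_b a - \varepsilon}\right)
  & \Rightarrow\ T(n) = \Theta\!\left(n^{\log_b a}\right) \\[4pt]
f(n) = \Theta\!\left(n^{\log_b a}\right)
  & \Rightarrow\ T(n) = \Theta\!\left(n^{\log_b a}\log n\right) \\[4pt]
f(n) = \Omega\!\left(n^{\log_b a + \varepsilon}\right)
  \text{ and } a\,f(n/b) \le c\,f(n) \text{ for some } c < 1
  & \Rightarrow\ T(n) = \Theta\!\left(f(n)\right)
\end{cases}
```

Merge sort (a = b = 2, f(n) = Θ(n), watershed n^(log₂ 2) = n) falls into Case 2, giving Θ(n log n).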
Lecture 13: Master Theorem (Proof via Recursion Tree)
- Restates the Master Theorem and its three-case breakdown for T(n) = aT(n/b) + f(n).
- Builds the proof intuition using recursion trees (tracking work per level and number of subproblems).
- Expresses total work as a level-by-level sum (a series over j = 0 to log_b n), which leads to the final asymptotic bounds.
Lecture 14: Greedy Scheduling (Interval Scheduling & Interval Partitioning)
- Studies greedy algorithms through scheduling problems, focusing on picking locally best choices that lead to global optimality.
- Interval Scheduling: the earliest-finish-time-first (EFTF) algorithm is proven optimal (with a correctness proof by induction) and runs in Θ(n log n) due to sorting.
- Interval Partitioning: implements earliest-start-time-first using a priority queue keyed by current finish times, achieving O(n log n).
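The EFTF rule for interval scheduling is short enough to sketch directly (an illustrative version; jobs are (start, finish) tuples, and finish ≤ start counts as compatible):

```python
def max_compatible(intervals):
    """Earliest-finish-time-first: sort by finish time, then greedily
    keep each job compatible with the last one selected."""
    chosen, last_finish = [], float('-inf')
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:       # compatible with everything chosen
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

The sort is the Θ(n log n) bottleneck noted above; the greedy scan itself is linear.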
Lecture 15: Single-Source Shortest Paths with Dijkstra’s Algorithm
- Formulates the single-source shortest path problem: compute shortest distances from a source s to all vertices in a weighted graph.
- Presents Dijkstra’s method: repeatedly extract the vertex with smallest tentative distance and relax its outgoing edges (works for nonnegative edge weights).
- Proves correctness via an induction-style argument that once a vertex is “settled,” its distance is final.
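The extract-min-and-relax loop can be sketched with a binary heap (an illustrative lazy-deletion version; adj[u] is assumed to be a list of (neighbor, weight) pairs with nonnegative weights):

```python
import heapq

def dijkstra(adj, s):
    """Repeatedly settle the vertex with smallest tentative distance,
    then relax its outgoing edges. Requires nonnegative edge weights."""
    dist = {s: 0}
    pq = [(0, s)]                      # (tentative distance, vertex)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                   # stale entry: u was already settled
        for v, w in adj[u]:
            if d + w < dist.get(v, float('inf')):
                dist[v] = d + w        # relax edge (u, v)
                heapq.heappush(pq, (dist[v], v))
    return dist
```

Outdated heap entries are skipped rather than decreased in place, a common Python idiom since `heapq` has no decrease-key operation.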
Lecture 16: Minimum Spanning Trees (Cut Property, Kruskal, Prim)
- Defines spanning trees and the Minimum Spanning Tree (MST) problem: connect all vertices with minimum total edge weight.
- Uses the cut property idea (“safe” light edges across a cut) to justify greedy MST construction.
- Covers two classic greedy MST algorithms: Kruskal’s (add edges in increasing weight if no cycle) and Prim’s (grow one tree by repeatedly adding the cheapest crossing edge).
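Kruskal's rule above can be sketched with a simple union-find structure (an illustrative version; vertices are assumed to be 0..n−1 and edges (weight, u, v) tuples):

```python
def kruskal(n, edges):
    """Kruskal's MST: scan edges in increasing weight order; union-find
    rejects any edge that would close a cycle."""
    parent = list(range(n))

    def find(x):                       # root lookup with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, total = [], 0
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:                   # different trees: this is a safe edge
            parent[ru] = rv
            mst.append((u, v, w))
            total += w
    return mst, total
```

Accepting the cheapest edge across the cut between the two trees it joins is exactly the cut-property argument from lecture.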
Lecture 17: Dynamic Programming (Motivation, Fibonacci, Memoization)
- Defines dynamic programming as solving problems by combining solutions to smaller subproblems, typically by storing/reusing results.
- Shows that naive Fibonacci recursion causes exponential blowup (the call tree grows as φⁿ ≈ 1.618ⁿ, often loosely bounded by 2ⁿ) due to repeated subcomputations.
- Introduces memoization: compute a subproblem once, store it, and look it up later to reduce time dramatically (e.g., Fibonacci becomes linear-time).
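The compute-once, look-up-later idea can be sketched in a few lines (an illustrative version using an explicit cache dictionary):

```python
def fib(n, memo=None):
    """Memoized Fibonacci: each F(k) is computed once and stored, so the
    exponential call tree collapses to a linear number of calls."""
    if memo is None:
        memo = {}
    if n <= 1:
        return n                       # base cases F(0)=0, F(1)=1
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]                     # cache hit on repeated subproblems
```

In idiomatic Python the same effect comes from decorating a plain recursive function with `functools.lru_cache`.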
Lecture 18: Weighted Interval Scheduling (DP Recurrence)
- Upgrades interval scheduling by adding weights/values to jobs and maximizing total value, not just the count of jobs.
- Builds the key recurrence using p(j), the largest index i < j whose job is compatible with job j: OPT(j) = max(v_j + OPT(p(j)), OPT(j−1)).
- Translates the recurrence into a dynamic programming algorithm (after sorting by finish time).
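The recurrence translates into a short bottom-up implementation (an illustrative sketch; jobs are (start, finish, value) tuples, p(j) is found by binary search over sorted finish times, and finish ≤ start counts as compatible):

```python
import bisect

def weighted_interval_scheduling(jobs):
    """Max total value of mutually compatible jobs via
    OPT(j) = max(v_j + OPT(p(j)), OPT(j-1))."""
    jobs = sorted(jobs, key=lambda j: j[1])       # sort by finish time
    finishes = [f for _, f, _ in jobs]
    n = len(jobs)
    # p[j] = number of jobs finishing no later than job j starts
    #      = index of j's latest compatible predecessor in 1-based OPT
    p = [bisect.bisect_right(finishes, jobs[j][0]) for j in range(n)]
    opt = [0] * (n + 1)                           # opt[0] = empty schedule
    for j in range(1, n + 1):
        _, _, v = jobs[j - 1]
        opt[j] = max(v + opt[p[j - 1]],           # take job j
                     opt[j - 1])                  # skip job j
    return opt[n]
```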
Lecture 19: Implementing DP (Memoization & Bottom-Up for Weighted Interval Scheduling)
- Starts from the WIS recurrence and implements it efficiently using memoization (top-down caching of OPT(j)).
- Also shows a bottom-up approach: fill OPT(1..n) iteratively so each OPT(j) is computed once.
- Analyzes total runtime as sorting + predecessor computation + DP evaluation: the O(n log n) sorting step dominates the O(n) table fill, giving O(n log n) overall.
Lecture 20: Advanced DP (Segmented Least Squares) + Bellman-Ford
- Segmented least squares: partition points into line segments to minimize total error plus a fixed penalty C per segment.
- Uses a DP recurrence with an algorithm that tries all segment start points.
- Bellman-Ford: handles shortest paths even with negative edge weights by repeatedly relaxing all edges and can detect negative cycles.
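The relax-all-edges loop of Bellman-Ford can be sketched directly (an illustrative version; vertices are assumed to be 0..n−1 and edges (u, v, w) tuples):

```python
def bellman_ford(n, edges, s):
    """Relax every edge n-1 times; if some edge can still be relaxed
    afterwards, a negative cycle is reachable from s."""
    INF = float('inf')
    dist = [INF] * n
    dist[s] = 0
    for _ in range(n - 1):             # any simple path has <= n-1 edges
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # one extra pass: a remaining improvement witnesses a negative cycle
    has_neg_cycle = any(dist[u] + w < dist[v] for u, v, w in edges)
    return dist, has_neg_cycle
```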
Lecture 21: Network Flow Basics — Max-Flow / Min-Cut Setup
- Wraps up quiz material (fractional knapsack idea: sort by value/weight and fill greedily; segmented least squares speed-up via precomputing errors).
- Introduces a flow network model (directed graph with capacities), with a source s and sink t, and the goal of sending as much flow as possible.
- Defines the two core constraints for feasible flow: capacity limits on edges and flow conservation at intermediate vertices (no “storing” flow), then motivates min-cut as a way to reason about optimality.
Lecture 22: Residual Networks, Augmenting Paths, and Ford–Fulkerson
- Shows why a naive/greedy choice of an s–t path can get stuck before reaching maximum flow.
- Introduces the residual network idea (forward residual capacity + backward edges that “cancel” existing flow) to enable fixing earlier choices.
- Defines augmenting paths and uses them to build the Ford–Fulkerson method; connects this to the max-flow/min-cut relationship and demonstrates step-by-step augmentation.
Lecture 23: Edmonds–Karp Max Flow + Intro to Polynomial-Time Reductions
- Explains the Ford–Fulkerson runtime issue (can take many iterations depending on capacities / poor path choices).
- Presents Edmonds–Karp: always choose the shortest augmenting path (by number of edges) using BFS, giving a polynomial-time bound.
- Shifts to computational intractability: classifies problem types (decision/optimization/search) and defines polynomial-time reductions to compare hardness, with example reductions (e.g., vertex cover vs independent set style relationships).
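The BFS-guided augmentation rule can be sketched as follows (an illustrative Edmonds–Karp version over a capacity matrix; `cap[u][v]` is 0 where no edge exists, and negative flow on `flow[v][u]` encodes the residual back-edge):

```python
from collections import deque

def edmonds_karp(cap, s, t):
    """Max flow: repeatedly BFS for the fewest-edge augmenting path in the
    residual network, then push the bottleneck capacity along it."""
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:   # BFS over positive-residual edges
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:            # no augmenting path: flow is maximum
            return total
        b, v = float('inf'), t         # bottleneck residual capacity
        while v != s:
            u = parent[v]
            b = min(b, cap[u][v] - flow[u][v])
            v = u
        v = t                          # augment: forward add, backward cancel
        while v != s:
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b
            v = u
        total += b
```

Choosing shortest paths bounds the number of augmentations by O(VE), independent of the capacity values, which fixes the Ford–Fulkerson runtime issue noted above.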
Lecture 24: Efficient Certification and the Definition of NP
- Defines class P (decision problems solvable in polynomial time) vs class NP (decision problems verifiable in polynomial time).
- Explains certificates and verifiers with standard examples like Independent Set and Vertex Cover (what the certificate looks like and what the verifier checks).
- Discusses the central open question of whether P = NP and briefly situates problems beyond NP (where even verification isn't known to be polynomial-time).
Lecture 25: NP-Hard vs NP-Complete + SAT as the Starting Point
- Distinguishes NP-hard (everything in NP reduces to it) from NP-complete (NP-hard AND in NP).
- Outlines the standard blueprint to prove NP-complete: (1) show the problem is in NP (give certificate/verifier), (2) reduce a known NP-complete problem to it.
- Introduces SAT foundations: Cook–Levin (Circuit-SAT is NP-complete) and the route to 3-SAT NP-completeness (show 3-SAT ∈ NP and reduce Circuit-SAT to 3-SAT).