# Computing a Topological Ordering

## Principles

• An edge $(v,w)$ means vertex $v$ points to vertex $w$.
• A graph can have more than one valid topological ordering.

• Not every graph has a topological ordering.

• It is impossible to topologically order the vertices of a graph that contains a directed cycle.
• Directed cycles are the only obstruction to topological orderings (A directed graph without any directed cycles is called directed acyclic graph, or simply a DAG).
• Every DAG Has a Topological Ordering.
• A sink vertex is one with no outgoing edges.

## Pseudocode

The global variable curLabel keeps track of where we are in the topological ordering. Our algorithm will compute an ordering in reverse order, so curLabel counts down from the number of vertices to 1.
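The scheme above can be sketched in Python (a recursive sketch assuming the graph is given as a dict mapping each vertex to a list of its out-neighbors, and that the graph is a DAG):

```python
def topo_sort(graph):
    """Topological sort via DFS; graph maps vertex -> list of out-neighbors.

    Assumes graph is a DAG. Returns f, where f[v] is v's position in a
    topological ordering (1-indexed).
    """
    visited = set()
    f = {}
    cur_label = len(graph)  # counts down from n to 1

    def dfs_topo(s):
        nonlocal cur_label
        visited.add(s)
        for v in graph[s]:
            if v not in visited:
                dfs_topo(v)
        f[s] = cur_label  # s gets the last remaining position
        cur_label -= 1

    for v in graph:
        if v not in visited:
            dfs_topo(v)
    return f
```

On the diamond graph s → {v, w} → t, both valid orderings place s first and t last.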

# Computing Strongly Connected Components

## Principles

A strongly connected component or SCC of a directed graph is a maximal subset $S \subseteq V$ of vertices such that there is a directed path from any vertex in $S$ to any other vertex in $S$. For example, the strongly connected components of the graph in Figure 8.16 are $\{1, 3, 5\}$, $\{11\}$, $\{2, 4, 7, 9\}$, and $\{6, 8, 10\}$. Within each component, it’s possible to get from anywhere to anywhere else (as you should check). Each component is maximal subject to this property, as there’s no way to “move to the left” from one SCC to another.

Intuitively, we want to first discover a “sink SCC,” meaning an SCC with no outgoing edges (like SCC#4 in Figure 8.16), and then work backward.

Our algorithm will use two passes of depth-first search. The first pass computes a magical ordering in which to process the vertices, and the second follows this ordering to discover the SCCs one by one. This two-pass strategy is known as Kosaraju’s algorithm.

With a directed acyclic graph $G$, the vertex positions constitute a topological ordering, and the vertex in the last position must be a sink vertex of $G$, with no outgoing edges. (Any such edges would travel backward in the ordering.) Perhaps with a general directed graph $G$, the vertex in the last position always belongs to a sink SCC? Sadly, no. If we label each SCC of $G$ with the smallest position of one of its vertices, these labels constitute a topological ordering of the meta-graph of SCCs defined in Proposition 8.9.

Proposition 8.9 (The SCC Meta-Graph Is Directed Acyclic):

Let $G = (V,E)$ be a directed graph. Define the corresponding meta-graph $H = (X, F)$ with one meta-vertex $x \in X$ per SCC of $G$ and a meta-edge $(x, y)$ in $F$ whenever there is an edge in $G$ from a vertex in the SCC corresponding to $x$ to one in the SCC corresponding to $y$. Then $H$ is a directed acyclic graph.

Theorem 8.10 (Topological Ordering of the SCCs):

Let $G$ be a directed graph, with the vertices ordered arbitrarily, and for each vertex $v \in V$ let $f(v)$ denote the position of $v$ computed by the TopoSort algorithm. Let $S_1, S_2$ denote two SCCs of $G$, and suppose $G$ has an edge $(v,w)$ with $v \in S_1$ and $w \in S_2$. Then,
$$\min_{x \in S_1} f(x) < \min_{y \in S_2} f(y)$$

Summarizing, after one pass of depth-first search, we can immediately identify a vertex in a source SCC. The only problem? We want to identify a vertex in a sink SCC. The fix? Reverse the graph first.

## Pseudocode

Implementation details:

1. A smarter implementation runs the TopoSort algorithm backward in the original input graph, by replacing the clause “each edge $(s, v)$ in s’s outgoing adjacency list” in the DFS-Topo subroutine with
“each edge $(v, s)$ in s’s incoming adjacency list.”
2. For best results, the first pass of depth-first search should export an array that contains the vertices (or pointers to them) in order of their positions, so that the second pass can process them with a simple array scan. This adds only constant overhead to the TopoSort subroutine (as you should check).
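Putting the two passes together, here is a sketch of Kosaraju’s algorithm (the graph representation, a dict mapping every vertex to its out-neighbor list, is an assumption; DFS is iterative to sidestep Python’s recursion limit):

```python
from collections import defaultdict

def kosaraju(graph):
    """Compute SCCs of a directed graph (vertex -> list of out-neighbors).

    First pass: DFS on the reversed graph records a finishing order.
    Second pass: DFS on the original graph, in reverse finishing order,
    peels off the SCCs one by one.
    """
    rev = defaultdict(list)
    for v in graph:
        rev[v]  # ensure every vertex appears as a key
        for w in graph[v]:
            rev[w].append(v)

    visited = set()

    def dfs(adj, s, on_finish):
        # Iterative DFS; calls on_finish(v) when v's exploration completes.
        visited.add(s)
        stack = [(s, iter(adj[s]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w not in visited:
                    visited.add(w)
                    stack.append((w, iter(adj[w])))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                on_finish(v)

    order = []  # vertices in increasing finishing time (first pass)
    for v in rev:
        if v not in visited:
            dfs(rev, v, order.append)

    visited.clear()
    scc_of = {}
    for v in reversed(order):
        if v not in visited:
            members = []
            dfs(graph, v, members.append)
            for u in members:
                scc_of[u] = v  # label each SCC by the vertex that started it
    return scc_of
```

Vertices sharing a label belong to the same SCC.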

# Dijkstra’s Shortest-Path Algorithm

## Principles

Dijkstra’s algorithm solves the single-source shortest path problem.

You should not use Dijkstra’s algorithm in applications with negative edge lengths.

Remember that breadth-first search computes the minimum number of edges in a path from the starting vertex to every other vertex. This is the special case of the single-source shortest path problem in which every edge has length 1. We saw in Quiz 9.1 that, with general nonnegative edge lengths, a shortest path need not be a path with the fewest number of edges. Many applications of shortest paths, such as computing driving directions or a sequence of financial transactions, inevitably involve edges with different lengths.

## Pseudocode

When all the edges have length 1, it’s equivalent to breadth-first search.

## Implementation

The object heapq operates on is a plain Python list, but its elements should be tuples; if an element is itself a list, heappop compares on the sublist’s first item. Constructing entries as (a, [b, c]) lets you mutate the list part while the key a stays fixed. A list built up solely with heappush needs no prior heapify before heappop, but if you modify the list’s contents in place, you must call heapify to restore the heap invariant before further heappush calls.
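As one concrete use of heapq, here is a sketch of Dijkstra’s algorithm with (distance, vertex) tuples; since heapq has no decrease-key operation, stale heap entries are simply skipped when popped (the graph representation, a dict mapping vertex to a list of (neighbor, length) pairs, is an assumption):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths with nonnegative edge lengths.

    graph maps vertex -> list of (neighbor, length) pairs.
    """
    dist = {source: 0}
    heap = [(0, source)]
    finalized = set()
    while heap:
        d, v = heapq.heappop(heap)
        if v in finalized:
            continue  # stale entry left over from an earlier push
        finalized.add(v)
        for w, length in graph[v]:
            if d + length < dist.get(w, float('inf')):
                dist[w] = d + length
                heapq.heappush(heap, (dist[w], w))
    return dist
```

When every length is 1, the heap pops vertices in the same order breadth-first search would discover them.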

# The Heap Data Structure

A heap is a data structure that keeps track of an evolving set of objects with keys and can quickly identify the object with the smallest key.

## Supported Operations

Basic Operations:

• Insert: given a heap $H$ and a new object $x$, add $x$ to $H$.
• Extract-Min: given a heap $H$, remove and return from $H$ an object with the smallest key (or a pointer to it).

Extra Operations:

• Find-Min: given a heap $H$, return an object with the smallest key (or a pointer to it).
• Heapify: given objects $x_1, . . . ,x_n$, create a heap containing them.
• Delete: given a heap $H$ and a pointer to an object $x$ in $H$, delete $x$ from $H$.

When to Use a Heap:

If your application requires fast minimum (or maximum) computations on a dynamically changing set of objects, the heap is usually the data structure of choice.

Whenever you see an algorithm or program with lots of brute-force minimum or maximum computations, a light bulb should go off in your head: This calls out for a heap!

## Application

### Median Maintenance

Recall that the median of a collection of numbers is its “middle element.” In an array with odd length $2k − 1$, the median is the $k$th order statistic (that is, the $k$th-smallest element). In an array with even length $2k$, both the $k$th and $(k + 1)$th order statistics are considered median elements.

For a less obvious application of heaps, let’s consider the median maintenance problem. You are presented with a sequence of numbers, one by one; assume for simplicity that they are distinct. Each time you receive a new number, your responsibility is to reply with the median element of all the numbers you’ve seen thus far.

Using heaps, we can solve the median maintenance problem in just logarithmic time per round. The key idea is to maintain two heaps $H_1$ and $H_2$ while satisfying two invariants. The first invariant is that $H_1$ and $H_2$ are balanced, meaning they each contain the same number of elements (after an even round) or that one contains exactly one more element than the other (after an odd round). The second invariant is that $H_1$ and $H_2$ are ordered, meaning every element in $H_1$ is smaller than every element in $H_2$.

For example, if the numbers so far have been $1, 2, 3, 4, 5$, then $H_1$ stores $1$ and $2$ and $H_2$ stores $4$ and $5$; the median element $3$ is allowed to go in either one, as either the maximum element of $H_1$ or the minimum element of $H_2$. If we’ve seen $1, 2, 3, 4, 5, 6$, then the first three numbers are in $H_1$ and the second three are in $H_2$; both the maximum element of $H_1$ and the minimum element of $H_2$ are median elements. One twist: $H_2$ will be a standard heap, supporting Insert and Extract-Min, while $H_1$ will be the “max” variant, supporting Insert and Extract-Max. This way, we can extract the median element with one heap operation, whether it’s in $H_1$ or $H_2$.

We still must explain how to update $H_1$ and $H_2$ each time a new element arrives so that they remain balanced and ordered. To figure out where to insert a new element $x$ so that the heaps remain ordered, it’s enough to compute the maximum element $y$ in $H_1$ and the minimum element $z$ in $H_2$. If $x$ is less than $y$, it has to go in $H_1$; if it’s more than $z$, it has to go in $H_2$; if it’s in between, it can go in either one. Except for one case: the insertion can leave one heap with two more elements than the other. The fix is to rebalance, extracting the maximum of $H_1$ (or the minimum of $H_2$, whichever heap is bigger) and inserting it into the other heap, which restores both invariants.
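This two-heap logic can be sketched with Python’s heapq, which provides only a min-heap, so the max-heap side stores negated keys (the class and method names are illustrative):

```python
import heapq

class MedianMaintainer:
    """Maintain the running median of a stream using two heaps.

    lo is a max-heap (keys stored negated) for the smaller half; hi is a
    min-heap for the larger half. lo holds the median after odd rounds.
    """
    def __init__(self):
        self.lo = []  # max-heap via negated keys
        self.hi = []  # min-heap

    def push(self, x):
        if self.lo and x > -self.lo[0]:
            heapq.heappush(self.hi, x)
        else:
            heapq.heappush(self.lo, -x)
        # Rebalance so the heap sizes differ by at most one.
        if len(self.lo) > len(self.hi) + 1:
            heapq.heappush(self.hi, -heapq.heappop(self.lo))
        elif len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def median(self):
        return -self.lo[0]  # lo never has fewer elements than hi
```

Each push does a constant number of heap operations, so the cost per round is logarithmic.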

## Implementation Details

There are two ways to visualize objects in a heap, as a tree (better for pictures and exposition) or as an array (better for an implementation).

### Heaps as Trees

A heap can be viewed as a rooted binary tree.

Duplicate keys are allowed, and the same set of keys can generally be arranged into more than one valid heap.

### Heaps as Arrays

In our minds we visualize a heap as a tree, but in an implementation we use an array with length equal to the maximum number of objects we expect to store.

The first element of the array corresponds to the tree’s root, the next two elements to the next level of the tree (in the same order), and so on (Figure 10.5).
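With a 0-indexed array (as in a Python list), this correspondence is simple index arithmetic; the helper names below are illustrative:

```python
def parent(i):
    # Root lives at index 0; every other node's parent is at (i - 1) // 2.
    return (i - 1) // 2

def children(i):
    # Left and right children of the node at index i.
    return 2 * i + 1, 2 * i + 2
```

No explicit pointers are needed: navigating the tree is just arithmetic on array indices.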

### Implementing Insert in $O(\log n)$ Time

Insert operation: given a heap $H$ and a new object $x$, add $x$ to $H$.

After $x$’s addition to $H$, $H$ should still correspond to a full binary tree (with one more node than before) that satisfies the heap property. The operation should take $O(\log n)$ time, where $n$ is the number of objects in the heap.

Because a heap is a full binary tree, it has $\approx \log_2 n$ levels, where $n$ is the number of objects in the heap. The number of swaps is at most the number of levels, and only a constant amount of work is required per swap. We conclude that the worst-case running time of the Insert operation is $O(\log n)$, as desired.
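A sketch of the bubble-up implementation on a Python list (the function name is an assumption):

```python
def heap_insert(heap, x):
    """Append x and bubble it up until the heap property is restored.

    heap is a Python list holding keys in heap order (root at index 0).
    """
    heap.append(x)  # new last leaf
    i = len(heap) - 1
    # Swap with the parent while the parent's key is bigger.
    while i > 0 and heap[(i - 1) // 2] > heap[i]:
        heap[(i - 1) // 2], heap[i] = heap[i], heap[(i - 1) // 2]
        i = (i - 1) // 2
```

Each swap moves the new key up one level, so at most $\approx \log_2 n$ swaps occur.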

### Implementing Extract-Min in $O(\log n)$ Time

Extract-Min operation: given a heap $H$, remove and return from $H$ an object with the smallest key.

The challenge is to restore the full binary tree and heap properties after ripping out a heap’s root.

The number of swaps is at most the number of levels, and only a constant amount of work is required per swap. Because there are $\approx \log_2 n$ levels, we conclude that the worst-case running time of the Extract-Min operation is $O(\log n)$, where $n$ is the number of objects in the heap.
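A matching sketch of Extract-Min with bubble-down, again on a plain Python list:

```python
def heap_extract_min(heap):
    """Remove and return the smallest key; restore the heap by bubbling down."""
    heap[0], heap[-1] = heap[-1], heap[0]  # move the last leaf to the root
    smallest = heap.pop()
    i, n = 0, len(heap)
    while True:
        l, r = 2 * i + 1, 2 * i + 2
        j = i  # index of the smallest key among i and its children
        if l < n and heap[l] < heap[j]:
            j = l
        if r < n and heap[r] < heap[j]:
            j = r
        if j == i:
            return smallest  # heap property restored
        heap[i], heap[j] = heap[j], heap[i]
        i = j
```

Repeatedly extracting from a heap yields its keys in sorted order (this is the essence of heapsort).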

# Search Trees

A search tree, like a heap, is a data structure for storing an evolving set of objects associated with keys (and possibly lots of other data). It maintains a total ordering over the stored objects, and can support a richer set of operations than a heap, at the expense of increased space and, for some operations, somewhat slower running times.

## Sorted Arrays

A good way to think about a search tree is as a dynamic version of a sorted array—it can do everything a sorted array can do, while also accommodating fast insertions and deletions.

Supported Operations:

• Search: for a key $k$, return a pointer to an object in the data structure with key $k$ (or report that no such object exists). (Implemented with binary search.)
• Min (Max): return a pointer to the object in the data structure with the smallest (respectively, largest) key.
• Predecessor (Successor): given a pointer to an object in the data structure, return a pointer to the object with the next-smallest (respectively, next-largest) key. If the given object has the minimum (respectively, maximum) key, report “none.”
• Output Sorted: output the objects in the data structure one by one in order of their keys.
• Select: given a number $i$, between 1 and the number of objects, return a pointer to the object in the data structure with the $i$th-smallest key.
• Rank: given a key $k$, return the number of objects in the data structure with key at most $k$.

Unsupported Operations:

Insert: given a new object $x$, add $x$ to the data structure.

Delete: for a key $k$, delete an object with key $k$ from the data structure, if one exists.

These two operations aren’t impossible to implement with a sorted array, but they’re painfully slow—inserting or deleting an element while maintaining the sorted array property requires linear time in the worst case. Is there an alternative data structure that replicates all the functionality of a sorted array, while matching the logarithmic-time performance of a heap for the Insert and Delete operations?

## Search Trees

The raison d’être of a search tree is to support all the operations that a sorted array supports, plus insertions and deletions. All the operations except Output-Sorted run in $O(\log n)$ time, where $n$ is the number of objects in the search tree. The Output-Sorted operation runs in $O(n)$ time, and this is as good as it gets (since it must output $n$ objects).

An important caveat: The running times in Table 11.2 are achieved by a balanced search tree, which is a more sophisticated version of the standard binary search tree. These running times are not guaranteed by an unbalanced search tree.

When to Use a Balanced Search Tree:

If your application requires maintaining an ordered representation
of a dynamically changing set of objects, the balanced search tree (or a data structure based on one) is usually the data structure of choice.

The Search Tree Property:

• For every object $x$, objects in $x$’s left subtree have keys smaller than that of $x$.
• For every object $x$, objects in $x$’s right subtree have keys larger than that of $x$.

For example, a search tree containing objects with the keys $\{1, 2, 3, 4, 5\}$ can take several different shapes, all satisfying the property below.

Binary search trees and heaps differ in several ways. Heaps can be thought of as trees, but they are implemented as arrays, with no explicit pointers between objects. A search tree explicitly stores three pointers per object, and hence uses more space (by a constant factor). Heaps don’t need explicit pointers because they always correspond to full binary trees, while binary search trees can have an arbitrary structure.

Heaps are optimized for fast minimum computations, and the heap property—that a child’s key is only bigger than its parent’s key—makes the minimum-key object easy to find (it’s the root). Search trees are optimized for—wait for it—search, and the search tree property is defined accordingly.

The height of a tree is defined as the length of a longest path from its root to a leaf.

### Implementing Search in $O(height)$ Time

Search operation: for a key $k$, return a pointer to an object in the data structure with key $k$ (or report that no such object exists).

1. Start at the root node.
2. Repeatedly traverse left and right child pointers, as appropriate (left if $k$ is less than the current node’s key, right if $k$ is bigger).
3. Return a pointer to an object with key $k$ (if found) or “none” (upon reaching a null pointer).
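The three steps above can be sketched as follows (the Node class is a minimal assumed representation with a key and two child pointers):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def bst_search(root, k):
    """Follow left/right child pointers from the root; O(height) time."""
    node = root
    while node is not None:
        if k == node.key:
            return node
        # Left if k is smaller than the current key, right if bigger.
        node = node.left if k < node.key else node.right
    return None  # reached a null pointer: no object with key k
```

Each step descends one level, so the running time is proportional to the tree’s height.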

### Implementing Min and Max in $O(height)$ Time

Min and Max operations:

Min (Max): return a pointer to the object in the data structure with the smallest (respectively, largest) key.

1. Start at the root node.
2. Traverse left child pointers (right child pointers) as long as possible, until encountering a null pointer.
3. Return a pointer to the last object visited.

### Implementing Predecessor in $O(height)$ Time

The implementation of the Successor operation is analogous.

Predecessor: given a pointer to an object in the data structure, return a pointer to the object with the next-smallest key. (If the object has the minimum key, report “none.”)

1. If $x$’s left subtree is non-empty, return the result of Max applied to this subtree.
2. Otherwise, traverse parent pointers upward toward the root. If the traversal visits consecutive nodes $y$ and $z$ with y a right child of $z$, return a pointer to $z$.
3. Otherwise, report “none.”

### Implementing Output-Sorted in $O(n)$ Time

Output-Sorted: output the objects in the data structure one by one in order of their keys.

1. Recursively call Output-Sorted on the root’s left subtree.
2. Output the object at the root.
3. Recursively call Output-Sorted on the root’s right subtree.

For a tree containing $n$ objects, the operation performs $n$ recursive calls (one initiated at each node) and does a constant amount of work in each, for a total running time of $O(n)$.
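A sketch of the in-order traversal, representing each node as a (key, left, right) tuple for brevity:

```python
def output_sorted(node, out):
    """In-order traversal: left subtree, then root, then right subtree.

    node is (key, left, right) with None for missing children; keys are
    appended to out in sorted order.
    """
    if node is None:
        return
    key, left, right = node
    output_sorted(left, out)   # everything smaller than key
    out.append(key)
    output_sorted(right, out)  # everything bigger than key
```

The search tree property guarantees the left subtree’s keys precede the root’s, which precede the right subtree’s.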

### Implementing Insert in $O(height)$ Time

Unlike the preceding operations, Insert modifies the tree.

Insert: given a new object $x$, add $x$ to the data structure.

1. Start at the root node.
2. Repeatedly traverse left and right child pointers, as appropriate (left if $x$’s key is at most the current node’s key, right if it’s bigger), until a null pointer is encountered.
3. Replace the null pointer with one to the new object. Set the new node’s parent pointer to its parent, and its child pointers to null.
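A sketch in Python, using a minimal Node with parent and child pointers (the names are assumptions):

```python
class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.parent = key, None, None, None

def bst_insert(root, x):
    """Walk down as in Search until a null pointer, then attach x there.

    Returns the (possibly new) root of the tree.
    """
    if root is None:
        return x  # x becomes the root of a previously empty tree
    node = root
    while True:
        if x.key <= node.key:
            if node.left is None:
                node.left, x.parent = x, node
                return root
            node = node.left
        else:
            if node.right is None:
                node.right, x.parent = x, node
                return root
            node = node.right
```

Note that repeated insertions in sorted order produce a chain of height $n - 1$, which is why balanced variants matter.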

### Implementing Delete in $O(height)$ Time

Delete: for a key $k$, delete an object with key $k$ from the search tree, if one exists.

1. Use Search to locate an object $x$ with key $k$. (If no such object exists, halt.)

2. If $x$ has no children, delete $x$ by setting the appropriate child pointer of $x$’s parent to null. (If $x$ was the root, the new tree is empty.)

3. If $x$ has one child, splice $x$ out by rewiring the appropriate child pointer of $x$’s parent to $x$’s child, and the parent pointer of $x$’s child to $x$’s parent. (If $x$ was the root, its child becomes the new root.)

4. Otherwise, swap $x$ with the object in its left subtree that has the biggest key (Predecessor), and delete $x$ from its new position (where it has at most one child).

### Augmented Search Trees for Select

Select: given a number $i$, between $1$ and the number of objects, return a pointer to the object in the data structure with the $i$th-smallest key.

1. Start at the root and let $j$ be the size of its left subtree. (If it has no left child pointer, then $j = 0$.)
2. If $i = j + 1$, return a pointer to the root.
3. If $i < j + 1$, recursively compute the $i$th-smallest key in the left subtree.
4. If $i > j + 1$, recursively compute the $(i − j − 1)$th smallest key in the right subtree.

Because each node of the search tree stores the size of its subtree, each recursive call performs only a constant amount of work. Each recursive call proceeds further downward in the tree, so the total amount of work is $O(height)$.
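A sketch of Select on an augmented tree, with each node represented as a (key, left, right, size) tuple, where size is the node count of the subtree rooted there:

```python
def select(node, i):
    """Return the i-th smallest key (1-indexed) in the subtree at node."""
    key, left, right, size = node
    j = left[3] if left is not None else 0  # size of the left subtree
    if i == j + 1:
        return key                       # the root itself is the answer
    if i < j + 1:
        return select(left, i)           # answer lies in the left subtree
    return select(right, i - j - 1)      # skip left subtree and root
```

The stored sizes are what make each step constant-time; without them, computing $j$ would require traversing the whole left subtree.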

## Balanced Search Trees

Balanced search trees guarantee $O(\log n)$ height.

### Rotations

All the most common implementations of balanced search trees use rotations, a constant-time operation that performs a modest amount of local rebalancing while preserving the search tree property.

I encourage readers interested in what’s under the hood of a balanced search tree to check out a textbook treatment or explore the open-source implementations and visualization demos that are freely available online.

# Hash Table / Hash Map

The raison d’être of a hash table is to facilitate super-fast searches, which are also called lookups in this context. A hash table can tell you what’s there and what’s not, and can do it really, really quickly (much faster than a heap or search tree).

Supported Operations:

• Lookup (a.k.a. Search): for a key $k$, return a pointer to an object in the hash table with key $k$ (or report that no such object exists).
• Insert: given a new object $x$, add $x$ to the hash table.
• Delete: for a key $k$, delete an object with key $k$ from the hash table, if one exists.

In a hash table, all these operations typically run in constant time.

When to Use a Hash Table:

If your application requires fast lookups with a dynamically changing set of objects, the hash table is usually the data structure of choice.

## Application

### De-duplication

When a new object $x$ with key $k$ arrives:

1. Use Lookup to check if the hash table already contains an object with key $k$.
2. If not, use Insert to put $x$ in the hash table.

### The 2-SUM Problem

Input: An unsorted array $A$ of $n$ integers, and a target integer $t$.

Goal: Determine whether or not there are two numbers $x, y$ in $A$ satisfying $x + y = t$.

OJ: Two Sum
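The one-pass hash-table solution can be sketched as follows (note that this version treats two equal entries at different array positions as a valid pair; the programming exercise later additionally requires $x$ and $y$ to be distinct numbers):

```python
def two_sum(A, t):
    """Return True if two numbers x, y in A satisfy x + y = t.

    One pass with a hash set: for each x, check whether t - x was
    already seen. O(n) expected time instead of the brute-force O(n^2).
    """
    seen = set()
    for x in A:
        if t - x in seen:
            return True
        seen.add(x)
    return False
```

Sorting A and scanning with two pointers is an alternative $O(n \log n)$ approach, but the hash set makes each lookup constant time in expectation.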

Consider $n$ people with random birthdays, with each of the $366$ days of the year equally likely. (Assume all $n$ people were born in a leap year.) How large does $n$ need to be before there is at least a $50\%$ chance that two people have the same birthday?

The answer is $23$; see the birthday paradox.

The birthday paradox implies that, even for the gold standard, we’re likely to start seeing collisions in a hash table of size $n$ once a small constant times $\sqrt{n}$ objects have been inserted. For example, when $n = 10{,}000$, the insertion of $200$ objects is likely to cause at least one collision—even though at least $98\%$ of the array positions are completely unused!

## Collision Resolution

With collisions an inevitable fact of life, a hash table needs some method for resolving them.

There are two dominant approaches: separate chaining (or simply chaining) and open addressing.

### Chaining

With chaining, the positions of the array are often called buckets, as each can contain multiple objects.

Chaining:

1. Keep a linked list in each bucket of the hash table.
2. To Lookup/Insert/Delete an object with key $k$, perform Lookup/Insert/Delete on the linked list in the bucket $A[h(k)]$, where $h$ denotes the hash function and $A$ the hash table’s array.
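A minimal chaining sketch in Python (the class name, bucket count, and use of Python’s built-in hash are illustrative choices; each bucket is a plain list rather than a linked list):

```python
class ChainedHashTable:
    """Hash table with separate chaining; each bucket holds (key, value) pairs."""
    def __init__(self, n_buckets=10):
        self.buckets = [[] for _ in range(n_buckets)]

    def _bucket(self, k):
        # h(k) mod the number of buckets picks the chain for key k.
        return self.buckets[hash(k) % len(self.buckets)]

    def insert(self, k, value):
        bucket = self._bucket(k)
        for i, (key, _) in enumerate(bucket):
            if key == k:
                bucket[i] = (k, value)  # overwrite an existing key
                return
        bucket.append((k, value))

    def lookup(self, k):
        for key, value in self._bucket(k):
            if key == k:
                return value
        return None  # no object with key k
```

With a good hash function and enough buckets, chains stay short and all operations run in constant expected time.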

### Open Addressing

Open addressing is much easier to implement and understand when the hash table must support only Insert and Lookup (and not Delete). With open addressing, each position of the array stores 0 or 1 objects, rather than a list.

Where do we put an object with key $k$ if a different object is already stored in the position $A[h(k)]$? The idea is to associate each key k with a probe sequence of positions, not just a single position.

The first number of the sequence indicates the position to consider first; the second the next position to consider when the first is already occupied; and so on. The object is stored in the first unoccupied position of its key’s probe sequence (see Figure 12.4).

1. Insert: Given an object with key $k$, iterate through the probe sequence associated with $k$, storing the object in the first empty position found.
2. Lookup: Given a key $k$, iterate through the probe sequence associated with $k$ until encountering the desired object (in which case, return it) or an empty position (in which case, report “none”).

There are several ways to use one or more hash functions to define a probe sequence. The simplest is linear probing. A more sophisticated method is double hashing.

#### Linear Probing

This method uses one hash function $h$, and defines the probe sequence for a key $k$ as $h(k)$, followed by $h(k)+1$, followed by $h(k)+2$, and so on (wrapping around to the beginning upon reaching the last position).
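A sketch of the resulting probe sequence (the function name, table size, and sequence length are assumptions):

```python
def linear_probe_sequence(h_k, n, limit=5):
    """First `limit` positions probed for a key with hash value h_k,
    in a table of size n (wrapping around upon reaching the end)."""
    return [(h_k + i) % n for i in range(limit)]
```

In a table of size 10, a key hashing to position 8 is probed at 8, 9, and then wraps to 0, 1, 2.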

#### Double Hashing

A more sophisticated method is double hashing, which uses two hash functions. The first tells you the first position of the probe sequence, and the second indicates the offset for subsequent positions. For example, if $h_1(k) = 17$ and $h_2(k) = 23$, the first place to look for an object with key $k$ is position $17$; failing that, position $40$; failing that, position $63$; failing that, position $86$; and so on. For a different key $k^{\prime}$, the probe sequence could look quite different. For example, if $h_1(k^{\prime}) = 42$ and $h_2(k^{\prime}) = 27$, the probe sequence would be $42$, followed by $69$, followed by $96$, followed by $123$, and so on.
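The probe sequences from the example can be reproduced as follows (a table size of 1000 is assumed here so the positions match the text without wrapping; the function name is also an assumption):

```python
def double_hash_sequence(h1_k, h2_k, n, limit=4):
    """First `limit` positions probed under double hashing in a table
    of size n: start at h1(k) and step by h2(k), wrapping around."""
    return [(h1_k + i * h2_k) % n for i in range(limit)]
```

Because the step size depends on the key, two keys that collide at their first position still follow different probe sequences, unlike with linear probing.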

### What Makes for a Good Hash Function?

How can we choose a hash function so that there aren’t too many collisions?

As of this writing (in 2018), hash functions that are good starting points for further exploration include FarmHash, MurmurHash3, SpookyHash and MD5. These are all non-cryptographic hash functions, and are not designed to protect against adversarial attacks like that of Crosby and Wallach (see footnote 12). Cryptographic hash functions are more complicated and slower to evaluate than their non-cryptographic counterparts, but they do protect against such attacks. A good starting point here is the hash function SHA-1 and its newer relatives like SHA-256.

## Programming Exercise

The file contains 1 million integers, both positive and negative (there might be some repetitions!)

Your task is to compute the number of target values t in the interval $[-10000,10000]$ (inclusive) such that there are distinct numbers $x,y$ in the input file that satisfy $x+y=t$.

# Bloom Filters

Bloom filters are close cousins of hash tables. They are ridiculously space-efficient but, in exchange, they occasionally make errors.

The raison d’être of a bloom filter is essentially the same as that of a hash table: super-fast insertions and lookups. Why should we bother with another data structure with the same set of operations? Because bloom filters are preferable to hash tables in applications in which space is at a premium and the occasional error is not a dealbreaker.

Like hash tables with open addressing, bloom filters are much easier to implement and understand when they support only Insert and Lookup (and no Delete).

Supported Operations:

Lookup: for a key $k$, return “yes” if $k$ has been previously inserted into the bloom filter and “no” otherwise.

Insert: add a new key $k$ to the bloom filter.
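These two operations can be sketched as follows (the bit-array size, the number of hash functions, and the scheme of deriving positions from SHA-256 digests are all illustrative choices, not a prescribed design):

```python
import hashlib

class BloomFilter:
    """Bit array of length n probed by m hash functions derived from SHA-256.

    Lookup may return false positives, but never false negatives.
    """
    def __init__(self, n=1024, m=4):
        self.bits = bytearray(n)  # one byte per "bit", for simplicity
        self.n, self.m = n, m

    def _positions(self, k):
        # Derive m positions by hashing the key with m different prefixes.
        for i in range(self.m):
            digest = hashlib.sha256(f"{i}:{k}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.n

    def insert(self, k):
        for p in self._positions(k):
            self.bits[p] = 1

    def lookup(self, k):
        # "Yes" only if all m positions are set; any zero bit proves
        # the key was never inserted.
        return all(self.bits[p] for p in self._positions(k))
```

A real implementation would pack the bits tightly (8 per byte), which is where the space savings over a hash table come from.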

Bloom Filters Vs. Hash Tables:

1. Pro: More space efficient.
2. Pro: Guaranteed constant-time operations for every data set.
3. Con: Can’t store pointers to objects.
4. Con: Deletions are complicated, relative to a hash table with chaining.
5. Con: Non-zero false positive probability.

When to Use a Bloom Filter:

If your application requires fast lookups with a dynamically changing set of objects, space is at a premium, and a small number of false positives can be tolerated, the bloom filter is usually the data structure of choice.

A canonical application is a spell checker: insert every word of a dictionary into the bloom filter, then look up each word of a document. Words reported absent are definitely misspelled, while the occasional false positive lets a misspelling slip through.