Very Large Scale Integration CAD — Part II — Layout

This is the summary of the course VLSI CAD Part II: Layout from the University of Illinois at Urbana-Champaign. The author of the course is Rob A. Rutenbar. Here’s the summary of the 1st course in the series.

Technology Mapping — transforming Boolean functions/networks into logic gates.

Standard Cell Library — a library which contains «standard cells» — logic gates and small logic elements (XOR, NOR, NAND, flip-flop, adder, etc.)

A standard cell can be considered a geometric container (a black box of known size) with some transistors and wires inside (at the topology level). All standard cells have the same height but may have different widths. The uniform height allows us to place cells in rows.

Netlist = gates + wires (nets)

Placement

Placement: netlist --> exact location of each gate.
Goal: minimize (optimize) the estimated (expected) length of wires used at the routing step.

Very simple model of a chip for placement: «chess board» (grid), where each gate takes exactly one cell; pins that connect gates are fixed at the edges.

2-point nets and multipoint nets.
Fanout (branching) — when a single net connects more than 2 gates.

How to estimate the length of a wire?
Most famous estimation method: Half Perimeter Wirelength (HPWL), also known as Bounding Box (BBOX).
Put the smallest bounding box around all gates that are connected. Add the width of the bounding box to its height — that will be the wirelength estimate: max(x) - min(x) + max(y) - min(y).

2-point net. How to estimate wire length.
k-point net. How to estimate wire length.

Random Iterative Improvement Placement:

  1. Place each gate randomly at a free grid location.
  2. Pick a random gate pair and swap their locations. Update the wirelength estimate incrementally (recalculate only those nets that may have changed).
  3. If the wirelength estimate got bigger, undo the swap.
  4. Go to step 2 until the wirelength estimate stops improving.
Iterative Improvement Placement Algorithm. One step: swap gates 2 and 4.

The issue with the previous algorithm is that we may get stuck in a local minimum of wire length. A solution is to sometimes allow the wire length to increase.
Iterative Improvement with Hill Climbing:

  1. Place each gate randomly at a free grid location.
  2. Pick a random gate pair and swap their locations. Update the wirelength estimate incrementally (recalculate only those nets that may have changed).
  3. If the wirelength estimate got bigger, either discard the swap or accept it randomly (the probability of acceptance depends on delta L: p = exp(-dL / T), where T is the «temperature»): if (uniform_random() < p) then accept.
  4. Go to step 2 until the wirelength estimate stops improving.

Simulated Annealing (a placement algorithm inspired by a physical analogy):
Crystal analogy. We want all the atoms to be lined up at crystal lattice sites (the lowest-energy state of the atoms). How to do it: get the material very HOT, so that the atoms have enough energy to move around, even to bad places. Then cool very, very slowly so that the atoms settle in good spots. The idea has a name: the Metropolis algorithm:

while (wirelength decreases)
{
    Loop Iterative Improvement with Hill Climbing
   
    // Cool down slowly
    T = T * 0.90;
}

Net congestion — an estimate that shows how many wires (nets) use the same cell of the grid. How to calculate it:

congestion[W][H]; // W - grid width, H - grid height
for each net
{
    n = calculate the number of gates in the bounding box of the net;
   
    for each cell(x, y) in the bounding box of the net
    {
        congestion[x][y] += n;
    }
}

Annealing makes the congestion map homogeneous. Annealing works well for 100K-500K gates but is too slow for 1M+ gates.

Analytical placement. Quadratic wirelength model: for a 2-point net the quadratic length = (x1-x2)^2 + (y1-y2)^2. For a k-point net (a net that connects k gates) we consider the net as (k-1) + (k-2) + ... + 1 = k*(k-1)/2 wires that connect each gate with all other gates (fully connected net or «a clique»).
The estimated wirelength of a k-point net = Sum(quadratic length of each wire) / (k-1)

Take partial derivatives with respect to xi and yi and set them to zero. This yields a set of linear equations:

A * x = bx
A * y = by
A is an N * N matrix, where N is the number of gates.

Connectivity matrix. C[i, j] = weight of the wire connecting gates i and j.
Weighted quadratic wire length. Each quadratic length has a different weight.
Connectivity matrix ignores the pads:

A-matrix[i, j] = -C[i, j] if not on the diagonal
A-matrix[i, i] = sum(C[row i]) + (ith gate connected to a pad ? padwireweight_i : 0)
bx[i] = ith gate connected to a pad ? padwireweight_i * pad_x : 0
The same for by.
Analytical Placement Algorithm

A-matrix is sparse, symmetric, diagonally-dominant — positive semidefinite.
We use iterative approximate solvers that converge gradually to the solution.

Analytical Placement Recursive Partitioning
Partitioning: how do we take into account that each gate must be placed in a discrete cell of the grid (so its x and y cannot change smoothly), that each gate has a nonzero size, and that gates cannot sit on top of each other?
Recursive partitioning: after solving a placement task, divide the chip into several smaller placement tasks, and so on.

Partition: how do we divide the chip into new smaller placement tasks?
Assignment: which gates should go into each new smaller region?
Containment: formulate new Quadratic Placement matrix so gates stay in new regions with short wires (taking into account that some gates from one region have connections with gates from other regions).

«PROUD» algorithm of placement (Ren Song Tsay, Ernest Kuh et al.)
Partitioning: after the first placement is complete divide the chip into two parts (left and right).
Assignment: sort the gates by X coordinate. One half of the sorted gates goes to the left and the other half goes to the right.
Containment: for each gate from the left part which has a connection with a gate from the right part, create a pseudopad at the border between the left and the right parts of the chip. Each pseudopad gets the Y coordinate of the connected gate from the right part. This is called pad propagation. We propagate not only pseudopads corresponding to gates, we also propagate real pads if needed.
Proceed recursively until the number of gates in one region reaches a small enough value (10…100).
Finally we need to force gates into precise rows (standard cell rows) — this step is called «legalization». It is related to annealing but is covered only superficially in the course.

PROUD Algorithm of Placement

Technology mapping

Boolean nodes --> standard cells. Each standard cell has some cost.
Standard cell library may have complex gates in it (for example AOI21). AOI21 means AND-OR-Invert: a 2-input AND feeds an OR gate that has 1 more direct input, and the result is inverted.

Complex Gate

Multilevel synthesis produces neither logic gates nor standard cells. This is why its output is called uncommitted logic or technology-independent logic — logic bubbles (each bubble is a sum of products — SOP). Each bubble is converted to a combination of NOT & NAND gates.

Convert Nodes to Gates

Tree covering problem. The logic network is represented as a tree of NAND and NOT gates. Standard cells are also represented in this same form (must be converted to a tree of NAND and NOT gates).
There can be many covers of the same tree. Pick the one with the minimum cost.
Tree-ifying the netlist. A fanout — one gate output drives many other gates’ inputs. When we see a fanout we must split the tree into subtrees.

When we see a fanout we must split the tree into subtrees.

Some standard cells (gates) cannot be represented as a single tree because they have internal fanouts. So they cannot be matched with the logic network tree and must be excluded from consideration. Example: EXOR.

Some standard cells (gates) cannot be represented as a single tree because they have fanouts.

Matching: NOT matches NOT, NAND matches NAND, an input box matches any node; different patterns may overlap (at input boxes).

Recursive matching.
Matching the logic network tree with standard cells’ trees.
Try every library gate at each node of the subject tree.
Recursive algorithm: match the root of the subject with the root of the pattern and then recursively match children of the subject to children of the pattern.
(Asymmetric library patterns have two possible ways of matching!)
For each gate node of the subject tree we know which library pattern trees match that node. We annotate each node of the subject tree with this matching information — the list of library patterns that match that node. Problem: what is the best cover of the subject tree with patterns from the target library?

Minimum cost tree covering:

mincost(subtree)
{
    cost = Infinity;
    foreach (P in patterns that match the root node of the subtree)
    {
        newcost = cost(P);
        foreach (L in leaf nodes of P)
            newcost += mincost(L);

        if (newcost < cost)
            cost = newcost;
    }
    return cost;
}

The algorithm may revisit the same node many times, so we should keep a table with the mincost of each node (cache the mincost) as well as the pattern matching the node that leads to that mincost.

Tree Covering Problem

Routing

  • Internals of standard cells — metal layers #1-2.
  • Routing wires — layers #3-8.
  • Layers #9-10 — power and clock distribution.
  • On the very top — package interconnect (a large blob of metal, ball grid array).

Wires in different (adjacent) layers connect through «vias» (vertical joins).

FEOL — front end of line group of layers. Transistors.
BEOL — back end of line group of layers. Wiring.

Maze routing.
Grid. Connect two grid cells. There are obstacles where we aren’t allowed to route.
The grid is set up so that all geometry design rules are satisfied automatically.

References:

  • E. F. Moore «Path through a maze»
  • Chester Lee «Lee router»
  • Rubin
  • Hightower «Hightower router»

Expansion: source and target. Mark all the cells reachable from the source with path length = 1. Mark all the cells that are reachable from the cells previously marked with path length = 1. Do that until you reach the target. Diamond wavefront that goes around obstacles.
Backtrace: finding the shortest path: go from target to source in descending order.
Mark the cells that cannot be used again as obstacles (mark the newly placed wire as an obstacle).

Unit-Cost Maze Routing

Multi-point nets: one source and many targets.
Pick one point as source and label all other points as targets.
Use maze routing to find the nearest target and route a wire to it.
Relabel all the routed cells as source.
Source can be not just one cell but a bunch of cells.
Expansion works just fine for a multiple-cell source, though the wavefront is no longer a diamond.

Unit-Cost Maze Routing for Multipoint Net

Multi-point net routing is not optimal.
The optimal route is called a «Steiner tree». The complexity of finding it is exponential.

Multi-layer routing.
Now expansion is allowed into adjacent layers (3D grid).
Technology tells you at which grid cells you can place vias (a via must cost more than one grid cell).

Nonuniform costs. Problem: there can be different paths with different costs between the same two cells.

We don’t need to store costs of all cells, we just need to store the costs of the recently labeled cells (the wavefront).
Cheapest cells first search — Dijkstra’s algorithm (the shortest path between two nodes in a graph).

  1. Find the cheapest cell C from the wavefront.
  2. Find the neighbors of cell C that you haven’t reached yet.
  3. Compute the cost of expanding to these new cells.
  4. Add these new cells to the wavefront data structure indexed by their newly computed pathcosts (this is how you keep cells sorted).
  5. Add to each of these new cells a pointer to cell C as the predecessor.
  6. Mark the routing grid to remember that we have reached these new cells.
  7. Remove cell C from the wavefront.
  8. Repeat with the next cheapest cell on the wavefront.
Nonunit Cost Maze Routing. 2 Layers.

A cell can either be «expanded» or «on wavefront» or «not reached yet».
Pointer to the predecessor is stored in the form of the direction from which the cell was reached: North, South, East, West, Up, Down — because the predecessor is always a neighbor of this cell. 3 bits are enough to store this (Up and Down are for adjacent layers).

Data structures.

  1. Routing grid.
    Cell costs.
    Predecessors.
    Reached or not reached.
  2. Wavefront.
    Cells (their coordinates) sorted (indexed) by their pathcosts.
    Pathcosts: pathcost(N) = pathcost(C) + cellcost(N)

Heap (priority queue) is the data structure with the maximal (minimal is also supported) element always on top. O(log(N)) complexity for insert, remove operations.
Min heap in C++: priority_queue<int, vector<int>, greater<int>>

The Dijkstra’s algorithm relies on the «consistency assumption»: the cost of reaching a cell is independent of the path as a whole. It’s easy to define a cost scheme which breaks this assumption while still making perfect sense in a real router. Example: bend penalty — bending a wire costs more than having it straight.
You need to allow revisiting already reached cells, and keep in memory some limited number of alternative paths with different pathcosts, so that when the target is reached you can pick the cheapest one. This is kind of an advanced topic.

Bend Penalty Example

Depth-first search. Previously we expanded the wavefront in all directions. But why not expand preferentially in the direction from source to target (searching towards the target)? And can we still be sure that this will lead us to the minimum cost?
There’s a source-target box (a rectangle whose corners are the source and the target). Let’s first search in that source-target box.
We define a «predictor» function that roughly estimates the cost of a path between two arbitrary cells.
Now, while a plain maze router adds a new cell C to the wavefront storing pathcost(source to C), a more sophisticated router stores pathcost(source to C) + predict_pathcost(C to target). This is called a «depth-first search» or «A* search».
Typical predictor function evaluates [Half Perimeter Wirelength (HPWL)] * [smallest cell cost].
Theorem: if predict_pathcost(C to target) < real_pathcost(C to target) then you are guaranteed to find the minimum cost path.
Using predictor just alters the order in which we expand cells: we expand cells that are closer to the target first (that have smaller predicted path cost).

Predictor Function

From Detailed routing to Global routing.
Detailed routing gives the exact final path of a wire.
Global routing coarsely breaks the chip into a number of large cells (for example 200 x 200 wires each — called GBOXes) and maze-routes the wire through these large cells. After that we do detailed routing inside those large cells.
Otherwise a typical chip would require 100 billion grid cells!
Different metal layers usually have different preferred wire direction. For example, layers 3, 5, 7, 9 — prefer vertical and 4, 6, 8, 10 prefer horizontal. If you need to bend — use a via — this makes it easier to embed lots of wires.

In global routing cells (GBOXes) have dynamic cost that remembers how many wires passed through the cell (the more — the higher the cost) and how many more can fit later.

Timing analysis

I give you:

  • Net list.
  • Timing model of gates and (after placement and routing) timing model of wires.

You tell me:

  • when signals arrive at various points in the network.
  • longest delays through gates network.
  • does the netlist satisfy the timing requirement (at what maximum clock frequency can it function).

Model of synchronous logic network: flip-flops --> combinational logic --> flip-flops. Flip-flops have common clock signal.

Synchronous Logic Network

Logic gate timing model.
What affects gate delay:

  • gate type (not, and, or, etc.)
  • how many other gates are driven by the gate’s output (load)
  • the shape of input signal (sharp edge or slow edge)
  • transition direction (rising or falling) because of the difference in pMOS and nMOS transistors speed
  • which input pin of the gate is considered (which one experiences transition)
  • delay is not fully deterministic, it’s statistical (mean + 3*sigma is a conservative estimate of the delay)

Pin-to-pin delay: delay is a function of gate input and gate output.

Delay Depends On Input Pin and Output Pin

Topological timing analysis ignores what the logical function is and sees all gates as little circles each having some delay. This gives pessimistic results and may consider false paths but it’s practical.

Static timing analysis (STA).
Delay graph:

  • Wires are nodes
  • Gates are edges (cell arcs) that make the delay

There are also Source and Sink nodes that are abstract nodes added for convenience.
Source node is connected (with delay 0) to every primary input (input of the logic scheme that isn’t internal), Sink node is connected (with delay 0) to each primary output.

The question is: what is the longest path between the Source and the Sink?
Wire delays can also be represented as edges in the graph.

Node oriented timing analysis.
Arrival time at node n AT(n) — the worst (latest) time at which a signal arrives at a specific node. It’s the longest path from Source to node n.
Required arrival time at node n RAT(n) — the latest time at which the signal is allowed to become stable at node n. RAT(n) = clock period - (longest path from node n to Sink).
Slack = RAT - AT. Positive Slack is good, negative Slack is bad.

Node n has neighboring nodes before it (predecessors) and after it (successors).

AT(n) = MAX(AT(pred(n)) + delay(pred(n), n))
RAT(n) = MIN(RAT(suc(n)) - delay(n, suc(n)))

Theorem: sequence of nodes with minimal Slack comprise the longest path from Source to Sink.

Delay Tree

We want a list of paths in violation ordered from slowest to fastest.
We may want to exclude false paths from the list of paths that violate timing.

Calculating ATs.
Sort the nodes in topological order: the only rule is if p-->s then p goes before s.

AT(SRC) = 0
foreach(node n in topological order)
{
    AT(n) = -Infinity
    foreach(p in predecessors of n)
        AT(n) = max(AT(n), AT(p) + delay(p, n))
}

RAT(SNK) = ClockPeriod
foreach(node n in reverse topological order)
{
    RAT(n) = +Infinity
    foreach(s in successors of n)
        RAT(n) = min(RAT(n), RAT(s) - delay(n, s))
}

Theorem: for the longest path Slack is the same for all nodes.

After finding all the ATs and RATs and Slacks we want to list the worst paths in order from the longest one to the shortest one. The point is to find not all paths but just the worst ones! To do that we use kind of a maze routing algorithm but for now we apply it to the delay graph. Main features of this algorithm are expansion and priority queue (min heap where Slack is the key).

Enumerating Worst Paths

Interconnect timing. «Pi» model.
Wire: a resistor rL/W between two capacitors cWL/2, where L is the wire length and W is the wire width.
Via: resistor.

Interconnect Delay

The model of a net comprised of several wires is the «RC tree».
The gate that drives the net (the root of the RC tree) is modeled as a voltage source with internal resistance, and the gates that are driven (the leaves of the RC tree) are modeled as capacitors.

RC-Tree

Elmore delay.
Set tau = 0.
Delay from the root to a leaf: at each resistor R do: tau = tau + R*sum(capacitors downstream).
A «downstream capacitor» is any capacitor reachable in the tree below this resistor.

Elmore Delay
