Very Large Scale Integration CAD — Part I — Logic

This is my summary of another course I've completed on Coursera. Its name is VLSI CAD Part I: Logic. The author of the course is Rob A. Rutenbar from the University of Illinois at Urbana-Champaign. The course covers combinational logic synthesis: minimization of logic functions, optimization of logic networks, and so on. There's also a second course that covers placement, routing and timing analysis.

Shannon Cofactors Fx Fx’

Suppose there’s a function with many inputs F(x, y, z, ...).

Shannon Expansion Theorem F = x * Fx + x' * Fx' where Fx = F(x=1) is called «positive cofactor» and Fx' = F(x=0) is called «negative cofactor».

Boolean difference dF/dx = Fx XOR Fx' answers the question «Does F change when x changes?».

Universal quantification (forany x F)[y z w ...] = Fx * Fx'. If the universal quantification of F is true for some combination of inputs y, z, ..., then for that combination F is true for any value of x.

Existential quantification (exists x F)[y z w ...] = Fx + Fx'. If the existential quantification of F is true for some combination of inputs y, z, ..., then for that combination there is at least one value of x (either 1 or 0) for which F is true.
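
A small worked example (mine, not from the course): let F = x*y + z. Then Fx = y + z and Fx' = z, so dF/dx = (y + z) XOR z = y z' (F is sensitive to x exactly when y = 1 and z = 0), (forany x F) = (y + z) * z = z, and (exists x F) = (y + z) + z = y + z.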

Network F(a, b) repair. Suppose we have a logic network for a function F(a, b) which has a gate («a suspect gate») that appears to malfunction. Replace the suspect gate with a MUX having 2 select inputs (because F has 2 inputs a and b) and 4 data inputs d0…d3, and compare the result with a known good version of the logic Fgood(a, b) through an EXNOR gate. Now any assignment of d0…d3 that satisfies (forany ab F exnor Fgood)[d0 d1 d2 d3] is a data input combination for the MUX that repairs the network.
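
A tiny illustration (my own, not from the course): let Fgood(a, b) = ab and let the suspect gate be the whole network, so F is just the MUX: F = a'b' d0 + a'b d1 + ab' d2 + ab d3. Evaluating F exnor Fgood at the four (a, b) combinations gives d0', d1', d2' and d3, so (forany ab F exnor Fgood) = d0' d1' d2' d3. Its only satisfying assignment is d0 = d1 = d2 = 0, d3 = 1, which is exactly the truth table of AND programmed into the MUX.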

Cofactor of EXNOR = EXNOR of cofactors: (F exnor G)x = Fx exnor Gx and (F exnor G)x' = Fx' exnor Gx'.

Positional Cube Notation (PCN)

PCN is a representation of boolean functions that can be manipulated by software.

x  = 01
x' = 10
don't care = 11
                                    a  b  c     a  b  c
Example: F(a, b, c) = ab' + bc' = [01 10 11], [11 01 10]
Fa = [11 10 11], [11 01 10]
Fa' = [11 01 10]

Terminology: «cube» = «product term». For example ab' is a product term and [01 10 11] is a cube.
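
A toy C++ sketch of this representation (my own illustration, not the course's software; the names Cube, Cover and positiveCofactor are made up): a cube is an array of 2-bit codes, and the positive cofactor with respect to x drops the cubes containing x' and sets x's field to don't care in the remaining ones.

#include <cstdio>
#include <vector>

using Cube  = std::vector<int>;         // 2-bit codes per variable: 0b01 = x, 0b10 = x', 0b11 = don't care
using Cover = std::vector<Cube>;        // a list of cubes = an SOP form

// Positive cofactor with respect to variable `var`.
Cover positiveCofactor(const Cover& f, int var) {
    Cover result;
    for (Cube c : f) {
        if (c[var] == 0b10) continue;   // the cube contains var' -> it vanishes when var = 1
        c[var] = 0b11;                  // var's field becomes don't care
        result.push_back(c);
    }
    return result;
}

int main() {
    // F(a, b, c) = ab' + bc' = [01 10 11], [11 01 10]  (the example above)
    Cover F  = {{0b01, 0b10, 0b11}, {0b11, 0b01, 0b10}};
    Cover Fa = positiveCofactor(F, 0);  // expected: [11 10 11], [11 01 10]
    for (const Cube& c : Fa) {
        for (int bits : c) std::printf("%d%d ", (bits >> 1) & 1, bits & 1);
        std::printf("  ");
    }
    std::printf("\n");
}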

Tautology

A function F(x, y, z) is a tautology if it gives true for every combination of inputs. The problem is how to find out whether a given function is a tautology or not.

Function F(x, y, z) is a tautology if both its cofactors are tautologies: Fx = 1 and Fx' = 1.

Unate Recursive Paradigm (URP)

Unate function: a function is unate if each variable appears in its SOP (sum of products) form in only one polarity. Example of a unate function: F(a, b, c, d, e) = ab + ac'd + c'de'.
Otherwise the function is called binate. Example of a binate function: F(a, b, c, d, e) = ab + a'b + abc' + c' (the variable a appears both as a and as a').

A unate function is either positive unate or negative unate in each variable x.
A function is positive unate in variable x if changing x from 0 to 1 either keeps F constant or makes it change from 0 to 1, in other words if x appears in the SOP only in positive polarity.
A function is negative unate in variable x if changing x from 0 to 1 either keeps F constant or makes it change from 1 to 0, in other words if x appears in the SOP only in negative polarity.

A function is a tautology if its SOP contains an all-don’t-cares cube (like this: [11 11 11 11]).
A function isn’t a tautology if it is unate and if its SOP doesn’t contain an all-don’t-cares cube.

Unate Recursive Paradigm is a recursive algorithm for tautology checking of a function. Decompose the function into unate cofactors and check each cofactor for tautology, which is easy for a unate cover: just look for the all-don't-cares cube (a rough code sketch follows the steps below).

  1. Recursive tautology checking: pick «the most binate variable» to split on. The most binate variable is a binate variable on which the largest number of product terms depend (just count, for each variable, the cubes whose field for it is not the don't-care code 11).
  2. For each cofactor: if the cofactor is unate, check it for tautology directly (for example, a cover containing [11 11 11] is a tautology). If the cofactor is binate, go to step 1 with this cofactor.
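
A rough C++ sketch of the recursive check in the same PCN encoding (my own illustration; function names are made up and a production implementation has more terminal cases):

#include <cstdio>
#include <vector>

using Cube  = std::vector<int>;    // 2-bit codes: 0b01 = x, 0b10 = x', 0b11 = don't care
using Cover = std::vector<Cube>;

// Positive (positive = true) or negative cofactor with respect to `var`.
Cover cofactor(const Cover& f, int var, bool positive) {
    const int killer = positive ? 0b10 : 0b01;   // the literal that zeroes the cube
    Cover result;
    for (Cube c : f) {
        if (c[var] == killer) continue;
        c[var] = 0b11;
        result.push_back(c);
    }
    return result;
}

bool isTautology(const Cover& f) {
    if (f.empty()) return false;                 // empty cover = constant 0
    // Terminal case: an all-don't-cares cube makes the cover a tautology.
    for (const Cube& c : f) {
        bool allDC = true;
        for (int bits : c) allDC = allDC && (bits == 0b11);
        if (allDC) return true;
    }
    // Most binate variable: appears in both polarities, in the most cubes.
    int vars = (int)f[0].size(), best = -1, bestCount = -1;
    for (int v = 0; v < vars; ++v) {
        int pos = 0, neg = 0;
        for (const Cube& c : f) {
            if (c[v] == 0b01) ++pos;
            if (c[v] == 0b10) ++neg;
        }
        if (pos > 0 && neg > 0 && pos + neg > bestCount) { best = v; bestCount = pos + neg; }
    }
    // A unate cover without an all-don't-cares cube is never a tautology.
    if (best == -1) return false;
    // F is a tautology iff both cofactors are tautologies.
    return isTautology(cofactor(f, best, true)) && isTautology(cofactor(f, best, false));
}

int main() {
    Cover taut    = {{0b01, 0b11}, {0b10, 0b01}, {0b10, 0b10}};  // a + a'b + a'b' = 1
    Cover notTaut = {{0b01, 0b11}, {0b10, 0b01}};                // a + a'b = a + b
    std::printf("%d %d\n", (int)isTautology(taut), (int)isTautology(notTaut));  // prints: 1 0
}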

Unate Recursive Complement Algorithm

The goal is to invert (complement) a boolean function, i.e. to find a cube list for ~F.
The algorithm exploits the equality: ~F = x ~Fx + x' ~Fx'

cubeList Complement(cubeList F)
{
    if (F is simple and we can complement it directly)
        // e.g. F is empty (complement = the all-don't-cares cube),
        // F contains an all-don't-cares cube (complement = empty list),
        // or F is a single cube (complement it by De Morgan's law)
        return (directly computed complement of F)
    else
    {
        let x = most binate variable for splitting
        cubeList P = Complement( positiveCofactor(F, x) )
        cubeList N = Complement( negativeCofactor(F, x) )
        P = AND(x, P)    // re-attach the literal x to every cube of P
        N = AND(x', N)   // re-attach the literal x' to every cube of N
        return OR(P, N)  // concatenate the two cube lists
    }
}
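
A small worked example (mine): complement F = ab, i.e. the cube list [01 01]. Splitting on a, the positive cofactor is b, whose complement is b'; ANDed with a it gives ab'. The negative cofactor is the empty list (constant 0), whose complement is the all-don't-cares cube (constant 1); ANDed with a' it gives a'. So ~F = ab' + a' = a' + b', which is indeed NOT(ab).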

Binary Decision Diagrams (BDD)

            x1            F = x1' x2' x3 + x1 x2' x3' + x1 x2 x3
          /    \
       0 /      \ 1
        /        \
      x2          x2
   0 /  \ 1    0 /  \ 1
    /    \      /    \
   x3     x3   x3    x3
  / \     /|   |\    | \
 /   \   / |   | \   |  \
0     1 0  0   1  0  0   1
One function can generally generate many equivalent BDDs

Reduced Ordered BDD (ROBDD) is the minimal possible BDD for a particular function under a given variable ordering. It's called a canonical BDD because, for a fixed variable ordering, it's unique for a particular function.

An ROBDD is obtained by repeatedly applying three reduction rules:
Merging equivalent leaves
Merging isomorphic nodes (i. e. nodes with the same variable and identical children)
Eliminating redundant tests (nodes whose both children are the same node)

BDD sharing: a boolean function is just a pointer to its root node in a canonical graph data structure (ROBDD). There may be many functions pointing to different nodes of one and the same BDD structure, called a shared BDD. Sharing the same BDD structure saves memory.

There are logic operators dealing with BDDs. For example, to get a BDD for the function F = G & H given the BDDs for G and H, we may call something like BDD operator&(const BDD& a, const BDD& b), and so on.
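
For illustration, here is a tiny hand-rolled ROBDD sketch (hypothetical code, not KBDD or any real package): a unique table applies the reduction rules on the fly, and AND of two BDDs is computed by recursive Shannon expansion on the top variable.

#include <algorithm>
#include <climits>
#include <cstdio>
#include <map>
#include <tuple>
#include <utility>
#include <vector>

struct Node { int var, lo, hi; };   // lo = child for var = 0, hi = child for var = 1
std::vector<Node> nodes = {{INT_MAX, 0, 0}, {INT_MAX, 1, 1}};  // index 0 = leaf "0", index 1 = leaf "1"
std::map<std::tuple<int,int,int>, int> uniqueTable;

int mk(int var, int lo, int hi) {
    if (lo == hi) return lo;                         // eliminate a redundant test
    auto key = std::make_tuple(var, lo, hi);
    auto it  = uniqueTable.find(key);
    if (it != uniqueTable.end()) return it->second;  // merge isomorphic nodes
    nodes.push_back({var, lo, hi});
    return uniqueTable[key] = (int)nodes.size() - 1;
}

int bddVar(int v) { return mk(v, 0, 1); }            // the BDD of a single input variable

std::map<std::pair<int,int>, int> memo;              // computed table for AND

int bddAnd(int f, int g) {                           // F & G = x * (Fx & Gx) + x' * (Fx' & Gx')
    if (f == 0 || g == 0) return 0;
    if (f == 1) return g;
    if (g == 1) return f;
    auto key = std::make_pair(std::min(f, g), std::max(f, g));
    auto it  = memo.find(key);
    if (it != memo.end()) return it->second;
    int v   = std::min(nodes[f].var, nodes[g].var);  // the top (earliest) variable of the two
    int flo = nodes[f].var == v ? nodes[f].lo : f, fhi = nodes[f].var == v ? nodes[f].hi : f;
    int glo = nodes[g].var == v ? nodes[g].lo : g, ghi = nodes[g].var == v ? nodes[g].hi : g;
    return memo[key] = mk(v, bddAnd(flo, glo), bddAnd(fhi, ghi));
}

int main() {
    int a = bddVar(0), b = bddVar(1);
    int f = bddAnd(a, b);                            // ROBDD of F = a & b
    std::printf("root: var x%d, lo -> node %d, hi -> node %d\n", nodes[f].var, nodes[f].lo, nodes[f].hi);
}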

Different variable orderings make different BDDs. Variable ordering selection heuristic: related inputs should be near each other in the order; groups of inputs that can determine the function by themselves should be (i) close together, and (ii) near the top of the BDD. Example:

F = a1 * b1 + a2 * b2 + a3 * b3
Good ordering: a1, b1, a2, b2, a3, b3
Bad ordering: a1, a2, a3, b1, b2, b3

Software utility: KBDD (eval, verify, satisfy, etc.)
The satisfiability problem is easily solved with BDDs: F is satisfiable if there is a path from the root of the BDD to the 1 leaf.

SAT — Satisfiability

Find a combination of inputs that makes F(x, y, z, w, …) = 1 or prove that there’s no such combination.
Usage: network repair. Universally quantifying a and b away from (F exnor Fgood) gives us a function Z(d0, d1, d2, d3); any assignment of the MUX data inputs d0, d1, d2, d3 that satisfies Z repairs the network.

Conjunctive Normal Form (CNF) — Product of Sums (POS) — allows us to solve SAT problems efficiently.
Example of CNF: F = (a + c)(b + c)(a' + b' + c'). With respect to SAT, each sum (clause) can be either «satisfied», «conflicting» or «unresolved».
How to find CNF if you have a truth table. Example:

a  b  c   F  F'
0  0  0   0  1
0  0  1   1  0
0  1  0   0  1
0  1  1   1  0
1  0  0   0  1
1  0  1   0  1
1  1  0   1  0
1  1  1   0  1

1) find NOT(F) truth table (see F' column above)
2) write F' in the SOP (sum of products) form: F' = a'b'c' + a'b c' + a b'c' + a b' c + a b c = a'c' + a b' + a c
3) invert F': NOT(F') = NOT(a'c' + a b' + a c) = NOT(a'c') NOT(a b') NOT(a c) = (a + c)(a' + b)(a' + c')

Boolean Constraint Propagation (BCP) is at the heart of the DPLL algorithm for solving the SAT problem: recursively assign variables, building a decision tree, until you find a branch that satisfies every clause. The main thing is how one chooses the variable ordering. The DPLL algorithm exploits several things to choose it: 1) the CNF form 2) the unit clause rule 3) the pure literal rule (see the Wikipedia article on DPLL).
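
A minimal DPLL sketch with unit-clause propagation (my own C++ illustration; the pure literal rule and all the heuristics of real solvers like MiniSAT or CHAFF are omitted):

#include <cstdio>
#include <cstdlib>
#include <map>
#include <vector>

using Clause = std::vector<int>;   // +v means variable v, -v means its complement
using CNF    = std::vector<Clause>;

// Substitute literal `lit` = true: drop satisfied clauses, remove the complement
// literal from the rest. Returns false if some clause becomes empty (a conflict).
bool assign(const CNF& in, int lit, CNF& out) {
    out.clear();
    for (const Clause& c : in) {
        bool satisfied = false;
        Clause reduced;
        for (int l : c) {
            if (l == lit) { satisfied = true; break; }
            if (l != -lit) reduced.push_back(l);
        }
        if (satisfied) continue;
        if (reduced.empty()) return false;
        out.push_back(reduced);
    }
    return true;
}

bool dpll(CNF f, std::map<int, bool>& model) {
    // BCP: keep applying the unit-clause rule while it produces implications.
    bool changed = true;
    while (changed) {
        changed = false;
        for (const Clause& c : f)
            if (c.size() == 1) {
                int lit = c[0];
                model[std::abs(lit)] = lit > 0;
                CNF g;
                if (!assign(f, lit, g)) return false;
                f = g;
                changed = true;
                break;
            }
    }
    if (f.empty()) return true;            // all clauses satisfied
    int v = std::abs(f[0][0]);             // branch on some remaining variable
    for (int lit : {v, -v}) {
        CNF g;
        std::map<int, bool> m = model;
        if (assign(f, lit, g)) {
            m[v] = lit > 0;
            if (dpll(g, m)) { model = m; return true; }
        }
    }
    return false;
}

int main() {
    // F = (a + c)(b + c)(a' + b' + c') with a = 1, b = 2, c = 3.
    CNF f = {{1, 3}, {2, 3}, {-1, -2, -3}};
    std::map<int, bool> model;
    if (dpll(f, model))
        for (auto& [var, val] : model) std::printf("x%d = %d\n", var, (int)val);
    else
        std::printf("UNSAT\n");
}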

Software utilities: MiniSAT, CHAFF, GRASP

Gate consistency function

Gate consistency function concept helps us get CNF from a gate-level logic scheme. This CNF can then be used for SAT solving.
Example: suppose we have a NOT gate z = NOT(x) where x is the input and z is the output of the gate. Gate consistency function Ф(x, z) = z EXNOR x' = (x + z)(x' + z') — it gives true if and only if z = NOT(x). Then CNF for SAT solving will be z Ф(x, z) = z(x + z)(x' + z').

Make CNF out of gate logic scheme using gate consistency functions

There are simple formulas for getting CNF for all common logic gates: NOR, OR, NAND, AND, NOT.
Consistency function for a boolean logic network: give names to all nodes and build a product of all consistency functions for all nodes.
SAT function for a boolean logic network equals its consistency function times its output.
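
A small worked example (mine, using the standard consistency clauses for AND and OR gates): take the network d = a*b, z = d + c. The consistency function of the AND gate is (a + d')(b + d')(a' + b' + d) and that of the OR gate is (d' + z)(c' + z)(d + c + z'). The SAT function of the whole network is therefore z (a + d')(b + d')(a' + b' + d)(d' + z)(c' + z)(d + c + z'); any satisfying assignment gives input values a, b, c for which the network output z is 1.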

Minimization of 2-level logic network. Reduce-Expand-Irredundant algorithm.

Kmap. Prime implicants (primes) — cubes that are as big as possible

«Prime implicant» or just «a prime» — a cube (product term) that cannot be made any bigger without overlapping the OFF-set. A cover is «irredundant» if we cannot remove a single cube from it without uncovering part of the function.

Reduce-Expand-Irredundant optimization loop: we iterate through these steps, looking for covers that are prime, irredundant and as small as possible.

Reduce-Expand-Irredundant algorithm:

  1. Start with some arbitrary cover of the Karnaugh map.
  2. Expand each cube to be a prime (by removing literals from it while the cube is still contained in the function).
  3. Irredundant: remove redundant cubes (cubes that are fully covered by the other cubes).
  4. Reduce: shrink each cube as much as possible (some cubes may become non-prime).
  5. Go to step 2.

Software utility that implements Reduce-Expand-Irredundant algorithm: ESPRESSO.

Expansion algorithm
Expand a cube = remove literal(s) from it. Example:

ab'cd --> remove "c" --> ab'd --> remove "a" --> b'd

The same in the PCN notation:
 a  b' c  d        a  b'    d           b'    d
[01 10 01 01] --> [01 10 11 01] --> [11 10 11 01]

Assume we have a function F.

  1. Build a cube cover for F' (use the URP complement algorithm or a Karnaugh map) — this is called the «OFF-set». The expanded cube must not overlap the OFF-set.
  2. Pick a cube to be expanded.
  3. Build a «blocking matrix» against the cover of the zeroes in the Karnaugh map (the OFF-set):
    row headers — literals of the cube that we want to expand
    column headers — cubes of the OFF-set F'
    put 1 in a cell if the row's literal appears in the column's OFF-set cube with the opposite polarity
    Covering problem: find a set of rows that together touch all columns.
  4. Now you may remove from the cube being expanded all literals whose rows are not in the cover (see the worked example below).
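
A small worked example (mine, not from the course). Let F = ab'c + abc + abc' (so F = ab + ac and the OFF-set is F' = a' + b'c'), and let's expand the cube ab'c. The blocking matrix is:

      | a'  b'c'
   a  |  1   0
   b' |  0   0
   c  |  0   1

Rows {a, c} already touch both columns, so the literal b' may be removed: ab'c expands to ac, which is a prime.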

Multilevel logic

2-level logic is just the SOP form of a function. It has the smallest delay, but it is usually too expensive because it uses too many logic gates.
A multilevel logic network consists of many nodes, each of which is a small 2-level (SOP) logic function.
The measure of network complexity is the total number of literals over all nodes of the logic network. A 2-level network may have, for example, 20 literals while an equivalent multilevel network may have just 10.

(Figure: converting 2-level logic to multilevel logic saves 6 literals.)

It's all about extracting common divisors from several nodes. To extract common divisors from boolean functions we can treat a boolean function as a polynomial. To do that we must apply to our boolean functions only those rules that can also be applied to polynomials. It turns out that we need to treat different polarities of the same variable (x and x') as totally unrelated, i.e. we don't make use of the facts that x * x' = 0 or x + x' = 1. This is why we get rid of the complements (x') and replace them with new variables: F = ab + a'x + b'y. Let R = a' and S = b'. Then F = ab + Rx + Sy.

Algebraic division algorithm F = Q*D + R where D is the divisor, Q is the quotient, R is the remainder.

Example of an algebraic division:

F = axc + axd + axe + bc + bd + de
D = ax + b
Q = ? R = ?

------------
    | ax   b
------------
axc |  c   -
axd |  d   -
axe |  e   -
bc  |  -   c
bd  |  -   d
de  |  -   -
------------

{c d e} intersect with {c d} = {c d}
Q = c + d
R = F - Q*D = axe + de
F = (c + d)(ax + b) + axe + de
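
A C++ sketch of this weak (algebraic) division (my own illustration, not code from the course): cubes are sets of literal names, and the quotient is the intersection of the per-divisor-cube candidate sets, exactly as in the table above.

#include <algorithm>
#include <cstdio>
#include <iterator>
#include <set>
#include <string>
#include <vector>

using Cube = std::set<std::string>;   // a product term as a set of literals
using Expr = std::vector<Cube>;       // a sum of products

bool contains(const Cube& big, const Cube& small) {
    return std::includes(big.begin(), big.end(), small.begin(), small.end());
}

Cube removeCube(const Cube& a, const Cube& b) {     // the literals of a that are not in b
    Cube r;
    std::set_difference(a.begin(), a.end(), b.begin(), b.end(), std::inserter(r, r.end()));
    return r;
}

// Quotient Q of F / D; the remainder is then R = F - Q*D.
Expr divide(const Expr& F, const Expr& D) {
    std::vector<std::set<Cube>> candidates;
    for (const Cube& d : D) {                       // one column of the table per divisor cube
        std::set<Cube> col;
        for (const Cube& f : F)
            if (contains(f, d)) col.insert(removeCube(f, d));
        candidates.push_back(col);
    }
    std::set<Cube> q = candidates.empty() ? std::set<Cube>{} : candidates[0];
    for (size_t i = 1; i < candidates.size(); ++i) {
        std::set<Cube> t;                           // Q = intersection of all the columns
        std::set_intersection(q.begin(), q.end(), candidates[i].begin(), candidates[i].end(),
                              std::inserter(t, t.end()));
        q = t;
    }
    return Expr(q.begin(), q.end());
}

int main() {
    // F = axc + axd + axe + bc + bd + de,  D = ax + b  (the example above)
    Expr F = {{"a","x","c"}, {"a","x","d"}, {"a","x","e"}, {"b","c"}, {"b","d"}, {"d","e"}};
    Expr D = {{"a","x"}, {"b"}};
    for (const Cube& c : divide(F, D)) {            // prints the cubes of Q: c d
        for (const std::string& lit : c) std::printf("%s", lit.c_str());
        std::printf(" ");
    }
    std::printf("\n");
}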

Good divisors of function F must be looked for among the kernels of F

A quotient obtained by dividing F by a single cube is called a kernel if this quotient is cube-free. In this case the divisor is called a co-kernel. By definition a co-kernel is an expression which has only a single cube in it (for example: a, ab, abcd, xy, etc.) and division by which gives a kernel as a quotient. By definition a cube-free expression is one that no single cube divides evenly, i.e. no literal appears in every one of its cubes (for example, ab + c is cube-free and ab + ac = a(b + c) isn't).

Example of a function and its kernels and co-kernels:

F = abc + abd + bcd
Co-kernels: ab, b
Kernels:    c+d, ac + ad + cd

How to pick co-kernels: a potential co-kernel of F is a cube contained in at least two cubes of F.

If F has a kernel K2 that itself has a kernel K1 which has a kernel K0
then K1 and K0 are also kernels of F. So look for kernels recursively.

Finding Kernels

Brayton & McMullen theorem

Expressions F, G have a common multiple-cube divisor d
if and only if
there are kernels k1 of F and k2 of G
such that d = k1 intersect k2 (i. e. the SOP made of the cubes that k1 and k2 have in common)
and d is an expression with at least 2 cubes in it.

Extracting single-cube common divisors

Suppose we have two functions P and Q.

P = af + bf + ag + cg + ade + bde + cde
Q = af + bf + ace + bce

Make a cube-literal matrix:

---------------+-------------------------
               | Literals
Product terms  | a  b  c  d  e  f  g
---------------+-------------------------        
af             | 1  .  .  .  .  1  .
bf             | .  1  .  .  .  1  .
ag             | 1  .  .  .  .  .  1
cg             | .  .  1  .  .  .  1
ade            | 1  .  .  1  1  .  .
bde            | .  1  .  1  1  .  .
cde            | .  .  1  1  1  .  .     (*)
ace            | 1  .  1  .  1  .  .     (*)
bce            | .  1  1  .  1  .  .     (*)
                      (*)   (*)

Find prime rectangle cover (*) — a rectangle that cannot be made any bigger.
The common divisor is ce.
Count the number of literals saved.

X = ce
P = af + bf + ag + cg + ade + bde + dX
Q = af + bf + aX + bX

Extracting multiple-cube common divisors

Suppose we have two functions P and Q.

P = af + bf + ag + cg + ade + bde + cde
Q = af + bf + ace + bce

Find all kernels for P and Q.
Make a Co-Kernel—Cube Matrix:

---------------+-------------------------
               |  Cubes from all kernels
Kernels        | a  b  c  ce  de  f  g
---------------+-------------------------        
(P) de + f + g | .  .  .  .   1   1  1
(P) de + f     | .  .  .  .   1   1  .
(P) a + b + c  | 1  1  1  .   .   .  .     (*)
(P) a + b      | 1  1  .  .   .   .  .     (*)
(P) de + g     | .  .  .  .   1   .  1
(P) a + c      | 1  .  1  .   .   .  .
(Q) ce + f     | .  .  .  1   .   1  .
(Q) a + b      | 1  1  .  .   .   .  .     (*)
                (*)(*)

Find prime rectangle cover (*).
The common divisor is (a + b).
Count the number of literals saved.

X = a + b
P = X(f + de) + ag + cg + cde
Q = X(f + ce)

Looking for a prime rectangle cover

Rudell’s Ping Pong heuristic:

  1. Pick the best single row (the row that saves the greatest number of literals).
  2. Find other rows that have 1s in the same columns and add them while this keeps increasing the number of literals saved.
  3. Find other columns that have 1s in the same rows and add them while this keeps increasing the number of literals saved.
  4. Go to step 2 until you cannot grow the rectangle any more.

Implicit Don’t Cares (DCs)

There may be combinations of input variables that can never happen for some reason. For these inputs we are free to choose any value (1 or 0) of our logic function, whichever gives us the simplest boolean expression.


Simplify the logic expressions in your boolean logic network based on the so-called «implicit don't cares» that exist in every multilevel logic network. You can set the Karnaugh map cell that contains a don't care to 0 or 1, whichever helps you minimize the function.

Using Don’t Cares to simplify logic expression

Satisfiability Don’t Cares (SDC) — impossible combinations of inputs and outputs of a specific network node.
Example: node X

X = a + b
SDC(X) = X XOR (a + b) = X'a + X'b + Xa'b'

Controllability Don’t Cares (CDC) — patterns that cannot happen at inputs to a network node.
Suppose node F = F(x1, x2, ..., xn)
CDC(F) = (for any var not used in F)[SDC(x1) + SDC(x2) + ... + SDC(xn)]
(Universally quantify away all variables that are not inputs of the node F)
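
A worked example (mine): suppose node F has inputs x1 = ab and x2 = a + b, so SDC(x1) = x1 XOR ab, SDC(x2) = x2 XOR (a + b), and the variables not used in F are a and b. Universally quantifying a and b away from SDC(x1) + SDC(x2) means multiplying its four cofactors: (x1 + x2)(x1 + x2')(x1 + x2')(x1' + x2') = x1 x2'. So CDC(F) = x1 x2': the pattern x1 = 1, x2 = 0 can never appear at the inputs of F (if ab = 1 then a + b = 1).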

Observability Don’t Cares (ODC) — input patterns to a node that make the network outputs insensitive to the output of that node.
We have node F(a, b) and node Z(a, b, c, F).
ODC(F) = patterns of (a, b) that make Z insensitive to F's value, i.e. where dZ/dF = 0.
ODC(F) = (for any var not used in F)[(dZ/dF)']

If there are many nodes Z1, Z2, …, Zn dependent on F:
ODC(F) = patterns that are unobservable at all outputs Z1, Z2, ..., Zn.
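
A tiny example (mine): if the only output is Z = F + c, then dZ/dF = Z(F=1) XOR Z(F=0) = 1 XOR c = c', so dZ/dF = 0 exactly when c = 1, and ODC(F) = c: whenever c = 1, the value of F cannot be observed at Z.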

