\subsection{Multiple-Valued Decision Diagrams}
In this subsection we present multiple-valued decision
diagrams or MDDs \cite{srinivasan90}, a
graph-based representation for functions
on a finite domain.
The most common special
case of such diagrams is \emph{binary decision diagrams} (BDDs)
\cite{somenzi99}.
We consider a variable domain $D=\{d_1, \ldots, d_{m}\}$
of finite size $m$, and a range $R=\{r_1, \ldots, r_{n}\}$ of size
$n$. Functions of type $D^{k}\rightarrow R$ (for any $k$) can be represented
by \emph{reduced and ordered decision diagrams}. The principal benefits
of this representation are \emph{compactness}, since redundancy in the
function is leveraged to reduce the size of the data structure, and
\emph{canonicity}: each function is represented by a unique decision
diagram, and thus checking for equality can be done in constant time.
Decision diagrams are best explained via decision-tree representations
of functions.
The finiteness of $D$ allows the hierarchical decomposition of
a function in $k$ variables
into $m$ functions of $(k-1)$ variables; these can, in turn, themselves
be recursively decomposed, until 0-variable functions -- constants --
are reached. This process yields a \emph{decision
tree} representation of the function, and by the removal of constant nodes
and identification of identical nodes, the decision tree can be
transformed into a reduced decision diagram containing fewer nodes than
the full decision tree.
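The recursive decomposition above can be sketched in code. The following is a minimal illustration (all names are ours, not from the text): a $k$-variable function over a finite domain is expanded into a full decision tree by fixing the first free variable to each domain value in turn, bottoming out at constants.

```python
# Hypothetical sketch: expanding a k-variable function over a finite
# domain into a full decision tree by recursive cofactoring; the
# leaves are the 0-variable constants.
D = [0, 1]  # example domain

def decision_tree(f, k):
    if k == 0:
        return f()  # a 0-variable function is a constant
    # one subtree per domain value assigned to the first free variable
    return [decision_tree(lambda *xs, d=d: f(d, *xs), k - 1) for d in D]

# f(x1, x2) = x1 OR x2 over D = {0, 1}:
tree = decision_tree(lambda a, b: a | b, 2)
```

The resulting nested list is the full (unreduced) decision tree; the reduction described next would merge its repeated subtrees.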
\begin{definition}
Let $f: D^{k} \rightarrow R$ be a function of the variables $x_1, \ldots, x_k$,
where $D=\{d_1, \ldots, d_m\}$.
For any $i, j$ with $1 \leq i \leq k$ and $1 \leq j \leq m$, the
\emph{$j$-th cofactor of $f$ with
respect to $x_i$} is:
$$f[x_i/d_j]=f(x_1, \ldots, x_{i-1}, d_j, x_{i+1}, \ldots, x_k)$$
Intuitively, this is the function in $k-1$ variables obtained
by assigning the value $d_j$ to the variable $x_i$, and allowing
all the other variables to remain free.
The vector of cofactors with respect to $x_i$ is
denoted by $\vec{f_{x_i}}$.
\end{definition}
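A cofactor is straightforward to express as a higher-order function. The sketch below (our own illustration, with 0-based variable indices rather than the 1-based indices of the definition) fixes variable $x_i$ to a domain value $d$ and leaves the remaining variables free.

```python
# Hypothetical sketch: the j-th cofactor of a finite-domain function,
# obtained by fixing variable i (0-based here) to the domain value d.
def cofactor(f, i, d):
    """Return the (k-1)-variable function with variable i fixed to d."""
    def g(*xs):
        return f(*xs[:i], d, *xs[i:])
    return g

D = [0, 1, 2]                       # example domain
f = lambda x1, x2: (x1 + x2) % 3    # example function

# the vector of cofactors of f with respect to x1:
f_x1 = [cofactor(f, 0, d) for d in D]
```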
So $f$ can be represented by a {\bf case}-statement-like construct:
$$f=\mathit{case}(x_i, f[x_i/d_1], \ldots, f[x_i/d_m])$$
If all of the cofactors of a function with respect to a variable
$x_i$ are identical, the function is said to be
\emph{constant in $x_i$}. The set of all variables for
which a function is non-constant is known as its \emph{support}.
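Over a small finite domain, the support can be computed by brute force; the following sketch (names are illustrative, not from the text) declares a variable to be in the support exactly when some assignment to the remaining variables distinguishes its cofactors.

```python
# Hypothetical sketch: computing the support of a function over a
# small finite domain by exhaustively comparing cofactors; a variable
# is in the support iff the function is not constant in it.
from itertools import product

D = [0, 1, 2]  # example domain

def support(f, k):
    """Set of variable indices (0-based) in which f is non-constant."""
    sup = set()
    for i in range(k):
        for xs in product(D, repeat=k - 1):
            # values of the cofactors with respect to x_i, at this
            # assignment of the other variables
            vals = {f(*xs[:i], d, *xs[i:]) for d in D}
            if len(vals) > 1:
                sup.add(i)
                break
    return sup

g = lambda x1, x2: x2 % 2  # constant in x1
```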
The decision tree produced by this recursive expansion has two
sources of redundancy: the first is
\emph{common cofactors}, where the same subtree
appears in two or more different places. Once these are identified
and merged,
it is possible to discover and remove
redundant comparisons -- these are \emph{constant nodes},
all of whose children
are identical. By merging common cofactors and removing constant nodes
until no further changes are possible, an ordered decision tree is
transformed into a reduced and ordered decision diagram.
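The two reduction rules are commonly implemented with a "unique table", as in the hash-consing sketch below (our own illustration): structurally identical nodes are merged on construction, and a node all of whose children coincide is never built, since the test it would perform is redundant.

```python
# Hypothetical sketch of the two reduction rules.
unique = {}  # (variable, children) -> the one shared node

def make_node(var, children):
    children = tuple(children)
    # constant node: all children identical, so the test is redundant
    # and the shared child is returned instead of a new node
    if all(c == children[0] for c in children):
        return children[0]
    # common cofactors: an identical node is reused, never duplicated
    key = (var, children)
    if key not in unique:
        unique[key] = ('node', var, children)
    return unique[key]
```

Because every node is created through `make_node`, two subdiagrams representing the same function end up as the very same object, which is what makes equality checking a constant-time pointer comparison.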
\subsection{Syntax}
Syntactically a reduced and ordered decision diagram is a
graph with several constraints on its edges.
\begin{definition}
A $D$-input $R$-output decision diagram over
the totally ordered set of variables $A=\{a_1, \ldots, a_k\}$ is a rooted
directed acyclic graph $(V \cup T, E)$ where:
\begin{itemize}
\item Each $v \in V$ is labelled with a variable from $A$; we denote this
variable by $\labl(v)$.
\item Each $t \in T$ is labelled with a value from $R$; we denote this value
by $\valu(t)$.
\item For each $v \in V$, there is a function $\child_v:D \rightarrow
V \cup T$ which assigns, to every possible value of the variable $\labl(v)$, a child
node corresponding to that input value. The edge $(u, u')$ is in $E$
if and only
if there is some $d \in D$ for which $\child_u(d)=u'$.
\end{itemize}
The decision diagram is called \emph{reduced and ordered} if:
$$\begin{array}{lr}
\forall d \in D, v \in V . \child_v(d)\in T \vee \labl(v) < \labl(\child_v(d))
& \mbox{Orderedness} \\
\forall v \in V. \exists d, d' \in D . \child_v(d) \neq \child_v(d') &
\mbox{Reducedness 1} \\
\forall v, v' \in V . \labl(v')=\labl(v) \wedge (\forall d \in D . \child_v(d)=\child_{v'}(d)) \Rightarrow
v=v' & \mbox{Reducedness 2} \\
\end{array}
$$
For each node $v \in V$, the outgoing edge $(v, \child_v(d_1))$ is called
the \emph{low edge}.
\end{definition}
Some special cases are widely used, and have their own names in
the literature: if both $D$ and $R$ are $\{0,1\}$, then these are
binary decision
diagrams \cite{bryant86,somenzi99}; if only $D$ is $\{0,1\}$, they are
called \emph{multi-terminal BDDs} (MTBDDs) \cite{fujita97} or \emph{algebraic
decision diagrams} (ADDs) \cite{bahar93}.
\subsection{Semantics}
A decision diagram with root node $u$, in $k$ variables, denotes
a function $\dbrkt{u}:D^{k}\rightarrow R$. This denotational
semantics can be expressed in the following manner,
which dictates computation by tracing a single path from root
to leaf:
$$
\begin{array}{rclll}
\dbrkt{u} & = & \lambda \vec{x}:D^{k} . & \mbox{if} & \; (u \in T) \; \mbox{then} \; \valu(u) \\
& & & \mbox{else} & \; \dbrkt{\child_u(x_{\labl(u)})}(\vec{x})
\end{array}
$$
Intuitively, the value assigned to the variable $\labl(u)$ labelling the
root node is used to choose which of the child nodes to
evaluate recursively; in this recursive evaluation, that child becomes
the root node of a new, smaller decision diagram, which is evaluated
in turn. The process bottoms out when a terminal node is reached.
The absence of cycles in the decision diagram guarantees that
function evaluation terminates in no more than $k$ steps.
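This path-tracing evaluation is a simple loop in practice. In the sketch below (an illustration with our own encoding, not the paper's), an internal node is a pair of a variable index and a tuple of children, and a terminal is a plain range value; evaluation follows at most $k$ edges.

```python
# Hypothetical sketch: evaluating a decision diagram by tracing a
# single root-to-leaf path. Internal nodes are (variable index,
# children) pairs; terminals are plain range values.
def evaluate(node, assignment):
    while isinstance(node, tuple):        # internal node
        var, children = node
        node = children[assignment[var]]  # follow the edge for x_var
    return node                           # terminal: its range value

# diagram for f(x0) = x0 mod 2 over D = {0, 1, 2}:
mdd = (0, (0, 1, 0))
```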
It has been shown that the \emph{syntactic} properties
of reducedness and orderedness allow the \emph{semantic}
property of canonicity of a decision-diagram representation
of a function to be easily proven.
\begin{theorem}\cite{srinivasan90}
Reduced and ordered DDs are \emph{canonical}: that is,
if two nodes $u, u' \in V \cup T$ are such that $\dbrkt{u}=
\dbrkt{u'}$, then
$u=u'$.
\end{theorem}