Craig Matthew Files
Craig M. Files, Ph.D.
Papers Written:
MDD with Added Null-Value and All-Value Edges Applied to
Multi-Valued Domino Logic Gates
Craig M. Files and Mark H. Nodine
Journal of Multiple-Valued Logic and Soft Computing, 2008.
Abstract:
This paper presents a Multi-valued Decision Diagram (MDD) that has additional
null-value and all-value edges. The MDD is based on a multi-valued algebra
that augments multi-valued variables to allow a null output value. A null
output value represents the lack of any valid value for a given input
combination (akin to an output don't care, but its value cannot be changed).
A side effect of the null-value edge is that the MDD is capable of
representing 1-valued variables. The null-value is also used in representing
mutexes (a concept that defines an input don't care condition).
While there have been publications that have created binary decision diagrams
with a third (all-value) edge, to the authors' knowledge, this is the first
time an all-value edge has been added to an MDD. This MDD data structure is
the foundation for Intrinsity's multi-valued logic synthesis algorithms,
including using the MDD to functionally represent multi-valued domino logic
gates.
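The null-value and all-value edges can be pictured with a small sketch. The following Python is purely illustrative -- the class, field names, and evaluation rule are assumptions made for this page, not Intrinsity's implementation:

```python
# Hypothetical sketch of an MDD node over a multi-valued variable, extended with
# a null-value edge (taken when no valid output exists for that input value) and
# an all-value edge (a single successor shared by every input value).

NULL = object()  # sentinel: "no valid value" (an output don't care that cannot change)

class MDDNode:
    def __init__(self, var, children=None, all_edge=None, null_values=()):
        self.var = var                        # index of the variable tested at this node
        self.children = children or {}        # input value -> successor node or terminal
        self.all_edge = all_edge              # successor taken for ANY input value, if set
        self.null_values = set(null_values)   # input values routed to the null terminal

def evaluate(node, assignment):
    """Walk the diagram; terminals are plain output values."""
    while isinstance(node, MDDNode):
        v = assignment[node.var]
        if v in node.null_values:
            return NULL                       # null-value edge: no valid output
        if node.all_edge is not None:
            node = node.all_edge              # all-value edge ignores v
        else:
            node = node.children[v]
    return node

# A 3-valued variable x0 for which x0 == 2 has no valid output:
root = MDDNode(var=0, children={0: "a", 1: "b"}, null_values={2})
print(evaluate(root, [1]))           # "b"
print(evaluate(root, [2]) is NULL)   # True
```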
MDD with Added Null-Value and All-Value Edges
Craig M. Files and Mark H. Nodine
IEEE International Symposium on Multi-Valued Logic, May 2008.
Abstract:
This paper presents an MDD that has additional null-value and all-value edges.
The MDD is based on a multi-valued algebra that augments multi-valued
variables to allow a null output value. A null output value is a value that
cannot be computed but represents the lack of any valid value for a given
input combination (akin to an output don't care, but its value cannot be
changed). A side effect of the null-value edge is that the MDD is capable of
representing 1-valued variables. The null-value is also used in representing
mutexes (a concept that defines an input don't care condition).
While there have been publications that have created BDDs with a third
(all-value) edge, to the authors' knowledge, this is the first time an
all-value edge has been added to an MDD. This MDD data structure is the
foundation for Intrinsity's multi-valued logic synthesis algorithms.
A Mature Methodology for Implementing Multi-Valued Logic in Silicon
Mark H. Nodine and Craig M. Files
IEEE International Symposium on Multi-Valued Logic, May 2008.
Abstract:
This paper gives an overview of methods proposed for implementing
multi-valued logic in CMOS and then describes Intrinsity's patented Fast14
Technology as a mature methodology for silicon implementation of multi-valued
logic. To the authors' knowledge, no previous method of implementing
multi-valued logic has been demonstrated with a design of the complexity of a
microprocessor core. Fast14 Technology is based upon three fundamental
characteristics: the use of (1) footed NMOS transistor domino logic, (2)
multi-phased overlapping clocks, and (3) 1-of-N encoding of MVL signals. To
provide additional opportunities for power optimization, the concepts of null
value and mutex properties are introduced, presenting additional challenges
for MVL representation and synthesis.
Decision Diagrams in Machine Learning: An Empirical Study on Real-Life
Credit-Risk Data
Christophe Mues, Bart Baesens, Craig Files, Jan Vanthienen
Diagrams 2004: 395-397
Decision Diagrams in Machine Learning: an Empirical Study on Real-Life
Credit-Risk Data
Christophe Mues, Bart Baesens, Craig Files, Jan Vanthienen
Expert Systems with Applications, September 2004.
Abstract:
Decision trees are a widely used knowledge representation in machine
learning. However, one of their main drawbacks is the inherent replication
of isomorphic subtrees, as a result of which the produced classifiers
might become too large to be comprehensible by the human experts that have
to validate them. Alternatively, decision diagrams, a generalization of
decision trees taking on the form of a rooted, acyclic digraph instead of
a tree, have occasionally been suggested as a potentially more compact
representation. Their application in machine learning has nonetheless been
criticized, because the theoretical size advantages of subgraph sharing
did not always directly materialize in the relatively scarce reported
experiments on real-world data. Therefore, in this paper, starting from a
series of rule sets extracted from three real-life credit-scoring data
sets, we will empirically assess to what extent decision diagrams are able
to provide a compact visual description. Furthermore, we will investigate
the practical impact of finding a good attribute ordering on the achieved
size savings.
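The size advantage of subgraph sharing that this study measures empirically can be seen in a toy construction. The sketch below (a generic hash-consing build of 3-input parity, assumed for illustration; not the paper's experimental setup) compares the full decision tree against the merged diagram:

```python
# Build the full decision tree of 3-input parity, then merge isomorphic
# subtrees with a unique table (hash-consing) to obtain a decision diagram.

def parity(bits):
    return sum(bits) % 2

def tree_size(n_vars):
    # full binary decision tree: 2^n leaves + (2^n - 1) internal nodes
    return 2 ** (n_vars + 1) - 1

unique = {}  # (var, low, high) -> node : one entry per distinct subgraph

def build(var, prefix, n_vars):
    if var == n_vars:
        return ("leaf", parity(prefix))
    low = build(var + 1, prefix + [0], n_vars)
    high = build(var + 1, prefix + [1], n_vars)
    if low == high:
        return low                      # redundant test: skip the node entirely
    key = ("node", var, low, high)
    return unique.setdefault(key, key)  # identical subgraphs are stored once

root = build(0, [], 3)
# count terminals (2) plus shared internal nodes
print(tree_size(3), "tree nodes vs", len(unique) + 2, "diagram nodes")  # 15 vs 7
```

Parity is the best case for sharing; as the paper notes, the savings on real-world data depend heavily on the attribute ordering.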
A New Functional Decomposition Method As Applied to Machine Learning and
VLSI Layout
C. Files
Ph.D. Dissertation, Portland State University, Portland Oregon, June 2000.
(235 pages)
The dissertation is available in the following formats:
dissertation.ps - Original formatting (235 pages)
dissertation.ps.gz - Original formatting (235 pages)
small_dissertation.ps - A single-spaced version with extended margins (185 pages)
small_dissertation.ps.gz - A single-spaced version with extended margins (185 pages)
Abstract:
In 1953, Ashenhurst created a multi-level logic minimization tool, called
functional decomposition. Even though functional decomposition was first
created to reduce the complexity of switching networks, it has been found to
have many other applications, including pattern recognition, algorithm
creation, data compression, machine learning and logic minimization. The
difficulty is that many of the steps are exponential in complexity, limiting
the size of functions which can be decomposed in a reasonable amount of time.
This dissertation explores these complexity problems and presents a new
functional decomposition methodology that reduces several of them.
The most important issue related to analysis, testing, and implementation of
discrete functions used for logic synthesis and functional decomposition is
representation. Because size complexity can hinder the runtimes of any
algorithms that use them as their data structures, this dissertation uses
decision diagrams for representation. There are two main reasons for using
decision diagrams: their ability to represent many functions with less than
exponential size, and the canonical functional properties that exist when the
variables in a diagram are ordered. Consequently, this dissertation explores
the implementation of two types of decision diagrams and compares the two.
Also presented are several new algorithms based on decision diagrams.
The functional decomposition methodology presented here is applied to VLSI
circuit layout and to machine learning problems. The results show that
functional decomposition is a proper methodology for solving these two very
different problems.
New Multi-Valued Functional Decomposition Algorithms Based on MDDs
C. Files and M. Perkowski
IEEE Transactions on Computer Aided Design , September 2000.
Abstract:
This paper presents two new functional decomposition partitioning algorithms
that use multi-valued decision diagrams. Multi-valued decision diagrams are an
exceptionally good representation for generalized decomposition because they
are canonical and they can represent very large functions. Algorithms developed
in this paper are for Boolean/multi-valued input and output,
completely/incompletely specified functions with application to logic
synthesis, machine learning, data mining and knowledge discovery in databases.
We compare the run-times and decision diagram sizes of our algorithms to
existing decomposition partitioning algorithms based on decision diagrams. The
comparisons show that our algorithms are faster and do not result in
exponential diagram sizes when decomposing functions with small bound sets.
New Functional Representation for the Decomposition of Machine Learning
Problems
C. Files
Third Oregon Symposium on Logic, Design and Learning, May 2000.
Abstract:
The central idea in machine learning is to gather information from a
given data set. This can be a very difficult task because practical databases
are usually very large. To address this difficulty, the assumption of
Occam's Razor (simplest is best) and explicit domain knowledge are
used to reduce the search space. Machine learning is not
an exact methodology or formal theory of learning, but is an attempt at
learning based on heuristics and probabilities. Machine learning is
becoming more and more important as computers provide relatively
inexpensive means to collect and store data. Plus, it has been shown by
Ross et al. that logic synthesis methods (specifically functional
decomposition) can be used for finding solutions in the machine learning
paradigm.
One problem in machine learning is the concept of missing attributes (input
don't know). The concept of a missing attribute is much like an input don't
care in logic synthesis. But, the concepts differ in that an input don't care
covers all possible values of the input, while an input don't know only
represents one possible value of the input. This paper presents a new
functional representation for functions with don't knows and extends the
representation to functional decomposition.
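The don't care / don't know distinction drawn above can be made concrete. The encoding below is a hypothetical illustration (not the paper's representation): a don't know stands for exactly one unrecorded value, so a sample containing one corresponds to a small set of candidate completions, exactly one of which is true.

```python
# A learning sample over 3-valued attributes, with "?" marking a missing
# attribute (input don't know). Unlike an input don't care, which covers all
# values at once, a don't know is one concrete value we simply did not record.

def expand_dont_know(sample, values=(0, 1, 2)):
    """Enumerate every completion of the sample; the truth is exactly one of them."""
    out = [[]]
    for a in sample:
        if a == "?":
            out = [row + [v] for row in out for v in values]
        else:
            out = [row + [a] for row in out]
    return out

print(expand_dont_know([1, "?", 2]))
# three candidates: [1, 0, 2], [1, 1, 2], [1, 2, 2] -- only one is the real sample
```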
Implicit Algorithms for Multi-Valued Input Support Minimization
A. Mishchenko, C. Files, and M. Perkowski
Workshop "Boolesche Probleme", Freiberg, Germany, September 2000.
Abstract:
We present an implicit approach to solve problems arising in decomposition of
incompletely specified multi-valued functions and relations. We introduce a
new representation based on binary-encoded multi-valued decision diagrams
(BEMDDs). This representation shares desirable properties of MDDs, in
particular, compactness, and is applicable to weakly-specified relations with a
large number of output values. This makes our decomposition approach
particularly useful for data mining and machine learning.
Using BEMDDs to represent multi-valued relations we have developed two
complementary input support minimization algorithms. The first algorithm is
efficient when the resulting support contains almost all initial variables; the
second is efficient when it contains only a few. We propose a simple heuristic
measure to estimate the number of minimum support variables and choose which
algorithm to use. Numerical results over a set of multi-valued benchmarks
provide experimental evidence for the efficiency of our approach.
Minimization of Sum of Continuous-Input And/Or Implicant Expressions for
Data Mining Applications
M. Perkowski, L. Jozwiak, and C. Files
Workshop "Boolesche Probleme", Freiberg, Germany, September 1998.
Abstract:
In many Data Mining applications the attributes are continuous (continuous
input variables) and the decision values are a finite set of symbols
(multi-valued output variables). In this paper we give an efficient algorithm
for the minimization of Sum of Continuous-Input AND/OR implicants. Our
expressions generalize the well-known Disjunctive Normal Forms to three-level
structures with continuous inputs. They are compiled to rule-based programs.
The network minimization uses the conditional graph coloring approach to solve
the covering problem.
Multi-Valued Functional Decomposition as a Machine Learning Method
C. Files and M. Perkowski
IEEE International Symposium on Multi-Valued Logic, May 1998.
Abstract:
In the past few years, several authors have presented methods of using
functional decomposition as applied to machine learning. These authors
explore the ideas of functional decomposition, but left the concepts of
machine learning to the papers that they reference. In general, they never
fully explain why a logic synthesis method should be applied to machine
learning. This paper explores and presents the basic concepts of machine
learning, and how some concepts match nicely with multi-valued logic
synthesis, while others pose great difficulties to functional decomposition.
The main reason for using multi-valued synthesis is that many problems are
naturally multi-valued (i.e., values taken from a discrete set). Thus,
mapping the problem directly to a multi-valued set of inputs and outputs is
much more natural than encoding the problem into a binary form. The paper
also points out that any multi-valued logic synthesis method could be applied
to the machine learning problem. But, this paper focuses on multi-valued
functional decomposition because of its generality of minimizing a given data
set.
An Error Reducing Approach to Machine Learning using Multi-Valued
Functional Decomposition
C. Files and M. Perkowski
IEEE International Symposium on Multi-Valued Logic, May 1998.
Abstract:
This paper reports on the minimization of incompletely specified multi-valued
functions using functional decomposition. While functional decomposition was
originally created for the minimization of logic circuits, this paper uses the
decomposition process for both machine learning and logic synthesis of
multi-valued functions. As it turns out, the minimization of logic circuits
maps directly to the concept of "learning" in machine learning, by reducing
the complexity of a given data set. A main difference is that machine
learning problems normally have a large number of output don't cares. Thus,
the decomposition technique presented in this paper is structured for
decomposing functions with a large number of don't cares. Note that there
have been several papers that have discussed the topic of using multi-valued
functional decomposition to functions with a large number of don't cares. The
novelty brought with this paper is that the proposed method is structured to
reduce the resulting "error" generated by previous approaches. "Error" is a
measure of how well a machine learning algorithm approximates the actual, or
true function.
Polarized Pseudo-Kronecker Symmetry with an Application to the Synthesis
of Lattice Decision Diagrams
B.T. Drucker, C.M. Files, M.A. Perkowski, and M. Chrzanowska-Jeske
Proc. ICCIMA'98 Conference, Feb. 1998, Australia.
Abstract:
Layout-driven logic synthesis combines logical and physical design to minimize
interconnect length for speed-, noise- and power-critical applications. The
lattice diagram synthesis approach constructs, for combinational functions,
regular lattices with only local connections and input buses. Lattice diagrams
are directly mappable to hardware without additional layout steps. This
synthesis approach decomposes functions symmetrically using Shannon and Davio
expansions and by repeating decomposition variables, when necessary. It has
been proven that lattice diagrams can always be constructed in this manner,
independent of input variable ordering and expansion types applied. Lattice
size, however, is quite sensitive to these parameters. This paper proposes new
types of symmetries of Boolean functions, and shows how they can be used in a
new approach to the important problem of finding variable-expansion orders for
lattices. Although this research originates from specific technological
considerations, a general problem in functional decomposition is solved: how to
decompose a function to a regular structure of simpler functions using
symmetries. It also has applications in functional decomposition and machine
learning.
Functional Decomposition of MVL Functions using Multi-Valued Decision Diagrams
C. Files, R. Drechsler, and M. Perkowski
IEEE International Symposium on Multi-Valued Logic, May 1997.
Abstract:
In this paper, the minimization of incompletely specified multi-valued
functions using functional decomposition is discussed. From the perspective of
machine learning, learning samples can be implemented as minterms in
multi-valued logic. The representation can then be decomposed into smaller
blocks, resulting in a reduced problem complexity. This gives induced
descriptions through structuring, or feature extraction, of a learning problem.
Our approach to the decomposition is based on expressing a multi-valued
function (learning problem) in terms of a Multi-valued Decision Diagram that
allows the use of Don't Cares. The inclusion of Don't Cares is the emphasis
for this paper since multi-valued benchmarks are characterized as having many
Don't Cares.
Introduction to Exact Minimization of Networks with Complex Gates for
High-Performance Layout Generator - TROPIC
A. Reis, C. Files, M. Perkowski, M. Robert, and D. Auvergne,
Currently in revision, March 1996.
Exact Minimization of Networks with Complex Gates Using Terminal
Suppressed Binary Decision Diagrams and Dissected Pairs
C. Files, A. Reis, M. Perkowski, M. Robert, and D. Auvergne
Presented at the Workshop on Post-Binary Ultra-Large Scale Integration,
(Workshop that preceded the 1996 International Symposium on Multi-Valued Logic)
Santiago de Compostela, Spain, May 28, 1996.
Introduction
This paper describes a top-end minimization algorithm/program called COMPLEX
that is used to optimize a complex gate network. The basis for this program
is the use of a library-free layout generator, instead of a standard
pre-characterized library, such as the traditional layout synthesis method of
standard-cells. The library-free cells are made up of static CMOS complex
gates (SCCGs), where the quality of the final layout is based on the initial
network (input to the layout generator). That is, if the initial network is
given in a form that cannot be easily implemented in complex gates, then the
final layout will not be good. This paper presents a method that produces
"high-quality" initial networks, using a new approach based on minimizing the
complex gates.
Minimization of Networks with Complex Gates
C. Files, A. Reis, M. Perkowski, M. Robert, and D. Auvergne
Workshop "Boolesche Probleme", Freiberg, Germany, September 1996.
Abstract:
This paper describes a top-end minimization algorithm that is used to optimize
a complex gate network. The basis for this approach is the use of a
library-free layout generator, instead of a standard pre-characterized library,
such as the traditional layout synthesis method of standard-cells. The
library-free cells are made up of static CMOS complex gates, where the quality
of the final layout is based on the initial network (input to the layout
generator). That is, if the initial network is given in a form that cannot be
easily implemented in complex gates, then the final layout will not be good.
In this paper, we present a method that produces "high-quality" initial
networks, using a new approach based on minimizing the complex gates.
Mathematically, a complex gate is one that realizes a negation of an arbitrary
positive unate function. This paper describes a method of minimization of
single-output, two-level networks from unlimited complex gates, implemented in
the program COMPLEX. This program is used as a preprocessor to TROPIC, a
automatic layout generator. The presented method is applied to completely
specified functions to minimize the number of gates, variables, transistors and
values of other parameters.
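The definition used above -- a complex gate realizes the negation of a positive unate function -- can be checked mechanically on a truth table. The sketch below is an assumed illustration of that property (it is not part of COMPLEX or TROPIC): the gate output must never rise when any single input flips from 0 to 1.

```python
# Check whether a truth table is the complement of a positive unate function,
# i.e. negative unate in every input -- the class realizable by a single
# static CMOS complex gate (NAND, NOR, AOI, OAI, ...).

from itertools import product

def is_complex_gate(tt, n):
    """tt[i] is the output for the n input bits spelled out by i (MSB = input 0)."""
    for bits in product((0, 1), repeat=n):
        idx = int("".join(map(str, bits)), 2)
        for i in range(n):
            if bits[i] == 0:
                flipped = idx | (1 << (n - 1 - i))   # flip input i from 0 to 1
                if tt[flipped] > tt[idx]:
                    return False                     # output rose: not negative unate
    return True

nand = [1, 1, 1, 0]   # NAND(a, b): complement of the positive unate AND
xor  = [0, 1, 1, 0]   # XOR is not unate in either input
print(is_complex_gate(nand, 2), is_complex_gate(xor, 2))  # True False
```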
A New Approach to Partition Selection in Ashenhurst-Curtis
Functional Decomposition
C. Files
Masters Thesis, University of Idaho, Moscow ID, August 1995. (108 pages)
Abstract:
In the 1950s, Ashenhurst created a multi-level logic minimization
tool, called functional decomposition. Even though functional decomposition
was first created to reduce the complexity of switching networks, it has been
found to have many other applications, including pattern recognition, algorithm
creation, data compression, machine learning and logic minimization. The
difficulty is that many of the steps are exponential in complexity, limiting
the size of functions which can be decomposed in a reasonable amount of time.
This thesis reports on one particular step, partition selection, and presents
the previous attempts to reduce the complexity. Three new search algorithms
are presented that reduce the time complexity from exponential to a polynomial
time complexity. The first algorithm presented is based on using different
criteria than simply using column multiplicity to guide the search. The
motivation for this algorithm is the similarities between functional
decomposition and the construction of decision trees. This algorithm uses a
search heuristic that evaluates N*(N+1)/2 - 1 partitions out of a total
search space of 2^N possible partitions. The second algorithm presented finds
decompositions with two inputs and one output. This algorithm is based on the
observation that the smallest representation of a function of N inputs is N-1
two-input, one-output functions.
functions. The third algorithm is the combination of the other two algorithms.
Results of this algorithm show that it is highly effective in producing
minimized switching networks in a short amount of time.
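The column multiplicity that guides this partition search can be computed directly from the decomposition chart. The sketch below is generic Ashenhurst-Curtis bookkeeping, assumed for illustration (it is not the thesis's search code): bound-set values index the columns, free-set values index the rows, and the multiplicity is the number of distinct columns.

```python
# Column multiplicity of a bound-set/free-set partition: if mu columns are
# distinct, the bound-set block needs only ceil(log2(mu)) wires to the rest
# of the network.

from itertools import product
from math import ceil, log2

def column_multiplicity(f, n, bound):
    free = [i for i in range(n) if i not in bound]
    columns = set()
    for b in product((0, 1), repeat=len(bound)):     # one column per bound-set value
        col = []
        for r in product((0, 1), repeat=len(free)):  # one row per free-set value
            bits = [0] * n
            for var, val in zip(bound, b):
                bits[var] = val
            for var, val in zip(free, r):
                bits[var] = val
            col.append(f(tuple(bits)))
        columns.add(tuple(col))
    return len(columns)

def maj(x):                     # majority of three inputs
    return int(sum(x) >= 2)

mu = column_multiplicity(maj, 3, bound=[0, 1])
print(mu, "distinct columns ->", ceil(log2(mu)), "wire(s) from the bound-set block")
```

For majority with bound set {x0, x1} this gives 3 distinct columns, so the bound-set block needs 2 wires; a partition with multiplicity 2 would decompose through a single wire.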
Using a Search Heuristic in an NP-Complete problem in
Ashenhurst-Curtis Decomposition
C. Files
Graduate Student Final Report, AFOSR, Wright Laboratories, August 1994.
Abstract:
This paper reports on the application of several searching algorithms to
Ashenhurst-Curtis decomposition. A final search algorithm is then presented
that is based on using different values instead of just using column
multiplicity to guide the search. The motivation behind this algorithm is the
analogous behavior of functional decomposition to decision trees. By using
this algorithm, a search space of 2^N possible partitions can be covered by
evaluating only N*(N+1)/2 partitions.