
Numerical Results

This section compares the results of Espresso and EXORCISM and gives a partial description of functions from the ``Learning Benchmark.'' The first table shows, for every function minimized with Espresso, the On-Set, the Don't-Care Set, the number of terms, the number of literals, the calculated DFC, the CPU time, and the average number of errors in a learning experiment. The second table reports the same quantities for EXORCISM.

The average error on the individual functions was calculated as follows. First, each method was given a random set of data to train on, ranging from 25 to 250 cares out of a total of 256 possible cares. Once the method was trained, all 256 cases were tested and the number of differences was recorded as the error. This procedure was repeated 10 times for each training-sample size, with sample sizes taken in intervals of 25. Thus, the total number of runs for each function was 100, over varying sample sizes. None of the learning was incremental; all of the runs were independent. For each function, the average number of errors over the entire set of runs was recorded in the table.
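To make the procedure concrete, the following is a minimal sketch of the evaluation loop, assuming an 8-input (256-row) truth table. The function fit_minimizer is a hypothetical stand-in for training Espresso or EXORCISM on a partial truth table; it is not part of either tool's interface.

import random

NUM_INPUTS = 8                       # 2^8 = 256 possible input vectors
ALL_INPUTS = list(range(2 ** NUM_INPUTS))

def average_error(target, fit_minimizer, seed=0):
    """Average error over 100 independent runs:
    sample sizes 25, 50, ..., 250, each repeated 10 times."""
    rng = random.Random(seed)
    errors = []
    for sample_size in range(25, 251, 25):
        for _ in range(10):
            # Draw a random training set of cares; the rest are don't-cares.
            train = rng.sample(ALL_INPUTS, sample_size)
            learned = fit_minimizer({x: target(x) for x in train})
            # Test on the full 256 cases and count disagreements.
            errors.append(sum(learned(x) != target(x) for x in ALL_INPUTS))
    return sum(errors) / len(errors)

The sketch only abstracts the experimental protocol described above; how the minimizer is trained and how the learned cover is evaluated are left behind the hypothetical fit_minimizer and learned callables.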

Espresso generalizes as well as C4.5, a mainstream ML method [7]. It does, however, have a weakness when the function to be learned is most naturally represented with EXORs. This points to the complementary nature of these two programs in the search for low-DFC solutions. Lower combinational complexity (DFC, gate count, decision tree node count, decision diagram size, etc.) provides better generalization [15, 6, 19, 22]. There is a correlation greater than 0.9 between complexity reduction and generalization performance for both SMOG (RODD size) and FLASH (DFC) [22]. In the two comprehensive tables, we provide results for functions, such as parity and palindrome, that do not have a low-complexity SOP representation but do have a low-complexity ESOP representation as total functions.
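As a small illustration of why such functions favor ESOP, the sketch below tabulates term counts for n-bit parity, assuming the well-known result that no two odd-weight minterms are adjacent, so a minimal SOP needs 2^(n-1) product terms, while a minimal ESOP is simply the EXOR of the n input literals.

for n in (4, 6, 8):
    sop_terms = 2 ** (n - 1)   # one product term per odd-weight minterm
    esop_terms = n             # one single-literal term per variable, combined by EXOR
    print(f"{n}-bit parity: SOP terms = {sop_terms}, ESOP terms = {esop_terms}")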

