
NEW SEMINAR ``LOGIC AND DATA MINING''




We decided to join forces and create a common regular "working seminar" series for graduate students and faculty.

The seminar will be devoted to problems of Data Mining
and Machine Learning, with special attention paid to inductive
logic methods, such as Constructive Induction or Inductive Logic Programming.

Data Mining methods are applicable to finding any kind of new information
in databases and are currently among the hottest topics in industry,
with applications in medicine, law, banking, finance, insurance,
the stock market, the military, and many other fields.

Interestingly, many Data Mining approaches use techniques that are quite
similar to those developed over the last 10 years in the area of
Electronic Design Automation, which sparked our initial interest in this subject.


The seminars will take place on Thursdays, 4 p.m. - 6 p.m., in room SA 54 (SEAS ANNEX Building).

They will be devoted to:
- presentation of general background information on logic and other methods in Data Mining,
- presentation of original research ideas and application topics,
- presentation of new papers and reports by other researchers, and book discussions,
- discussion of students' work.

The first meeting will be held on Thursday, October 8th.

Marek Perkowski will present the first talk
in the series, ``Introduction to Constructive Induction Approach to Data Mining''.

Everybody is welcome.


Malgorzata Chrzanowska-Jeske

Douglas Hall,

Marek Perkowski


DECOMPOSITION OF MULTIPLE-VALUED RELATIONS FOR DATA MINING AND VLSI DESIGN


Speaker:
Prof. Marek Perkowski
Department of Electrical and Computer Engineering
Portland State University
Oregon, USA.


Title: DECOMPOSITION OF MULTIPLE-VALUED RELATIONS FOR DATA MINING AND VLSI DESIGN

Abstract:


Several approaches to Machine Learning (ML) based on logic methods have been developed, usually using two-level AND/OR networks or decision trees and diagrams.


This lecture presents a new constructive induction approach to ML, based on functional decomposition, which has been developed at Portland State University in collaboration with Wright Labs and ABTECH Corporation.


In our case, we decompose multi-valued (MV) relations to reduce error, minimize complexity, and maximize the "explainability" of the induced concepts.


Our decomposition is non-disjoint and multi-level. A fundamental difference between the decomposition of MV functions and MV relations is discussed: column (cofactor) pair compatibility translates to group compatibility for functions, but not for relations. This makes the decomposition of relations more difficult.
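To make the compatibility distinction concrete, here is a minimal illustration of our own (not the decomposer discussed in the talk), assuming each cell of a relation's decomposition chart holds the set of permitted output values: three columns can be compatible in every pair yet share no common output in some row, so they cannot be merged as a group. For a completely specified function, where each cell is a single value, pairwise compatibility reduces to equality and therefore does extend to groups.

```python
# Illustrative sketch only: columns of a one-row decomposition chart of a
# relation, where each cell is the SET of output values permitted for that row.
# Two columns are pairwise compatible if every row admits a shared output;
# a group is compatible if every row admits an output common to all columns.

from itertools import combinations


def pair_compatible(col_a, col_b):
    """True if, in every row, the two columns share at least one allowed output."""
    return all(a & b for a, b in zip(col_a, col_b))


def group_compatible(cols):
    """True if, in every row, all columns share at least one allowed output."""
    return all(set.intersection(*cells) for cells in zip(*cols))


# One-row chart: each column is a list with one cell (a set of allowed outputs).
c1, c2, c3 = [{1, 2}], [{2, 3}], [{1, 3}]

print(all(pair_compatible(a, b) for a, b in combinations([c1, c2, c3], 2)))  # True
print(group_compatible([c1, c2, c3]))                                        # False
```

Every pair of columns shares an output ({2}, {3}, {1} respectively), but no single output is allowed by all three, which is exactly the situation that cannot arise for completely specified functions.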


Efficient function representation is very important for speed and memory requirements of multiple-valued decomposers. We present a new representation of multiple-valued relations (functions in particular), called Labeled Rough Partitions (LRP). LRPs improve on Rough Partition representation by labeling their blocks with variable values and by representing blocks efficiently.
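The abstract does not spell out the LRP data structure, so the sketch below is only one plausible reading of the idea; the names (labeled_blocks, rows) and the layout are assumptions. The intent illustrated is that, for each variable, only the specified (care) rows are grouped into blocks, each block labeled with the variable value that produced it, while unspecified input combinations are never stored.

```python
# Hedged illustration, not the actual LRP implementation: group the indices of
# specified rows into blocks labeled by a chosen variable's value. Rows that
# are not listed (don't cares) stay implicit, which keeps the structure small
# for very strongly unspecified functions and relations.

from collections import defaultdict


def labeled_blocks(rows, var):
    """Map each value of `var` to the block (set of row indices) it labels."""
    blocks = defaultdict(set)
    for i, row in enumerate(rows):
        blocks[row[var]].add(i)          # label -> block of row indices
    return dict(blocks)


# Tiny multi-valued sample: only 4 of the many possible input combinations are specified.
rows = [
    {"x": 0, "y": 2, "out": {1}},
    {"x": 1, "y": 2, "out": {0, 2}},     # a relation cell: more than one allowed output
    {"x": 0, "y": 1, "out": {1}},
    {"x": 2, "y": 0, "out": {2}},
]

print(labeled_blocks(rows, "x"))  # {0: {0, 2}, 1: {1}, 2: {3}}
print(labeled_blocks(rows, "y"))  # {2: {0, 1}, 1: {2}, 0: {3}}
```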


The LRP representation is especially efficient for very strongly unspecified multiple-valued input, multiple-valued output functions and relations, which are typical for Machine Learning applications. Our approach is efficient and scales to learning from large examples. We give a comparison with previous approaches on many examples from ML, FPGA, and VLSI circuit design benchmarks.




Date:
Thursdays.
Time:
4 P.M.
Place:
SEAS ANNEX ROOM 54.


This is the second part of the series, and I will concentrate on Machine Learning and Knowledge Discovery, as well as a comparison with other systems. Next, an approach based on multi-valued logic will be presented.