TY - JOUR
T1 - Bayesian Network Classifiers
AU - Friedman, Nir
AU - Geiger, Dan
AU - Goldszmidt, Moises
PY - 1997
Y1 - 1997
AB - Recent work in supervised learning has shown that a surprisingly simple Bayesian classifier with strong assumptions of independence among features, called naive Bayes, is competitive with state-of-the-art classifiers such as C4.5. This fact raises the question of whether a classifier with less restrictive assumptions can perform even better. In this paper we evaluate approaches for inducing classifiers from data, based on the theory of learning Bayesian networks. These networks are factored representations of probability distributions that generalize the naive Bayesian classifier and explicitly represent statements about independence. Among these approaches we single out a method we call Tree Augmented Naive Bayes (TAN), which outperforms naive Bayes, yet at the same time maintains the computational simplicity (no search involved) and robustness that characterize naive Bayes. We experimentally tested these approaches, using problems from the University of California at Irvine repository, and compared them to C4.5, naive Bayes, and wrapper methods for feature selection.
KW - Bayesian networks
KW - Classification
UR - http://www.scopus.com/inward/record.url?scp=0031276011&partnerID=8YFLogxK
DO - 10.1023/a:1007465528199
M3 - Article
AN - SCOPUS:0031276011
SN - 0885-6125
VL - 29
SP - 131
EP - 163
JO - Machine Learning
JF - Machine Learning
IS - 2-3
ER -