This Bayes network learning algorithm uses a hill-climbing search restricted by an order on the variables.
For more information see:
G.F. Cooper, E. Herskovits (1990). A Bayesian method for constructing Bayesian belief networks from databases.
G. Cooper, E. Herskovits (1992). A Bayesian method for the induction of probabilistic networks from data. Machine Learning. 9(4):309-347.
Works with nominal variables only and does not handle missing values.
BibTeX:
@inproceedings{Cooper1990,
  author    = {G.F. Cooper and E. Herskovits},
  booktitle = {Proceedings of the Conference on Uncertainty in AI},
  pages     = {86-94},
  title     = {A Bayesian method for constructing Bayesian belief networks from databases},
  year      = {1990}
}

@article{Cooper1992,
  author  = {G. Cooper and E. Herskovits},
  journal = {Machine Learning},
  number  = {4},
  pages   = {309-347},
  title   = {A Bayesian method for the induction of probabilistic networks from data},
  volume  = {9},
  year    = {1992}
}
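The following is a minimal sketch of the order-restricted greedy search described above, under the assumption of a hypothetical Scorer hook standing in for whichever network score is configured (e.g. a cross-validated score); it is not this class's actual implementation. For each variable, parents are drawn only from variables that precede it in the given order, and are added greedily while the score improves and the parent limit is not exceeded.

import java.util.ArrayList;
import java.util.List;

public class OrderedHillClimbSketch {

    /** Hypothetical scoring hook (not part of the library API). */
    interface Scorer {
        /** Score of variable 'node' with its current parent set. */
        double score(int node, List<Integer> parents);
        /** Score of 'node' if 'candidate' were added to 'parents'. */
        double scoreWith(int node, List<Integer> parents, int candidate);
    }

    /**
     * Greedy, order-restricted parent selection. 'order' is a permutation of
     * the variable indices 0..n-1; the result is a parent set per variable.
     */
    static List<List<Integer>> search(int[] order, int maxParents, Scorer scorer) {
        List<List<Integer>> parentSets = new ArrayList<>();
        for (int i = 0; i < order.length; i++) {
            parentSets.add(new ArrayList<>());
        }
        for (int pos = 0; pos < order.length; pos++) {
            int node = order[pos];
            List<Integer> parents = parentSets.get(node);
            double best = scorer.score(node, parents);
            boolean improved = true;
            while (improved && parents.size() < maxParents) {
                improved = false;
                int bestCandidate = -1;
                // Candidate parents are only the variables earlier in the order.
                for (int prev = 0; prev < pos; prev++) {
                    int candidate = order[prev];
                    if (parents.contains(candidate)) continue;
                    double s = scorer.scoreWith(node, parents, candidate);
                    if (s > best) {
                        best = s;
                        bestCandidate = candidate;
                        improved = true;
                    }
                }
                if (improved) parents.add(bestCandidate);
            }
        }
        return parentSets;
    }
}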
Valid options are:
-N Initial structure is empty (instead of Naive Bayes)
-P <nr of parents> Maximum number of parents
-R Random order. (default false)
-mbc Applies a Markov Blanket correction to the network structure, after a network structure is learned. This ensures that all nodes in the network are part of the Markov blanket of the classifier node.
-S [LOO-CV|k-Fold-CV|Cumulative-CV] Score type (LOO-CV,k-Fold-CV,Cumulative-CV)
-Q Use probabilistic or 0/1 scoring. (default probabilistic scoring)
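A minimal usage sketch, assuming this class is the global K2 search algorithm shipped with Weka (weka.classifiers.bayes.net.search.global.K2) and that a nominal, complete dataset such as weather.nominal.arff is available locally; the option strings are the ones listed above, so adjust the class name and data path to your setup.

import weka.classifiers.bayes.BayesNet;
import weka.classifiers.bayes.net.search.global.K2;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class K2Example {
    public static void main(String[] args) throws Exception {
        // Nominal attributes, no missing values (see the restriction above).
        Instances data = DataSource.read("weather.nominal.arff");
        data.setClassIndex(data.numAttributes() - 1);

        K2 search = new K2();
        // Options as documented above: empty initial structure (-N),
        // at most 2 parents (-P 2), k-fold cross-validation score (-S).
        search.setOptions(new String[] {"-N", "-P", "2", "-S", "k-Fold-CV"});

        BayesNet net = new BayesNet();
        net.setSearchAlgorithm(search);
        net.buildClassifier(data);

        System.out.println(net);
    }
}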
@author Remco Bouckaert (rrb@xm.co.nz)
@version $Revision: 1.8 $