For each run, carries out an n-fold cross-validation, using the set SplitEvaluator to generate some results. If the class attribute is nominal, the dataset is stratified. Results are generated for each fold, so you may wish to use this in conjunction with an AveragingResultProducer to obtain averages for each run.
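The following Java sketch (illustrative only, not part of this class's documentation) shows one way of wiring this producer up with a ClassifierSplitEvaluator and wrapping it in an AveragingResultProducer so that the per-fold results are averaged into one result per run; the class and setter names are the standard weka.experiment API, but the concrete configuration (ZeroR, 10 folds) is only an assumed example.

  import weka.classifiers.rules.ZeroR;
  import weka.experiment.AveragingResultProducer;
  import weka.experiment.ClassifierSplitEvaluator;
  import weka.experiment.CrossValidationResultProducer;

  public class AveragedCVSetup {
    public static void main(String[] args) throws Exception {
      // Evaluate a classifier (ZeroR here as a placeholder) on each train/test split
      ClassifierSplitEvaluator se = new ClassifierSplitEvaluator();
      se.setClassifier(new ZeroR());

      // 10-fold cross-validation per run; produces one result row per fold
      CrossValidationResultProducer cvrp = new CrossValidationResultProducer();
      cvrp.setNumFolds(10);
      cvrp.setSplitEvaluator(se);

      // Average the per-fold results so each run yields a single result row
      AveragingResultProducer arp = new AveragingResultProducer();
      arp.setResultProducer(cvrp);
      // arp would then be set as the result producer of a weka.experiment.Experiment
    }
  }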
Valid options are:
-X <number of folds> The number of folds to use for the cross-validation. (default 10)
-D Save raw split evaluator output.
-O <file/directory name/path> The filename where raw output will be stored. If a directory name is specified then individual outputs will be gzipped, otherwise all output will be zipped to the named file. Use in conjunction with -D. (default splitEvalutorOut.zip)
-W <class name> The full class name of a SplitEvaluator. e.g. weka.experiment.ClassifierSplitEvaluator
Options specific to split evaluator weka.experiment.ClassifierSplitEvaluator:
-W <class name> The full class name of the classifier. e.g. weka.classifiers.bayes.NaiveBayes
-C <index> The index of the class for which IR statistics are to be output. (default 1)
-I <index> The index of an attribute to output in the results. This attribute should identify an instance so that it is possible to determine which instances are in the test set of a cross-validation. If 0, no output is generated. (default 0)
-P Add target and prediction columns to the result for each fold.
Options specific to classifier weka.classifiers.rules.ZeroR:
-D If set, the classifier is run in debug mode and may output additional info to the console.
All options after -- will be passed to the split evaluator.
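As an illustrative example (assumed, not taken from this documentation), an option string matching the defaults listed above can also be applied programmatically via setOptions together with weka.core.Utils.splitOptions:

  import weka.core.Utils;
  import weka.experiment.CrossValidationResultProducer;

  public class CVOptionsExample {
    public static void main(String[] args) throws Exception {
      CrossValidationResultProducer cvrp = new CrossValidationResultProducer();
      // Everything after the first -- is handed to the split evaluator, which in
      // turn passes its own trailing options on to the classifier it wraps.
      String opts = "-X 10 -W weka.experiment.ClassifierSplitEvaluator -- "
          + "-W weka.classifiers.rules.ZeroR";
      cvrp.setOptions(Utils.splitOptions(opts));
    }
  }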
@author Len Trigg (trigg@cs.waikato.ac.nz)
@version $Revision: 1.17 $