Name        : perl-AI-DecisionTree
Version     : 0.11
Vendor      : obs://build.opensuse.org/devel:languages:perl
Release     : lp155.7.1
Date        : 2023-07-20 15:26:06
Group       : Development/Libraries/Perl
Source RPM  : perl-AI-DecisionTree-0.11-lp155.7.1.src.rpm
Size        : 0.13 MB
Packager    : https://www.suse.com/
Summary     : Automatically Learns Decision Trees
Description :
The 'AI::DecisionTree' module automatically creates so-called "decision trees" to explain a set of training data. A decision tree is a kind of categorizer that uses a flowchart-like process for categorizing new instances. For instance, a learned decision tree might look like the following, which classifies for the concept "play tennis":
                    OUTLOOK
                    /  |  \
                   /   |   \
                  /    |    \
            sunny/  overcast \rainy
                /      |      \
          HUMIDITY     |      WIND
           /   \      *no*    /  \
          /     \            /    \
     high/       \normal    /      \
        /         \   strong/       \weak
      *no*       *yes*    /          \
                        *no*        *yes*
(This example, and the inspiration for the 'AI::DecisionTree' module, come directly from Tom Mitchell's excellent book "Machine Learning", available from McGraw Hill.)
A decision tree like this one can be learned from training data, and then applied to previously unseen data to obtain results that are consistent with the training data.
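Used through the module's interface, that learn-then-apply cycle amounts to: create a tree object, add labeled training instances, train, then query with previously unseen attribute sets. A minimal sketch based on the module's documented interface (the attribute names and values below are illustrative, not a required schema):

    use strict;
    use warnings;
    use AI::DecisionTree;

    my $dtree = AI::DecisionTree->new;

    # Add labeled training instances (the learning phase).
    $dtree->add_instance(
        attributes => { outlook => 'sunny',    humidity => 'high', wind => 'weak' },
        result     => 'no',
    );
    $dtree->add_instance(
        attributes => { outlook => 'overcast', humidity => 'high', wind => 'weak' },
        result     => 'yes',
    );

    # Build the decision tree from the instances seen so far.
    $dtree->train;

    # Apply the learned tree to a previously unseen instance.
    my $result = $dtree->get_result(
        attributes => { outlook => 'overcast', humidity => 'normal', wind => 'strong' },
    );

With only two training instances the resulting tree is trivial, but the same calls scale to the full "play tennis" data set.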
The usual goal of a decision tree is to somehow encapsulate the training data in the smallest possible tree. This is motivated by an "Occam's Razor" philosophy, in which the simplest possible explanation for a set of phenomena should be preferred over other explanations. Also, small trees will make decisions faster than large trees, and they are much easier for a human to look at and understand. One of the biggest reasons for using a decision tree instead of many other machine learning techniques is that a decision tree is a much more scrutable decision maker than, say, a neural network.
The current implementation of this module uses an extremely simple method for creating the decision tree based on the training instances. It uses an Information Gain metric (based on expected reduction in entropy) to select the "most informative" attribute at each node in the tree. This is essentially the ID3 algorithm, developed by J. R. Quinlan in 1986. The idea is that the attribute with the highest Information Gain will (probably) be the best attribute to split the tree on at each point if we're interested in making small trees.
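To make the metric concrete, here is a small standalone sketch (not code from the module; the helper names and instance layout are invented for illustration) of entropy and information gain over a list of labeled instances:

    use strict;
    use warnings;
    use List::Util qw(sum);

    # Entropy of a list of class labels: -sum over classes of p(c) * log2 p(c).
    sub entropy {
        my @labels = @_;
        my %count;
        $count{$_}++ for @labels;
        my $n = @labels;
        return -sum(map { my $p = $_ / $n; $p * log($p) / log(2) } values %count);
    }

    # Information gain of splitting on $attr: entropy before the split minus
    # the size-weighted entropy of each partition induced by $attr's values.
    sub information_gain {
        my ($attr, @instances) = @_;   # each instance: { attrs => {...}, label => '...' }
        my $before = entropy(map { $_->{label} } @instances);
        my %part;
        push @{ $part{ $_->{attrs}{$attr} } }, $_->{label} for @instances;
        my $after = sum(map { (@$_ / @instances) * entropy(@$_) } values %part);
        return $before - $after;
    }

ID3 evaluates this gain for every available attribute at a node, splits on the winner, and recurses on each partition until the labels are pure or the attributes run out.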
RPM found in directory: /packages/linux-pbone/ftp5.gwdg.de/pub/opensuse/repositories/devel:/languages:/perl:/CPAN-A/15.5/x86_64