The Influence of Dimensions on the Complexity of Computing Decision Trees

A decision tree recursively splits the d-dimensional feature space and then assigns class labels based on the resulting partition. Decision trees have been part of the basic machine-learning toolkit for decades. A large body of work treats heuristic algorithms that compute a decision tree from training data, usually aiming in particular to minimize the size of the resulting tree. In contrast, little is known about the complexity of the underlying computational problem of computing a minimum-size tree for the given training data. We study this problem with respect to the number d of dimensions of the feature space. We show that it can be solved in O(n^{2d+1}) time, but under reasonable complexity-theoretic assumptions it is not possible to achieve f(d)·n^{o(d/log d)} running time, where n is the number of training examples. The problem is solvable in (dR)^{O(dR)}·n^{1+o(1)} time if there are exactly two classes and R is an upper bound on the number of tree leaves labeled with the smallest class.
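To make the underlying computational problem concrete, here is a minimal brute-force sketch (not the paper's algorithm): it computes the exact minimum number of leaves of an axis-aligned decision tree that classifies a tiny training set without error, by trying every dimension and every threshold between consecutive distinct coordinates. Its running time is exponential, which is exactly why the fine-grained bounds above matter.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def min_leaves(points):
    """Exact minimum number of leaves of an axis-aligned decision tree
    classifying `points` (a tuple of ((x1, ..., xd), label) pairs) with
    zero training error. Brute force; only feasible for tiny inputs."""
    labels = {lab for _, lab in points}
    if len(labels) <= 1:
        return 1  # all examples share a label: a single leaf suffices
    d = len(points[0][0])
    best = float("inf")
    for dim in range(d):
        # candidate thresholds: midpoints between consecutive distinct coordinates
        coords = sorted({p[dim] for p, _ in points})
        for lo, hi in zip(coords, coords[1:]):
            t = (lo + hi) / 2
            left = tuple(pt for pt in points if pt[0][dim] <= t)
            right = tuple(pt for pt in points if pt[0][dim] > t)
            best = min(best, min_leaves(left) + min_leaves(right))
    return best
```

For example, the 2-dimensional XOR instance {(0,0)→0, (0,1)→1, (1,0)→1, (1,1)→0} needs four leaves, since no single axis-aligned split separates the classes.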

keywords: Artificial Intelligence, Computational Geometry, Data Structures, Fixed-Parameter Tractability

Workshop or Poster (weakly reviewed)

Fabrizio Montecchiani, Ignaz Rutter, Jules Wulms, Maarten Löffler, Manuel Sorge, Marcin Pilipczuk, Raimund Seidel, Stephen G. Kobourov
The Influence of Dimensions on the Complexity of Computing Decision Trees
CGWEEK: 6th Workshop on Geometry and Machine Learning, 2022
https://www.cs.utah.edu/~jeffp/WaGoML/index.html

Archived Publication (not reviewed)

Fabrizio Montecchiani, Ignaz Rutter, Jules Wulms, Maarten Löffler, Manuel Sorge, Marcin Pilipczuk, Raimund Seidel, Stephen G. Kobourov
The Influence of Dimensions on the Complexity of Computing Decision Trees
arXiv:2205.07756, 2022
http://arXiv.org/abs/2205.07756
