Gradient Descent Style Leveraging of Decision Trees and Stumps for Misclassification Cost Performance
Cameron-Jones, RM (2001) Gradient Descent Style Leveraging of Decision Trees and Stumps for Misclassification Cost Performance. In: AI 2001: Advances in Artificial Intelligence, 14th Australian Joint Conference on Artificial Intelligence, 10-14 Dec 2001, Adelaide, Australia.
Available under University of Tasmania Standard License.
This paper investigates, for the task of classifier learning in the presence of misclassification costs, some gradient descent style leveraging approaches to classifier learning: Schapire and Singer's AdaBoost.MH and AdaBoost.MR, Collins et al.'s multi-class logistic regression method, and some modifications that retain the gradient descent style approach. Decision trees and stumps, learned from modified versions of Quinlan's C4.5, are used as the underlying base classifiers. Experiments are reported comparing the average-cost performance of the modified methods to that of the originals, and to the previously suggested "Cost Boosting" methods of Ting and Zheng and of Ting, which also use decision trees based upon modified C4.5 code but do not have an interpretation in the gradient descent framework. While some of the modifications improve upon the originals in cost performance for both trees and stumps, the comparison with tree-based Cost Boosting suggests that, of the methods first experimented with here, one based on stumps shows the most promise.
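The abstract above concerns gradient descent style leveraging (boosting) of decision stumps under misclassification costs. As a rough illustration of the general idea — not the paper's actual algorithms (AdaBoost.MH, AdaBoost.MR, or the Cost Boosting variants), whose details differ — the following sketch runs a binary AdaBoost-style loop over threshold stumps, with example weights initialised from per-example misclassification costs so that costly errors carry more influence. All function names and the cost-weighting choice here are illustrative assumptions.

```python
# Illustrative sketch only: AdaBoost-style leveraging of decision stumps,
# with initial example weights proportional to misclassification costs.
# This is NOT the exact method of the paper; it shows the general scheme.
import math

def train_stump(xs, ys, w):
    # Exhaustively pick the single-feature threshold stump with the
    # lowest weighted error on the current example weights w.
    best = None
    for j in range(len(xs[0])):
        for thr in sorted({x[j] for x in xs}):
            for sign in (1, -1):
                preds = [sign if x[j] >= thr else -sign for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best  # (weighted error, feature index, threshold, sign)

def stump_predict(stump, x):
    _, j, thr, sign = stump
    return sign if x[j] >= thr else -sign

def cost_boost(xs, ys, costs, rounds=10):
    # costs[i] is the cost of misclassifying example i; weights start
    # proportional to cost, then follow the usual exponential update.
    total = sum(costs)
    w = [c / total for c in costs]
    ensemble = []
    for _ in range(rounds):
        stump = train_stump(xs, ys, w)
        err = max(stump[0], 1e-12)          # guard against log(0)
        if err >= 0.5:                      # no better than chance: stop
            break
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, stump))
        # Increase weight on misclassified examples, then renormalise.
        w = [wi * math.exp(-alpha * y * stump_predict(stump, x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    score = sum(a * stump_predict(s, x) for a, s in ensemble)
    return 1 if score >= 0 else -1
```

On a toy one-dimensional problem, a single stump already separates the classes, and the weighted vote of the ensemble reproduces the labels; raising `costs` for one class would shift the learned thresholds toward protecting that class.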
Item Type: Conference or Workshop Item (Paper)
Additional Information: The original publication is available at www.springerlink.com
Date Deposited: 19 Mar 2007
Last Modified: 18 Nov 2014 03:13