Open Access Repository

Extending and benchmarking Cascade-Correlation : extensions to the Cascade-Correlation architecture and benchmarking of feed-forward supervised artificial neural networks

Waugh, SG (1995) Extending and benchmarking Cascade-Correlation : extensions to the Cascade-Correlation architecture and benchmarking of feed-forward supervised artificial neural networks. PhD thesis, University of Tasmania.

PDF (Whole thesis): whole_WaughSamu...pdf (14MB)
Available under University of Tasmania Standard License.

Abstract

This thesis is divided into two parts: the first examines various extensions to Cascade-Correlation, and the second examines the benchmarking of feed-forward supervised artificial neural networks, including back-propagation and Cascade-Correlation.
The first extensions to the training mechanism of Cascade-Correlation involve the inclusion of patience to stop the addition of hidden nodes, and the introduction of alternative methods for training the candidate pool; these methods greatly improve the training speed of the algorithm. Secondly, reducing the number of connections within Cascade-Correlation networks is examined, both by introducing hidden nodes with limited connection strategies and by pruning the fully-connected hidden nodes and the output layer. Three methods of stopping the pruning process are briefly investigated. It is shown that adding limited-connection hidden nodes is effective in altering the style of network topology, if not in reducing the number of connections. Pruning within Cascade-Correlation drastically reduces the number of connections required without affecting the classification performance of the networks developed. Furthermore, all of the methods of halting the pruning process are shown to be effective.
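The patience idea described above can be illustrated with a minimal sketch: training stops once the error has failed to improve meaningfully for a fixed number of epochs. This is an assumption-laden illustration, not the thesis's implementation; the function name, thresholds, and defaults are all invented here.

```python
def train_with_patience(step, patience=8, min_improve=0.01, max_epochs=1000):
    """Run `step()` (which performs one epoch and returns the current
    error) until the error fails to improve by the relative fraction
    `min_improve` within `patience` consecutive epochs.

    Illustrative only: parameter names and defaults are assumptions,
    not values from the thesis.
    """
    best = float("inf")
    epochs_since_best = 0
    epoch = 0
    for epoch in range(max_epochs):
        err = step()
        if err < best * (1.0 - min_improve):
            # Meaningful improvement: record it and reset the counter.
            best = err
            epochs_since_best = 0
        else:
            # Stagnation: give up once patience is exhausted.
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break
    return best, epoch
```

The same loop can govern either phase of Cascade-Correlation training: halting candidate-pool training, or deciding when adding further hidden nodes no longer pays off.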
The second part of the thesis concentrates on benchmarking feed-forward supervised artificial neural networks, in particular Cascade-Correlation. The earlier part of the thesis highlights the need for effective benchmarks, as a large number of real-world problems do not require anything more than a single layer of weights to achieve near-optimal performance given the available data. The second part initially investigates two new real-world problems. Although both turn out to be useful problems to examine, testing many of the features of Cascade-Correlation described earlier, they too do not require much more than a single layer of weights, and hence do not test the power of Cascade-Correlation or other systems which allow the use of hidden nodes. Two methods of generating artificial data are then examined as ways of producing increasingly complex data sets. The application of these benchmarks to the comparison of various artificial neural network methods is examined. The generated data sets are effective both in highlighting the differences between the algorithms (for example, it is shown that Quickprop and the activation-function offset methods of accelerating training are not always useful) and in providing more detailed results on the various Cascade-Correlation modifications.
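To make the idea of "increasingly complex" artificial data concrete, here is one hedged sketch of a generator with a tunable complexity knob; the generator, its name, and its parameters are assumptions for illustration, not the methods used in the thesis. At complexity 0 the two classes are linearly separable (a single layer of weights suffices); raising the complexity adds oscillations to the class boundary, which only networks with hidden nodes can track.

```python
import math
import random

def make_dataset(n, complexity, seed=0):
    """Generate `n` labelled 2-D points. The class boundary is the
    curve y = 0.5 * sin(2*pi*complexity*x): at complexity 0 it is the
    line y = 0 (linearly separable); higher values fold the boundary
    into more oscillations, demanding hidden nodes to classify.

    Illustrative generator only; not one of the thesis's two methods.
    """
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(0.0, 1.0)
        y = rng.uniform(-1.0, 1.0)
        label = 1 if y > 0.5 * math.sin(2 * math.pi * complexity * x) else 0
        data.append(((x, y), label))
    return data
```

A benchmark suite built this way can sweep the complexity parameter and report, for each algorithm, where its classification accuracy begins to degrade, which is the kind of graded comparison the abstract describes.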

Item Type: Thesis (PhD)
Copyright Holders: The Author
Copyright Information:

Copyright 1995 the Author. The University is continuing to endeavour to trace the copyright owner(s), and in the meantime this item has been reproduced here in good faith. We would be pleased to hear from the copyright owner(s).

Additional Information:

Thesis (Ph.D.)--University of Tasmania, 1997. Includes bibliographical references. In two parts: the first examines extensions to the training mechanism of Cascade-Correlation and methods of reducing the number of connections, including limited-connection hidden nodes and pruning; the second concentrates on benchmarks.

Date Deposited: 04 Feb 2015 23:23
Last Modified: 11 Mar 2016 05:55
