University of Tasmania
whole_WaughSamuelGeorge1997_thesis.pdf (13.52 MB)

Extending and benchmarking Cascade-Correlation : extensions to the Cascade-Correlation architecture and benchmarking of feed-forward supervised artificial neural networks

thesis
posted on 2023-05-27, 15:15 authored by Waugh, SG
This thesis is divided into two parts: the first examines various extensions to Cascade-Correlation, and the second examines the benchmarking of feed-forward supervised artificial neural networks, including back-propagation and Cascade-Correlation.

The first set of extensions to the training mechanism of Cascade-Correlation involves the inclusion of patience to stop the addition of hidden nodes and the introduction of alternative methods for training the candidate pool. These methods greatly improve the training speed of the algorithm. Secondly, reducing the number of connections within Cascade-Correlation networks is examined, both by introducing hidden nodes with limited connection strategies and by pruning the fully-connected hidden nodes and the output layer. Three methods of stopping the pruning process are briefly investigated. Adding limited-connection hidden nodes is shown to be effective in altering the style of network topology, if not in reducing the number of connections, while pruning within Cascade-Correlation drastically reduces the number of connections required without affecting the classification performance of the networks developed. Furthermore, all of the methods for halting the pruning process are shown to be effective.

The second part of the thesis concentrates on benchmarking feed-forward supervised artificial neural networks, in particular Cascade-Correlation. The earlier part of the thesis highlights the need for effective benchmarks, as a large number of real-world problems do not require anything more than a single layer of weights to achieve near-optimal performance given the available data. The second part initially investigates two new real-world problems.
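The abstract does not detail how the patience criterion operates during candidate-pool training. A minimal sketch of such a criterion, under stated assumptions, might look like the following; the function names, pool representation, and thresholds are illustrative, not the thesis's actual implementation:

```python
def train_candidate_pool(candidates, train_step, correlation,
                         max_epochs=200, patience=8, min_improvement=1e-3):
    """Train a pool of candidate hidden units, stopping early when the best
    candidate's correlation with the residual error stops improving.

    `train_step(unit)` performs one training update on a candidate;
    `correlation(unit)` scores it. Both are caller-supplied (hypothetical
    interfaces for this sketch).
    """
    best_score = float("-inf")
    stale_epochs = 0
    for _ in range(max_epochs):
        for unit in candidates:
            train_step(unit)               # one update per candidate
        score = max(correlation(u) for u in candidates)
        if score - best_score > min_improvement:
            best_score, stale_epochs = score, 0   # meaningful improvement
        else:
            stale_epochs += 1
            if stale_epochs >= patience:
                break                      # patience exhausted: stop training
    # The best-scoring candidate would then be installed as a hidden node.
    return max(candidates, key=correlation)
```

The same early-stopping pattern can also bound the addition of hidden nodes: if installing a new node fails to improve network error by some margin, node addition halts.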
Although both turn out to be useful problems to examine, testing many of the features of Cascade-Correlation described earlier, they too do not require much more than a single layer of weights, and hence do not test the power of Cascade-Correlation or other systems that allow the use of hidden nodes. Two methods of generating artificial data are then examined as ways of producing increasingly complex data sets, and the application of these benchmarks to the comparison of various artificial neural network methods is examined. The generated data sets are effective in highlighting the differences between the algorithms: for example, it is shown that Quickprop and the activation-function-offset methods of accelerating training are not always useful. The generated sets also provide more detailed results on the various Cascade-Correlation modifications.
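The abstract does not describe the thesis's two generators, but the general idea of a data set with a tunable complexity knob can be sketched as follows. This is purely an illustrative assumption: labelling points by nearest random prototype, so that more prototype regions yield a less linearly separable two-class problem — not the thesis's actual method:

```python
import random

def generate_dataset(n_samples, n_features, n_regions, seed=0):
    """Hypothetical generator: label each point by the parity of its nearest
    random prototype. Increasing `n_regions` fragments the input space into
    more decision regions, making the problem harder for a single layer of
    weights. (Illustrative only; not the thesis's generators.)"""
    rng = random.Random(seed)
    prototypes = [[rng.uniform(-1, 1) for _ in range(n_features)]
                  for _ in range(n_regions)]
    data = []
    for _ in range(n_samples):
        x = [rng.uniform(-1, 1) for _ in range(n_features)]
        nearest = min(range(n_regions),
                      key=lambda k: sum((a - b) ** 2
                                        for a, b in zip(x, prototypes[k])))
        data.append((x, nearest % 2))      # two classes via region parity
    return data
```

With n_regions=2 the problem is roughly linearly separable; raising it produces the kind of increasingly complex benchmark data the abstract describes.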

History

Publication status

  • Unpublished

Rights statement

Copyright 1995 the Author. The University is continuing to endeavour to trace the copyright owner(s), and in the meantime this item has been reproduced here in good faith; we would be pleased to hear from the copyright owner(s). Thesis (Ph.D.)--University of Tasmania, 1997. Includes bibliographical references.

Repository Status

  • Open
