Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14365/1213
Title: Evaluation of global and local training techniques over feed-forward neural network architecture spaces for computer-aided medical diagnosis
Authors: İnce, Türker
Kiranyaz, Serkan
Pulkkinen, Jenni
Gabbouj, Moncef
Keywords: Artificial neural networks
Backpropagation
Particle swarm optimization
Decision-Making
Publisher: Pergamon-Elsevier Science Ltd
Abstract: In this paper, we investigate the performance of global versus local techniques for training neural network classifiers on medical diagnosis problems. The methodology involves a systematic and exhaustive evaluation of classifier performance over a neural network architecture space and with respect to training depth for a given problem. The architecture space is defined over feed-forward, fully connected artificial neural networks (ANNs), which are widely used in computer-aided decision support systems in the medical domain, and two popular training methods are explored for them: conventional backpropagation (BP) and particle swarm optimization (PSO). The two training techniques are compared in terms of classification performance on three medical diagnosis problems (breast cancer, heart disease, and diabetes) from the Proben1 benchmark dataset, and computational and architectural analyses are performed for an extensive assessment. The results clearly demonstrate that the two algorithms cannot be compared over a single network with a fixed set of training parameters, as most earlier work in this field has done, because training and test classification performance vary significantly and depend directly on the network architecture, the training depth and method, and the available dataset. We therefore show that an extensive evaluation method such as the one proposed in this paper is needed to obtain a reliable and detailed performance assessment. From this evaluation we conclude that the PSO algorithm usually has better generalization ability across the architecture space, whereas BP can occasionally provide better training and/or test classification performance for some network configurations. Furthermore, PSO, as a global training algorithm, generally achieves minimum test classification error regardless of training depth (shallow or deep), and its average classification performance varies less with network architecture. In terms of computational complexity, BP is in general superior to PSO over the entire architecture space used. (C) 2010 Elsevier Ltd. All rights reserved.
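The comparison described in the abstract can be illustrated with a small, self-contained sketch. The following Python code is not the authors' implementation: it trains a one-hidden-layer MLP with (a) plain gradient-descent backpropagation and (b) a basic global-best PSO over the flattened weight vector, then sweeps a toy "architecture space" of hidden-layer sizes. The synthetic dataset, network sizes, and all hyperparameters (swarm size, inertia and acceleration coefficients, learning rate, iteration counts) are illustrative assumptions standing in for the Proben1 problems and the settings used in the paper.

# Minimal sketch (not the authors' code): BP vs. PSO training of a
# one-hidden-layer MLP, swept over a toy architecture space.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary classification data (assumed stand-in for a Proben1 problem).
X = rng.normal(size=(200, 8))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(float)

def init_weights(n_in, n_hid):
    # Flattened weight vector: W1 (n_in x n_hid), b1, W2 (n_hid), b2.
    return rng.normal(scale=0.5, size=n_in * n_hid + 2 * n_hid + 1)

def unpack(w, n_in, n_hid):
    i = n_in * n_hid
    W1 = w[:i].reshape(n_in, n_hid); b1 = w[i:i + n_hid]
    W2 = w[i + n_hid:i + 2 * n_hid]; b2 = w[-1]
    return W1, b1, W2, b2

def forward(w, X, n_hid):
    W1, b1, W2, b2 = unpack(w, X.shape[1], n_hid)
    h = np.tanh(X @ W1 + b1)                       # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2))), h  # sigmoid output

def mse(w, X, y, n_hid):
    p, _ = forward(w, X, n_hid)
    return np.mean((p - y) ** 2)

def train_bp(X, y, n_hid, epochs=300, lr=0.1):
    # Plain gradient-descent backpropagation on the MSE loss.
    w = init_weights(X.shape[1], n_hid)
    for _ in range(epochs):
        W1, b1, W2, b2 = unpack(w, X.shape[1], n_hid)
        p, h = forward(w, X, n_hid)
        d_out = (p - y) * p * (1 - p)               # output error signal
        gW2 = h.T @ d_out / len(y); gb2 = d_out.mean()
        d_hid = np.outer(d_out, W2) * (1 - h ** 2)  # back through tanh
        gW1 = X.T @ d_hid / len(y); gb1 = d_hid.mean(axis=0)
        w -= lr * np.concatenate([gW1.ravel(), gb1, gW2, [gb2]])
    return w

def train_pso(X, y, n_hid, particles=20, iters=150):
    # Basic global-best PSO over the flattened weight vector.
    dim = X.shape[1] * n_hid + 2 * n_hid + 1
    pos = rng.normal(scale=0.5, size=(particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_f = pos.copy(), np.array([mse(p, X, y, n_hid) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
        pos += vel
        f = np.array([mse(p, X, y, n_hid) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest

# Sweep a toy architecture space (hidden-layer sizes) with both trainers.
for n_hid in (2, 4, 8):
    w_bp = train_bp(X, y, n_hid)
    w_pso = train_pso(X, y, n_hid)
    print(f"hidden={n_hid:2d}  BP MSE={mse(w_bp, X, y, n_hid):.3f}  "
          f"PSO MSE={mse(w_pso, X, y, n_hid):.3f}")

The paper's evaluation sweeps a much larger architecture space and reports training and test performance at several training depths; this sketch only mirrors the overall structure of that comparison.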
URI: https://doi.org/10.1016/j.eswa.2010.05.033
https://hdl.handle.net/20.500.14365/1213
ISSN: 0957-4174
1873-6793
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection

Files in This Item:
File: 236.pdf (Restricted Access)
Size: 700.57 kB
Format: Adobe PDF
SCOPUS Citations: 37 (checked on Oct 9, 2024)
Web of Science Citations: 29 (checked on Oct 9, 2024)
Page view(s): 88 (checked on Oct 7, 2024)
Download(s): 2 (checked on Oct 7, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.