Browsing by Author "Iosifidis, Alexandros"
Now showing 1 - 5 of 5
Article | Citation - WoS: 5 | Citation - Scopus: 5
The Effect of Automated Taxa Identification Errors on Biological Indices
(Pergamon-Elsevier Science Ltd, 2017-04)
Ärje, Johanna; Kärkkäinen, Salme; Meissner, Kristian; Iosifidis, Alexandros; İnce, Türker; Gabbouj, Moncef; Kiranyaz, Serkan

In benthic macroinvertebrate biomonitoring, the goal is to determine the status of ecosystems based on several biological indices. To increase cost-efficiency, computer-based taxa identification from image data has recently been developed. Taxa identification errors can, however, strongly affect the indices and thus the determination of ecological status. To shift the biomonitoring process towards automated expert systems, we need a clear understanding of the bias caused by automation. In this paper, we examine eleven classification methods on macroinvertebrate image data and show how their classification errors propagate into different biological indices. We evaluate 14 richness, diversity, dominance and similarity indices commonly used in biomonitoring. Besides the error rate of each classification method, we discuss the potential effect of different types of identification errors. Finally, we provide recommendations on the indices that are least affected by automatic identification errors and could therefore be used in automated biomonitoring.
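A minimal sketch of the error-propagation idea above (not code from the paper): we push hypothetical taxa abundances through an invented classifier confusion matrix and recompute the Shannon diversity index, a diversity index of the kind the paper evaluates. All counts and error rates below are illustrative assumptions.

```python
import numpy as np

def shannon_index(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over observed taxa."""
    p = counts / counts.sum()
    p = p[p > 0]                      # ignore unobserved taxa
    return -np.sum(p * np.log(p))

# Hypothetical true abundances for six macroinvertebrate taxa.
true_counts = np.array([120.0, 80.0, 40.0, 25.0, 10.0, 5.0])

# Hypothetical confusion matrix: row = true taxon, column = predicted
# taxon; 90% correct, errors spread evenly (each row sums to 1).
confusion = np.full((6, 6), 0.02)
np.fill_diagonal(confusion, 0.90)

# Expected abundances after automated identification: the true
# abundances pushed through the confusion matrix.
pred_counts = true_counts @ confusion

print(f"H' from true labels:      {shannon_index(true_counts):.3f}")
print(f"H' from predicted labels: {shannon_index(pred_counts):.3f}")
```

Even this uniform 10% error rate inflates the index by shifting counts from common to rare taxa (higher apparent evenness), which illustrates why the abstract distinguishes the error rate from the type of identification error.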
Article | Citation - WoS: 17 | Citation - Scopus: 18
Exploiting Heterogeneity in Operational Neural Networks by Synaptic Plasticity
(Springer London Ltd, 2021-01-04)
Kiranyaz, Serkan; Malik, Junaid; Abdallah, Habib Ben; İnce, Türker; Iosifidis, Alexandros; Gabbouj, Moncef

The recently proposed network model, Operational Neural Networks (ONNs), can generalize conventional Convolutional Neural Networks (CNNs), which are homogeneous networks built solely on the linear neuron model. As heterogeneous networks, ONNs are based on a generalized neuron model that can encapsulate any set of non-linear operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. However, the default method for finding optimal operators in ONNs, the so-called Greedy Iterative Search (GIS), usually takes several training sessions to find a single operator set per layer. This is not only computationally demanding, but it also limits network heterogeneity, since the same operator set is then used for all neurons in each layer. To address this deficiency and exploit a superior level of heterogeneity, this study focuses on searching for the best possible operator set(s) for the hidden neurons of the network based on the Synaptic Plasticity paradigm, an essential learning theory in biological neurons. During training, each operator set in the library is evaluated by its synaptic plasticity level and ranked from worst to best, and an elite ONN is then configured using the top-ranked operator sets found at each hidden layer. Experimental results over highly challenging problems demonstrate that elite ONNs, even with few neurons and layers, achieve superior learning performance to GIS-based ONNs, further widening the performance gap over CNNs.

Article | Citation - WoS: 72 | Citation - Scopus: 89
Operational neural networks
(Springer London Ltd, 2020-03-06)
Kiranyaz, Serkan; İnce, Türker; Iosifidis, Alexandros; Gabbouj, Moncef

Feed-forward, fully connected artificial neural networks, the so-called multi-layer perceptrons, are well-known universal approximators. However, their learning performance varies significantly depending on the function or solution space they attempt to approximate, mainly because of their homogeneous configuration based solely on the linear neuron model. Therefore, while they learn very well those problems with a monotonous, relatively simple and linearly separable solution space, they may entirely fail to do so when the solution space is highly non-linear and complex. Since they share the same linear neuron model, with two additional constraints (local connections and weight sharing), this is also true for conventional convolutional neural networks (CNNs); it is therefore not surprising that in many challenging problems only deep CNNs with massive complexity and depth can achieve the required diversity and learning performance. To address this drawback and to accomplish a more generalized model than the convolutional neurons, this study proposes a novel network model, called operational neural networks (ONNs), which can be heterogeneous and encapsulate neurons with any set of operators to boost diversity and to learn highly complex and multi-modal functions or spaces with minimal network complexity and training data. Finally, the training method to back-propagate the error through the operational layers of ONNs is formulated. Experimental results over highly challenging problems demonstrate the superior learning capabilities of ONNs even with few neurons and hidden layers.
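The generalized neuron model behind the two ONN papers above replaces the multiply-accumulate of the linear neuron with a nodal operator and a pool operator. Below is a minimal sketch; the operator library and the tanh activation are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# A small library of candidate nodal operators psi(w, x) and pool
# operators P; 'mul' + 'sum' recovers the conventional linear neuron.
NODAL = {
    "mul": lambda w, x: w * x,
    "sin": lambda w, x: np.sin(w * x),
    "exp": lambda w, x: np.exp(w * x) - 1.0,
}
POOL = {"sum": np.sum, "median": np.median, "max": np.max}

def operational_neuron(x, w, b, nodal="sin", pool="median", act=np.tanh):
    """y = act(P(psi(w_i, x_i)) + b).

    With nodal='mul' and pool='sum' this is the ordinary perceptron
    y = act(w . x + b); other operator choices yield the heterogeneous
    neuron model the abstracts describe.
    """
    z = POOL[pool](NODAL[nodal](w, x))
    return act(z + b)

rng = np.random.default_rng(1)
x, w = rng.standard_normal(8), rng.standard_normal(8)
print(operational_neuron(x, w, b=0.1))                           # sin/median
print(operational_neuron(x, w, b=0.1, nodal="mul", pool="sum"))  # linear
```

An operator search such as GIS, or the synaptic-plasticity ranking described above, then amounts to choosing a (nodal, pool) pair per layer or per neuron from such a library.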
Article | Citation - WoS: 41 | Citation - Scopus: 45
Progressive Operational Perceptrons
(Elsevier, 2017-02)
Kiranyaz, Serkan; İnce, Türker; Iosifidis, Alexandros; Gabbouj, Moncef

There are well-known limitations and drawbacks to the performance and robustness of feed-forward, fully connected Artificial Neural Networks (ANNs), the so-called Multi-Layer Perceptrons (MLPs). In this study we address them with Generalized Operational Perceptrons (GOPs), which consist of neurons with distinct (non-)linear operators, to achieve a generalized model of biological neurons and, ultimately, superior diversity. We modify conventional back-propagation (BP) to train GOPs and, furthermore, propose Progressive Operational Perceptrons (POPs) to achieve self-organized and depth-adaptive GOPs according to the learning problem. The most crucial property of POPs is their ability to simultaneously search for the optimal operator set and train each layer individually. The final POP is therefore formed layer by layer, and we show that this ability enables POPs with minimal network depth to attack the most challenging learning problems that cannot be learned by conventional ANNs, even with a deeper and significantly more complex configuration. Experimental results show that POPs scale up very well with problem size and have the potential to achieve superior generalization performance on real benchmark problems with a significant gain.

Article | Citation - WoS: 64 | Citation - Scopus: 76
Self-Organized Operational Neural Networks With Generative Neurons
(Pergamon-Elsevier Science Ltd, 2021-08)
Kiranyaz, Serkan; Malik, Junaid; Abdallah, Habib Ben; İnce, Türker; Iosifidis, Alexandros; Gabbouj, Moncef

Operational Neural Networks (ONNs) have recently been proposed to address well-known limitations and drawbacks of conventional Convolutional Neural Networks (CNNs), such as network homogeneity with a sole linear neuron model. ONNs are heterogeneous networks with a generalized neuron model. However, the operator search method in ONNs is not only computationally demanding, but the network heterogeneity is also limited, since the same operator set is then used for all neurons in each layer. Moreover, the performance of ONNs directly depends on the operator set library used, which introduces a risk of performance degradation, especially when the optimal operator set required for a particular task is missing from the library. To address these issues and achieve an ultimate level of heterogeneity that boosts network diversity along with computational efficiency, this study proposes Self-organized ONNs (Self-ONNs) with generative neurons that can adapt (optimize) the nodal operator of each connection during training. This ability also eliminates the need for a fixed operator set library and for a prior operator search within the library to find the best possible set of operators. We further formulate the training method to back-propagate the error through the operational layers of Self-ONNs. Experimental results over four challenging problems demonstrate the superior learning capability and computational efficiency of Self-ONNs over conventional ONNs and CNNs.
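The generative neuron can be sketched by letting each connection learn the coefficients of its own nodal operator instead of selecting one from a library. A truncated power-series operator is one natural realization; treat the formulation below as an illustrative assumption rather than the paper's exact definition.

```python
import numpy as np

def generative_neuron(x, W, b, act=np.tanh):
    """Generative-neuron sketch: connection i applies a learnable
    composite nodal operator built as a truncated power series,
        psi_i(x_i) = sum_{q=1..Q} W[i, q-1] * x_i**q,
    so the operator itself is shaped by training rather than drawn
    from a fixed operator-set library.  W has shape (n_inputs, Q).
    """
    Q = W.shape[1]
    powers = np.stack([x**q for q in range(1, Q + 1)], axis=1)  # (n, Q)
    z = np.sum(W * powers)    # pool: summation over connections and terms
    return act(z + b)

rng = np.random.default_rng(2)
x = rng.standard_normal(8)
W = 0.1 * rng.standard_normal((8, 3))   # Q = 3 coefficients per connection
print(generative_neuron(x, W, b=0.0))
```

Since W is just another weight tensor, the same back-propagation that trains an ordinary layer can also optimize each connection's operator, which is what removes the need for a prior operator search.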
