Browsing by Author "Ahishali, Mete"
Now showing 1 - 5 of 5
Conference Object | Citation - Scopus: 3
Audio-Based Anomaly Detection in Industrial Machines Using Deep One-Class Support Vector Data Description (IEEE, 2025)
Kilickaya, Sertac; Ahishali, Mete; Celebioglu, Cansu; Sohrab, Fahad; Eren, Levent; Ince, Turker; Gabbouj, Moncef
The frequent breakdowns and malfunctions of industrial equipment have driven increasing interest in cost-effective and easy-to-deploy sensors, such as microphones, for effective condition monitoring of machinery. Microphones offer a low-cost alternative to widely used condition monitoring sensors, with high bandwidth and the ability to detect subtle anomalies to which other sensors may be less sensitive. In this study, we investigate malfunctioning industrial machines to evaluate and compare anomaly detection performance across different machine types and fault conditions. Log-Mel spectrograms of machinery sound are used as input, and performance is evaluated using the area under the curve (AUC) score for two methods: a baseline dense autoencoder (AE) and one-class deep Support Vector Data Description (deep SVDD) with different subspace dimensions. Our results on the MIMII sound dataset demonstrate that the deep SVDD method with a subspace dimension of 2 provides superior anomaly detection performance, achieving average AUC scores of 0.84, 0.80, and 0.69 for 6 dB, 0 dB, and -6 dB signal-to-noise ratios (SNRs), respectively, compared to 0.82, 0.72, and 0.64 for the baseline model.
Moreover, deep SVDD requires 7.4 times fewer trainable parameters than the baseline dense AE, underscoring its advantage in both effectiveness and computational efficiency.

Article | Citation - WoS: 22 | Citation - Scopus: 26
Classification of Polarimetric SAR Images Using Compact Convolutional Neural Networks (Taylor & Francis Ltd, 2021)
Ahishali, Mete; Kiranyaz, Serkan; İnce, Türker; Gabbouj, Moncef
Classification of polarimetric synthetic aperture radar (PolSAR) images is an active research area with a major role in environmental applications. The traditional Machine Learning (ML) methods proposed in this domain generally focus on utilizing highly discriminative features to improve classification performance, but this task is complicated by the well-known curse of dimensionality phenomenon. Other approaches based on deep Convolutional Neural Networks (CNNs) have certain limitations and drawbacks, such as high computational complexity, the need for an unfeasibly large training set with ground-truth labels, and special hardware requirements. In this work, to address the limitations of traditional ML and deep CNN-based methods, a novel and systematic framework is proposed for the classification of PolSAR images, based on a compact and adaptive implementation of CNNs using a sliding-window classification approach. The proposed approach has three advantages. First, there is no requirement for an extensive feature extraction process. Second, it is computationally efficient due to its compact configurations. In particular, the proposed compact and adaptive CNN model is designed to achieve maximum classification accuracy with minimum training and computational complexity. This is of considerable importance given the high cost of labeling in PolSAR classification. Finally, the proposed approach can perform classification using smaller window sizes than deep CNNs.
Experimental evaluations have been performed on the four most commonly used benchmark PolSAR images: AIRSAR L-Band and RADARSAT-2 C-Band data of the San Francisco Bay and Flevoland areas. The best obtained overall accuracies range between 92.33% and 99.39% for these benchmark study sites.

Conference Object | Citation - WoS: 5 | Citation - Scopus: 7
Comparison of Polarimetric SAR Features for Terrain Classification Using Incremental Training (IEEE, 2017)
İnce, Türker; Ahishali, Mete; Kiranyaz, Serkan
In this study, the most commonly used polarimetric SAR features, including the complete coherency (or covariance) matrix information, features obtained from several coherent and incoherent target decompositions, the backscattering power, and visual texture features, are compared in terms of their classification performance on different terrain classes. For pattern recognition, two powerful machine learning techniques are employed: the Collective Network of Binary Classifiers (CNBC), with incremental training capability, and Support Vector Machines (SVMs). Each feature has its own strengths and weaknesses for discriminating different SAR class types, and this study aims to investigate them through incremental feature-based training of both classifiers, comparing the results of experiments performed on the fully polarimetric San Francisco Bay and Flevoland datasets.

Article | Citation - WoS: 19 | Citation - Scopus: 24
Dual and Single Polarized SAR Image Classification Using Compact Convolutional Neural Networks (MDPI, 2019)
Ahishali, Mete; Kiranyaz, Serkan; İnce, Türker; Gabbouj, Moncef
Accurate land use/land cover classification of synthetic aperture radar (SAR) images plays an important role in environmental, economic, and nature-related research areas and applications. When fully polarimetric SAR data is not available, single- or dual-polarization SAR data can also be used, though this poses certain difficulties.
For instance, traditional Machine Learning (ML) methods generally focus on finding more discriminative features to overcome the lack of information due to single- or dual-polarimetry. Besides conventional ML approaches, studies proposing deep convolutional neural networks (CNNs) come with limitations and drawbacks, such as the need for massive amounts of training data and special hardware for implementing complex deep networks. In this study, we propose a systematic approach based on sliding-window classification with compact and adaptive CNNs that overcomes such drawbacks whilst achieving state-of-the-art performance for land use/land cover classification. The proposed approach eliminates the need for feature extraction and selection processes entirely, and performs classification directly on SAR intensity data. Furthermore, unlike deep CNNs, the proposed approach requires neither dedicated hardware nor a large amount of data with ground-truth labels. The proposed systematic approach is designed to achieve maximum classification accuracy on single- and dual-polarized intensity data with minimum human interaction. Moreover, due to its compact configuration, the proposed approach can process small patches, which is not possible with deep learning solutions; this ability significantly improves the detail in segmentation masks. An extensive set of experiments on two benchmark SAR datasets confirms the superior classification performance and efficient computational complexity of the proposed approach compared to competing methods.

Conference Object | Citation - WoS: 1 | Citation - Scopus: 3
Performance Comparison of Learned vs. Engineered Features for Polarimetric SAR Terrain Classification (IEEE, 2019)
Ahishali, Mete; İnce, Türker; Kiranyaz, Serkan; Gabbouj, Moncef
In this work, we propose to use learned features for terrain classification of Polarimetric Synthetic Aperture Radar (PolSAR) images.
In the proposed classification framework, the learned features are extracted from sliding-window regions using Convolutional Neural Networks (CNNs) and then used for classification with a linear Support Vector Machine (SVM) classifier. The classification performance of the proposed approach is compared with that of numerous target decomposition theorems (TDs) as engineered features, tested with two classifiers: Collective Networks of Binary Classifiers (CNBCs) and SVMs. Experimental evaluations on two commonly used benchmark AIRSAR PolSAR images, San Francisco Bay and Flevoland at L-Band, reveal that the learned CNN features outperform the engineered TD features even though the dimensionality of the learned features is a quarter of that of the engineered features.
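The deep SVDD scoring idea used in the first entry above can be sketched in a few lines: map samples into a low-dimensional subspace, take the center of the encoded normal data, and score test samples by their squared distance to that center. This is a minimal illustrative sketch, not the authors' implementation: the fixed random projection stands in for a trained neural-network encoder, and all data here is synthetic rather than log-Mel spectrograms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random linear projection to a 2-D subspace,
# matching the best subspace dimension reported above. A real deep SVDD
# model would learn this mapping with a neural network; everything here
# is an illustrative assumption.
n_features, subspace_dim = 64, 2
proj = rng.standard_normal((n_features, subspace_dim)) / np.sqrt(n_features)

def encode(x):
    return x @ proj

# "Normal" training data, plus test data whose last sample is a synthetic
# anomaly lying along the first projection direction, so it maps far from
# the center of the normal data.
normal = rng.standard_normal((200, n_features))
anomaly = 10.0 * np.sqrt(n_features) * proj[:, 0]
test = np.vstack([rng.standard_normal((5, n_features)), anomaly])

# SVDD center: mean of the encoded normal samples.
center = encode(normal).mean(axis=0)

# Anomaly score: squared distance to the center in the subspace.
# Ranking these scores against labels is what the AUC evaluates.
scores = np.sum((encode(test) - center) ** 2, axis=1)
```

The last test sample receives by far the largest score, which is the behavior an AUC computation over normal/anomalous labels would reward.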
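Several of the entries above rely on sliding-window classification: every pixel is labeled from the small window centered on it. The mechanics can be sketched as follows, assuming a toy single-channel intensity image and a hypothetical threshold rule in place of the compact CNN the papers actually use.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-channel "SAR intensity" image (synthetic, for illustration).
image = rng.random((32, 32))

def classify_patch(patch):
    # Hypothetical two-class rule: bright patches -> class 1. In the
    # papers above, a compact CNN would produce this label instead.
    return int(patch.mean() > 0.5)

win = 5          # a small window, as the compact-CNN approach permits
half = win // 2
padded = np.pad(image, half, mode="reflect")  # handle image borders

# Label every pixel from the window centered on it.
labels = np.empty(image.shape, dtype=int)
for i in range(image.shape[0]):
    for j in range(image.shape[1]):
        labels[i, j] = classify_patch(padded[i:i + win, j:j + win])
```

Because each pixel gets its own label, the output is a dense segmentation mask the same size as the input; smaller windows preserve finer spatial detail, which is the advantage the compact-CNN papers emphasize.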

