GCRIS
Browsing by Author "Oktar, Yigit"

Now showing 1 - 4 of 4
    Article
    Citation - WoS: 2
    Citation - Scopus: 2
    Evolutionary Simplicial Learning as a Generative and Compact Sparse Framework for Classification
    (Elsevier, 2020) Oktar, Yigit; Turkan, Mehmet
Dictionary learning for sparse representations has been successful in many reconstruction tasks. Simplicial learning is an adaptation of dictionary learning in which subspaces become clipped and acquire arbitrary offsets, taking the form of simplices. This adaptation is achieved through additional constraints on the sparse codes. Furthermore, an evolutionary approach can be chosen to determine the number and the dimensionality of the simplices composing the simplicial, favoring the most generative and compact simplicials. This paper proposes an evolutionary simplicial learning method as a generative and compact sparse framework for classification. The proposed approach is first applied to a one-class classification task, where it appears as the most reliable method within the considered benchmark. The most surprising results are observed when evolutionary simplicial learning is applied to a multi-class classification task. Since sparse representations are generative in nature, they bear a fundamental problem: they cannot distinguish two classes lying on the same subspace. This claim is validated through synthetic experiments, and the superiority of simplicial learning even as a generative-only approach is demonstrated. Simplicial learning loses its superiority over discriminative methods in high-dimensional cases, but it can be further modified with discriminative elements to achieve state-of-the-art performance in classification tasks.
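One natural reading of the "additional constraints on sparse codes" above is that coefficients are restricted to be nonnegative and sum to one, so each signal is modeled as a convex combination of atoms, i.e., a point on a simplex. The numpy sketch below is illustrative only (not the paper's implementation; function names, step sizes, and iteration counts are assumptions) and shows such a constrained coding step via projected gradient descent with a standard sort-based simplex projection:

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {c : c >= 0, sum(c) = 1}, via the standard sort-based rule."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - 1.0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def simplicial_code(x, D, n_iter=300, lr=None):
    """Code x over dictionary D with simplex-constrained coefficients,
    so that D @ c is a convex combination of atoms (a point on a simplex)."""
    if lr is None:
        lr = 1.0 / np.linalg.norm(D, 2) ** 2   # safe step for the quadratic term
    c = np.full(D.shape[1], 1.0 / D.shape[1])  # start at the simplex center
    for _ in range(n_iter):
        grad = D.T @ (D @ c - x)               # gradient of 0.5 * ||x - D c||^2
        c = project_to_simplex(c - lr * grad)  # projected gradient step
    return c
```

The projection is what "clips" the subspace: without it this is ordinary least-squares coding over the span of the atoms.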
    Article
    Citation - WoS: 4
    Citation - Scopus: 4
    K-Polytopes: a Superproblem of K-Means
    (Springer London Ltd, 2019) Oktar, Yigit; Turkan, Mehmet
It has already been proven that, under certain circumstances, dictionary learning for sparse representations is equivalent to conventional k-means clustering. Through additional modifications to the sparse representations, it is possible to generalize the notion of centroids to higher orders. In a related algorithm called k-flats, q-dimensional flats are considered as alternative central prototypes. In the formulation proposed in this paper, the central prototypes are instead simplices, or even more general polytopes. Using higher-dimensional, nonconvex prototypes may alleviate the curse of dimensionality while also enabling the successful modeling of nonlinearly distributed datasets. The proposed framework can further be applied flexibly in supervised settings through one-class learning, and in other nonlinear frameworks through kernels.
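The progression from k-means through k-flats to k-polytopes can be seen entirely in the assignment step: what changes is the prototype to which a point's distance is measured. The minimal numpy sketch below is an illustration, not the paper's implementation; `dist_to_segment` uses a 1-simplex (a line segment), the simplest bounded polytope, where clipping the coefficient is exactly what bounds the flat:

```python
import numpy as np

def dist_to_centroid(x, mu):
    """k-means prototype: a 0-dimensional point."""
    return np.linalg.norm(x - mu)

def dist_to_flat(x, mu, B):
    """k-flats prototype: an unbounded affine flat mu + span(B),
    distance computed via least-squares projection."""
    coeffs, *_ = np.linalg.lstsq(B, x - mu, rcond=None)
    return np.linalg.norm(x - mu - B @ coeffs)

def dist_to_segment(x, a, b):
    """1-simplex prototype: the segment [a, b]; clipping t to [0, 1]
    is what turns the unbounded flat into a bounded polytope."""
    t = np.clip((x - a) @ (b - a) / ((b - a) @ (b - a)), 0.0, 1.0)
    return np.linalg.norm(x - (a + t * (b - a)))
```

Note how a point far beyond an endpoint is still distance zero from the flat through `a` and `b`, but a positive distance from the segment: the clipped prototype does not extrapolate, which is the point of using polytopes.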
    Article
    Citation - WoS: 2
    Citation - Scopus: 2
    Preserving Spatio-Temporal Information in Machine Learning: a Shift-Invariant K-Means Perspective
    (Springer, 2022) Oktar, Yigit; Turkan, Mehmet
In conventional machine learning applications, each data attribute is assumed to be orthogonal to the others. Namely, every pair of dimensions is orthogonal, so no distinction is made among the in-between relations of dimensions. However, this is certainly not the case for real-world signals, which naturally originate from a spatio-temporal configuration. As a result, the conventional vectorization process disrupts all spatio-temporal information about the order/place of data, whether it is 1D, 2D, 3D, or 4D. In this paper, the problem of orthogonality is first investigated through conventional k-means on images, where images are processed as vectors. As a solution, shift-invariant k-means is proposed in a novel framework with the help of sparse representations. A generalization of shift-invariant k-means, convolutional dictionary learning, is then utilized as an unsupervised feature extraction method for classification. Experiments suggest that Gabor feature extraction, as a simulation of shallow convolutional neural networks, provides slightly better performance than convolutional dictionary learning. Other alternatives of convolutional logic are also discussed for spatio-temporal information preservation, including a spatio-temporal hypercomplex encoding scheme.
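The shift-invariant k-means idea can be illustrated in 1D: signals are compared to prototypes up to a circular shift, and cluster members are aligned to their best shift before averaging. The toy Lloyd-style loop below is an illustrative assumption of such a scheme (the paper formulates it through sparse representations; the names and the circular-shift model here are invented for the sketch):

```python
import numpy as np

def shift_invariant_dist(x, proto):
    """Distance between x and a prototype, minimized over all circular
    shifts of the prototype -- the assignment rule of a shift-invariant
    k-means for 1D signals."""
    return min(np.linalg.norm(x - np.roll(proto, s)) for s in range(len(proto)))

def shift_invariant_kmeans(X, k, n_iter=10, seed=0):
    """Toy Lloyd loop: assign each signal by shift-invariant distance,
    then update each prototype from its members after rolling each
    member to its best alignment against the current prototype."""
    rng = np.random.default_rng(seed)
    protos = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(n_iter):
        labels, aligned = [], []
        for x in X:
            # best (distance, cluster, shift) triple for this signal
            d, j, s = min((np.linalg.norm(x - np.roll(protos[j], s)), j, s)
                          for j in range(k) for s in range(X.shape[1]))
            labels.append(j)
            aligned.append(np.roll(x, -s))  # undo the shift before averaging
        labels = np.array(labels)
        for j in range(k):
            members = [a for a, l in zip(aligned, labels) if l == j]
            if members:
                protos[j] = np.mean(members, axis=0)
    return protos, labels
```

Two shifted copies of the same signal have distance zero here, whereas under plain vectorized k-means they can be maximally far apart; this is exactly the spatio-temporal information that vectorization destroys.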
    Review Article
    Citation - WoS: 34
    Citation - Scopus: 40
    A Review of Sparsity-Based Clustering Methods
    (Elsevier, 2018) Oktar, Yigit; Turkan, Mehmet
For the case of high dimensionality, a class of data clustering methods has been proposed that searches suitable subspaces to find inherent clusters. Sparsity-based clustering approaches add a twist to this subspace approach: they incorporate a dimensionality expansion through the use of an overcomplete dictionary representation, thus providing a broader search space in which to utilize subspace clustering at large. However, the sparsity constraint alone does not enforce structured clusters. Through certain stricter constraints, data grouping is possible, and this translates into a type of clustering that depends on the types of constraints. The dual of the sparsity constraint, namely the dictionary, is another aspect of sparsity-based clustering methods. Unlike off-the-shelf or fixed-waveform dictionaries, adaptive dictionaries can additionally be utilized to shape the state-model entity into a more adaptive form. Chained with structured sparsity, adaptive dictionaries force the state-model into well-formed clusters. Subspaces designated by structured sparsity can then be dissolved through recursion to acquire deep sparse structures that correspond to a taxonomy. Finally, such a procedure can be further extended to include various other machine learning perspectives.
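One concrete member of this family is sparse subspace clustering, where each sample is written as a sparse combination of the other samples and the resulting self-expressive codes define an affinity for spectral clustering. A numpy-only sketch using ISTA for the lasso step is given below (illustrative, not from the review; the regularization weight and iteration count are arbitrary assumptions):

```python
import numpy as np

def sparse_codes(X, lam=0.1, n_iter=300):
    """Self-expressive sparse coding (SSC-style): write each column of X
    as a sparse combination of the other columns, via ISTA on the lasso
    objective 0.5 * ||X - X C||_F^2 + lam * ||C||_1."""
    n = X.shape[1]
    C = np.zeros((n, n))
    L = np.linalg.norm(X, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        G = X.T @ (X @ C - X) / L        # gradient step, scaled by 1/L
        C = np.sign(C - G) * np.maximum(np.abs(C - G) - lam / L, 0.0)
        np.fill_diagonal(C, 0.0)         # forbid the trivial self-representation
    return C

def affinity(C):
    """Symmetric affinity from the codes, ready for spectral clustering."""
    return np.abs(C) + np.abs(C).T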