
Browsing by Author "Oktar Y."

Now showing 1 - 2 of 2
    Conference Object
    Classification Via Simplicial Learning
    (IEEE Computer Society, 2020) Oktar Y.; Türkan, Mehmet
    Dictionary learning for sparse representations is generative in nature, so discriminative modifications are commonly introduced for classification problems. Classical dictionary learning has a fundamental limitation: it cannot distinguish two different classes lying on the same subspace, and no discriminative modification can resolve this. This paper proposes an evolutionary simplicial learning method as a generative and compact sparse framework that solves this problem for classification. Simplicial learning adapts conventional dictionary learning so that the subspaces designated by dictionary elements take the form of simplices, through additional constraints on the sparse codes. In addition, an evolutionary approach is developed to determine the dimensionality and the number of simplices composing the simplicial complex. The proposed evolutionary learning is evaluated on multi-class classification tasks with synthetic and handwritten-digit datasets, and its superiority is demonstrated even as a generative-only approach. Simplicial learning loses this advantage over discriminative methods in high-dimensional real-world cases, but it can be further modified with discriminative elements to achieve state-of-the-art classification performance. © 2020 IEEE.
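The key constraint the abstract describes — restricting each sparse code so the represented point lies on a simplex spanned by dictionary atoms, i.e. a convex combination of atoms — can be illustrated with a minimal sketch. This is not the paper's algorithm (the evolutionary procedure for choosing simplex dimensionality and count is not reproduced); `project_to_simplex` and `simplex_code` are hypothetical helpers showing only the simplex-constrained coding step, solved here by projected gradient descent.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    (nonnegative entries summing to 1), via the standard sort-based rule."""
    u = np.sort(v)[::-1]                      # sort descending
    css = np.cumsum(u)
    ks = np.arange(1, len(v) + 1)
    rho = np.nonzero(u + (1.0 - css) / ks > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def simplex_code(D, x, n_iter=200, lr=0.1):
    """Code x as a convex combination of the columns of D: minimize
    ||x - D a||^2 subject to a >= 0, sum(a) = 1 (projected gradient)."""
    a = np.full(D.shape[1], 1.0 / D.shape[1])  # start at simplex center
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = project_to_simplex(a - lr * grad)
    return a
```

Because the code is forced onto the simplex, a point inside the simplex of one class can never be represented exactly by another class's simplex even when both lie in the same linear subspace — which is the distinction plain (unconstrained) dictionary learning cannot make.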
    Conference Object
    Citation - WoS: 1
    Citation - Scopus: 1
    Dictionary Learning With Residual Codes
    (Institute of Electrical and Electronics Engineers Inc., 2017) Oktar Y.; Türkan, Mehmet
    In conventional sparse-representation-based dictionary learning algorithms, initial dictionaries are generally assumed to be proper representatives of the system at hand. However, this may not be the case, especially in systems restricted to random initialization. A supposedly optimal state update based on such an improper model can therefore lead to undesired effects that are conveyed to successive learning iterations. In this paper, we propose a dictionary learning method that includes a general error-correction process: it codes the residual left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly. Experimental observations show that such an additional step vastly improves convergence rates in high-dimensional cases and also yields better converged states under random initialization. The improvements also scale up with more lenient sparsity constraints. © 2017 IEEE.
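The error-correction idea in the abstract — code the signal, then code the leftover residual and fold the correction back into the sparse code — can be sketched as follows. This is an illustrative reading of the abstract, not the authors' published method: `omp_code` is a plain orthogonal-matching-pursuit coder standing in for whatever coding stage the paper uses, and `residual_coded` is a hypothetical wrapper showing only the residual-correction step.

```python
import numpy as np

def omp_code(D, x, sparsity):
    """Plain OMP: greedily pick `sparsity` atoms, refit coefficients by
    least squares over the selected support after each pick."""
    code = np.zeros(D.shape[1])
    residual = x.copy()
    support = []
    for _ in range(sparsity):
        idx = int(np.argmax(np.abs(D.T @ residual)))
        if idx not in support:
            support.append(idx)
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    code[support] = coeffs
    return code

def residual_coded(D, x, sparsity=2):
    """Code x, then code the residual left over from that first (less
    intensive) attempt, and add the correction to the original code."""
    a0 = omp_code(D, x, sparsity)
    r = x - D @ a0                      # residual of the initial attempt
    a_corr = omp_code(D, r, sparsity)   # error-correction pass
    return a0 + a_corr
```

Since the correction pass can only shrink the residual (the zero correction is always feasible), the combined code reconstructs x at least as well as the initial attempt alone.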