Browsing by Author "Oktar Y."
Now showing 1 - 2 of 2
Conference Object
Classification Via Simplicial Learning (IEEE Computer Society, 2020)
Oktar Y.; Türkan, Mehmet
Dictionary learning for sparse representations is generative in nature; hence, discriminative modifications are commonly introduced for classification problems. Classical dictionary learning bears a fundamental problem: it cannot distinguish two different classes lying on the same subspace, and no discriminative modification can resolve this. This paper proposes an evolutionary simplicial learning method as a generative and compact sparse framework that solves the aforementioned problem for classification. Simplicial learning is an adaptation of conventional dictionary learning in which the subspaces designated by dictionary elements take the form of simplices through additional constraints on the sparse codes (a minimal sketch of such a constraint follows the listing below). On top of this, an evolutionary approach is developed to determine the dimensionality and the number of simplices composing the simplicial. The proposed evolutionary learning is evaluated on multi-class classification tasks over synthetic and handwritten-digit datasets, and its superiority even as a generative-only approach is demonstrated. Simplicial learning loses its superiority over discriminative methods in high-dimensional real-world cases but can further be modified with discriminative elements to achieve state-of-the-art classification performance. © 2020 IEEE.

Conference Object · Citation - WoS: 1 · Citation - Scopus: 1
Dictionary Learning With Residual Codes (Institute of Electrical and Electronics Engineers Inc., 2017)
Oktar Y.; Türkan, Mehmet
In conventional sparse-representation-based dictionary learning algorithms, initial dictionaries are generally assumed to be proper representatives of the system at hand. However, this may not be the case, especially in systems restricted to random initialization. Therefore, a supposedly optimal state update based on such an improper model might lead to undesired effects that are conveyed to successive learning iterations. In this paper, we propose a dictionary learning method which includes a general error-correction process that codes the residual left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly (a rough sketch of this two-stage idea also follows below). Experimental observations show that such an additional step vastly improves the rate of convergence in high-dimensional cases and also results in better converged states under random initialization. Improvements also scale up with more lenient sparsity constraints. © 2017 IEEE.
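The first abstract above describes simplex constraints on sparse codes but gives no optimization details. As a rough illustration only, assuming the constraint means each code is nonnegative, sums to one, and is supported on at most k dictionary atoms, a minimal NumPy sketch could look as follows; the names project_to_simplex and simplicial_sparse_code are hypothetical, and the projected-gradient-plus-truncation scheme is one plausible solver, not necessarily the paper's method.

```python
import numpy as np

def project_to_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {a : a >= 0, sum(a) = 1} (Duchi et al., 2008)."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    idx = np.arange(1, len(v) + 1)
    rho = np.nonzero(u * idx > (css - 1.0))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def simplicial_sparse_code(D, x, k=3, n_iter=300):
    """Hypothetical sketch: approximately solve
    min_a ||x - D a||^2  s.t.  a on the simplex with at most k nonzeros,
    via projected gradient descent followed by a top-k restriction."""
    m = D.shape[1]
    lr = 1.0 / (np.linalg.norm(D, 2) ** 2)  # step size from a Lipschitz bound
    a = np.full(m, 1.0 / m)                 # start at the simplex barycenter
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)
        a = project_to_simplex(a - lr * grad)
    support = np.argsort(a)[-k:]            # keep the k largest coefficients
    a_k = np.zeros(m)
    a_k[support] = project_to_simplex(a[support])  # re-project restricted code
    return a_k

# Toy usage: a signal that is a convex combination of a few atoms
# should be coded with small reconstruction error.
rng = np.random.default_rng(0)
D = rng.standard_normal((8, 20))
x = 0.5 * D[:, 3] + 0.3 * D[:, 7] + 0.2 * D[:, 11]
a = simplicial_sparse_code(D, x, k=3)
print(np.linalg.norm(x - D @ a))
```

Restricting a code to k atoms in this way confines the reconstruction to the (k-1)-simplex spanned by those atoms, which matches the geometric picture the abstract appeals to: bounded simplices rather than unbounded subspaces, so two classes sharing a subspace can still be separated.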
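The second abstract sketches a two-stage scheme: a less intensive initial coding attempt, an error-correction pass that codes the leftover residual, and an adjustment of the sparse codes. The abstract does not specify the coding or update algorithms, so the sketch below fills those slots with plain orthogonal matching pursuit and a MOD-style dictionary update purely for illustration; learn_with_residual_codes, the half-budget split k // 2, and the loop counts are all hypothetical choices, not the paper's.

```python
import numpy as np

def omp(D, x, k):
    """Plain orthogonal matching pursuit: greedy k-sparse coding of x over D."""
    residual, support = x.copy(), []
    coeffs = np.zeros(0)
    for _ in range(k):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    a = np.zeros(D.shape[1])
    a[support] = coeffs
    return a

def learn_with_residual_codes(X, n_atoms, k, n_outer=10, seed=0):
    """Hypothetical sketch of the two-stage idea: a less intensive initial
    coding pass, then an error-correction pass that codes the leftover
    residual and merges it into the sparse codes, followed by a simple
    method-of-optimal-directions (MOD) dictionary update."""
    rng = np.random.default_rng(seed)
    n_dim, n_sig = X.shape
    D = rng.standard_normal((n_dim, n_atoms))   # random initialization
    D /= np.linalg.norm(D, axis=0)
    k_init = max(1, k // 2)                     # reduced budget: "less intensive"
    A = np.zeros((n_atoms, n_sig))
    for _ in range(n_outer):
        # stage 1: initial, less intensive coding attempt
        A = np.column_stack([omp(D, X[:, i], k_init) for i in range(n_sig)])
        # stage 2: code the residual left over, then adjust the sparse codes
        R = X - D @ A
        A += np.column_stack([omp(D, R[:, i], k - k_init) for i in range(n_sig)])
        # dictionary update (MOD), then renormalize the atoms
        D = X @ np.linalg.pinv(A)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    return D, A

# Toy usage on synthetic data (purely illustrative):
X = np.random.default_rng(1).standard_normal((16, 50))
D, A = learn_with_residual_codes(X, n_atoms=32, k=4)
print(np.linalg.norm(X - D @ A))
```

The merged code in stage 2 uses at most k nonzeros in total, so the correction step stays within the overall sparsity budget while letting the residual pass repair whatever the cheap initial pass missed, which is one plausible reading of the "error-correction" idea in the abstract.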
