Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14365/3601
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Oktar Y. | - |
dc.contributor.author | Turkan M. | - |
dc.date.accessioned | 2023-06-16T15:00:54Z | - |
dc.date.available | 2023-06-16T15:00:54Z | - |
dc.date.issued | 2017 | - |
dc.identifier.isbn | 9.78151E+12 | - |
dc.identifier.uri | https://doi.org/10.1109/SIU.2017.7960168 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14365/3601 | - |
dc.description | 25th Signal Processing and Communications Applications Conference, SIU 2017 -- 15 May 2017 through 18 May 2017 -- 128703 | en_US |
dc.description.abstract | In conventional sparse representations based dictionary learning algorithms, initial dictionaries are generally assumed to be proper representatives of the system at hand. However, this may not be the case, especially in systems restricted to random initialization. A supposedly optimal state update based on such an improper model can therefore lead to undesired effects that are conveyed to successive learning iterations. In this paper, we propose a dictionary learning method that includes a general error-correction process: it codes the residual left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly. Experimental observations show that such an additional step vastly improves rates of convergence in high-dimensional cases and also yields better converged states under random initialization. The improvements also scale up with more lenient sparsity constraints. © 2017 IEEE. (See the illustrative sketch below the metadata table.) | en_US
dc.language.iso | tr | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.ispartof | 2017 25th Signal Processing and Communications Applications Conference, SIU 2017 | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | dictionary learning | en_US |
dc.subject | residual codes | en_US |
dc.subject | sparse approximation | en_US |
dc.subject | Sparse coding | en_US |
dc.subject | Codes (symbols) | en_US |
dc.subject | Learning algorithms | en_US |
dc.subject | Signal processing | en_US |
dc.subject | Dictionary learning | en_US |
dc.subject | Dictionary learning algorithms | en_US |
dc.subject | Rates of convergence | en_US |
dc.subject | Sparse approximations | en_US |
dc.subject | Sparse representation | en_US |
dc.subject | Sparsity constraints | en_US |
dc.subject | Education | en_US |
dc.title | Dictionary learning with residual codes | en_US |
dc.title.alternative | Artık Nicellerle Sözlük Öğrenimi | en_US
dc.type | Conference Object | en_US |
dc.identifier.doi | 10.1109/SIU.2017.7960168 | - |
dc.identifier.scopus | 2-s2.0-85026326077 | en_US |
dc.authorscopusid | 56560191100 | - |
dc.identifier.wos | WOS:000413813100032 | en_US |
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institution Faculty Member) | en_US
dc.identifier.scopusquality | N/A | - |
dc.identifier.wosquality | N/A | - |
item.grantfulltext | reserved | - |
item.openairetype | Conference Object | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.fulltext | With Fulltext | - |
item.languageiso639-1 | tr | - |
item.cerifentitytype | Publications | - |
crisitem.author.dept | 05.10. Mechanical Engineering | - |
Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection; WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File | Size | Format
---|---|---
2690.pdf (Restricted Access) | 257 kB | Adobe PDF
SCOPUS™ Citations: 1 (checked on Nov 20, 2024)
Web of Science™ Citations: 1 (checked on Nov 20, 2024)
Page view(s): 68 (checked on Nov 18, 2024)
Download(s): 6 (checked on Nov 18, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.