Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14365/3601
Full metadata record
DC Field | Value | Language
dc.contributor.author | Oktar Y. | -
dc.contributor.author | Turkan M. | -
dc.date.accessioned | 2023-06-16T15:00:54Z | -
dc.date.available | 2023-06-16T15:00:54Z | -
dc.date.issued | 2017 | -
dc.identifier.isbn | 9.78151E+12 | -
dc.identifier.uri | https://doi.org/10.1109/SIU.2017.7960168 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14365/3601 | -
dc.description | 25th Signal Processing and Communications Applications Conference, SIU 2017 -- 15 May 2017 through 18 May 2017 -- 128703 | en_US
dc.description.abstract | In conventional sparse-representation-based dictionary learning algorithms, initial dictionaries are generally assumed to be proper representatives of the system at hand. However, this may not be the case, especially in systems restricted to random initialization. A supposedly optimal state update based on such an improper model may therefore introduce undesired effects that are carried over to successive learning iterations. In this paper, we propose a dictionary learning method that includes a general error-correction process: it codes the residual left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly. Experimental observations show that such an additional step vastly improves the rate of convergence in high-dimensional cases and also results in better converged states under random initialization. Improvements also scale up with more lenient sparsity constraints. © 2017 IEEE. | en_US
dc.language.iso | tr | en_US
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US
dc.relation.ispartof | 2017 25th Signal Processing and Communications Applications Conference, SIU 2017 | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | dictionary learning | en_US
dc.subject | residual codes | en_US
dc.subject | sparse approximation | en_US
dc.subject | Sparse coding | en_US
dc.subject | Codes (symbols) | en_US
dc.subject | Learning algorithms | en_US
dc.subject | Signal processing | en_US
dc.subject | Dictionary learning | en_US
dc.subject | Dictionary learning algorithms | en_US
dc.subject | Rates of convergence | en_US
dc.subject | Sparse approximations | en_US
dc.subject | Sparse representation | en_US
dc.subject | Sparsity constraints | en_US
dc.subject | Education | en_US
dc.title | Dictionary learning with residual codes | en_US
dc.title.alternative | Artık Nicellerle Sözlük Öğrenimi | en_US
dc.type | Conference Object | en_US
dc.identifier.doi | 10.1109/SIU.2017.7960168 | -
dc.identifier.scopus | 2-s2.0-85026326077 | en_US
dc.authorscopusid | 56560191100 | -
dc.identifier.wos | WOS:000413813100032 | en_US
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) | en_US
dc.identifier.scopusquality | N/A | -
dc.identifier.wosquality | N/A | -
item.grantfulltext | reserved | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.cerifentitytype | Publications | -
item.openairetype | Conference Object | -
item.fulltext | With Fulltext | -
item.languageiso639-1 | tr | -
crisitem.author.dept | 05.10. Mechanical Engineering | -
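The error-correction idea described in the abstract (sparse-code the residual left over from a light initial learning pass, then fold it back into the sparse codes before further dictionary updates) can be sketched roughly as follows. This is an illustrative toy built from standard components (OMP sparse coding and a MOD-style dictionary update), with invented variable names and sizes; it is not the authors' published algorithm.

```python
# Toy sketch of dictionary learning with a residual re-coding step.
# Assumptions: OMP for sparse coding, MOD-style dictionary update,
# random dictionary initialization, synthetic training data.
import numpy as np

def omp(D, y, k):
    """Orthogonal Matching Pursuit: greedy k-sparse code of y over dictionary D."""
    r, idx = y.copy(), []
    for _ in range(k):
        idx.append(int(np.argmax(np.abs(D.T @ r))))   # most correlated atom
        sub = D[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)  # refit on chosen atoms
        r = y - sub @ coef                              # update residual
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

rng = np.random.default_rng(0)
n, m, k, N = 16, 32, 3, 200
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)           # random initial dictionary, unit atoms
Y = rng.standard_normal((n, N))          # training signals

for it in range(5):
    # initial (less intensive) coding pass
    X = np.column_stack([omp(D, Y[:, j], k) for j in range(N)])
    # residual-correction step: code the leftover residual, adjust the codes
    R = Y - D @ X
    Xr = np.column_stack([omp(D, R[:, j], k) for j in range(N)])
    X = X + Xr                           # adjusted sparse codes (up to 2k nonzeros)
    # MOD-style dictionary update: D = Y X^T (X X^T)^+, then renormalize atoms
    D = Y @ X.T @ np.linalg.pinv(X @ X.T)
    D /= np.linalg.norm(D, axis=0) + 1e-12
```

In this sketch the correction is a second OMP pass on the residual, so each code column ends up with at most 2k nonzeros; the paper's actual adjustment procedure is described only at this level of detail in the abstract.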
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
Files in This Item:
File | Size | Format
2690.pdf (Restricted Access) | 257 kB | Adobe PDF
Scopus Citations: 1 (checked on Sep 25, 2024)
Web of Science Citations: 1 (checked on Sep 25, 2024)
Page view(s): 64 (checked on Sep 30, 2024)
Download(s): 6 (checked on Sep 30, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.