Authors: Oktar, Y.; Türkan, Mehmet
Date accessioned: 2023-06-16
Date available: 2023-06-16
Date of issue: 2017
ISBN: 9.78E+12
DOI: https://doi.org/10.1109/SIU.2017.7960168
Handle: https://hdl.handle.net/20.500.14365/3601
Conference: 25th Signal Processing and Communications Applications Conference (SIU 2017), 15 May 2017 through 18 May 2017; conference code 128703

Abstract: In conventional sparse-representation-based dictionary learning algorithms, the initial dictionary is generally assumed to be a proper representative of the system at hand. However, this may not hold, especially in systems restricted to random initialization. A supposedly optimal state update based on such an improper model can therefore introduce undesired effects that are carried into successive learning iterations. In this paper, we propose a dictionary learning method that includes a general error-correction process: it codes the residual left over from a less intensive initial learning attempt and then adjusts the sparse codes accordingly. Experimental observations show that this additional step vastly improves convergence rates in high-dimensional cases and also yields better converged states under random initialization. The improvements scale up with more lenient sparsity constraints. © 2017 IEEE.

Language: tr (Turkish)
Access rights: info:eu-repo/semantics/closedAccess
Keywords: dictionary learning; residual codes; sparse approximation; sparse coding; sparse representation; codes (symbols); learning algorithms; signal processing; rates of convergence; sparsity constraints; education
Title: Dictionary Learning With Residual Codes
Alternative title: Artık Nicellerle Sözlük Öğrenimi
Type: Conference Object
DOI: 10.1109/SIU.2017.7960168
Scopus ID: 2-s2.0-85026326077
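The abstract describes the method only at a high level, so the following is a minimal NumPy sketch of one plausible reading, not the authors' algorithm: a short, less intensive dictionary learning pass (a MOD-style update is assumed here), followed by coding the leftover residual with the same dictionary and folding that correction back into the sparse codes. The function names `omp` and `learn_with_residual_codes`, the MOD update, and the additive code adjustment are all illustrative assumptions.

```python
import numpy as np

def omp(D, x, k):
    """Greedy orthogonal matching pursuit: code x over dictionary D with at most k atoms."""
    residual = x.copy()
    support = []
    coef = np.zeros(D.shape[1])
    sol = np.zeros(0)
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update the residual.
        sol, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ sol
    coef[support] = sol
    return coef

def learn_with_residual_codes(X, n_atoms, k, init_iters=5, seed=0):
    """Two-stage sketch: brief initial learning, then residual coding as error correction."""
    rng = np.random.default_rng(seed)
    # Random initialization -- the restricted setting the paper targets.
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    # Stage 1: a less intensive initial learning attempt (MOD-style updates, assumed).
    for _ in range(init_iters):
        A = np.column_stack([omp(D, x, k) for x in X.T])
        D = X @ np.linalg.pinv(A)
        D /= np.linalg.norm(D, axis=0) + 1e-12
    # Stage 2: code the residual left over from stage 1, then adjust the sparse codes.
    A = np.column_stack([omp(D, x, k) for x in X.T])
    R = X - D @ A                                        # leftover residual
    A_res = np.column_stack([omp(D, r, k) for r in R.T]) # residual codes
    A = A + A_res                                        # adjusted sparse codes (assumed additive)
    return D, A

# Usage: 200 random training signals in R^16, a 2x overcomplete dictionary.
X = np.random.default_rng(1).standard_normal((16, 200))
D, A = learn_with_residual_codes(X, n_atoms=32, k=4)
print(np.linalg.norm(X - D @ A) / np.linalg.norm(X))    # relative reconstruction error
```

Note that the additive adjustment `A + A_res` can double the number of active atoms per signal; how the paper reconciles the correction with the sparsity constraint is not specified in the abstract.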