Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14365/3568
Full metadata record
DC Field | Value | Language
dc.contributor.author | Ozkan C. | -
dc.contributor.author | Oguz K. | -
dc.date.accessioned | 2023-06-16T15:00:49Z | -
dc.date.available | 2023-06-16T15:00:49Z | -
dc.date.issued | 2021 | -
dc.identifier.isbn | 9.78167E+12 | -
dc.identifier.uri | https://doi.org/10.1109/INISTA52262.2021.9548533 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14365/3568 | -
dc.description | Kocaeli University; Kocaeli University Technopark | en_US
dc.description | 2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 -- 25 August 2021 through 27 August 2021 -- 172175 | en_US
dc.description.abstract | Speech is one of the most studied modalities of emotion recognition. Most studies use one or more labeled data sets that contain multiple emotions to extract and select speech features to be trained by machine learning algorithms. Instead of this multi-class approach, our study focuses on selecting features that best distinguish an emotion from the others. This requires a one-against-all (OAA) binary classification approach. The features that are extracted and selected for the multi-class case are compared to features extracted for seven one-against-all cases using a standard backpropagation feedforward neural network (BFNN). The results show that while OAA distinguishes some of the emotions better than the multi-class BFNN configurations, this is not true for all cases. However, when the multi-class BFNN is tested with all emotions, the error rate is as high as 16.48. © 2021 IEEE. | en_US
dc.language.iso | en | en_US
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US
dc.relation.ispartof | 2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 - Proceedings | en_US
dc.rights | info:eu-repo/semantics/closedAccess | en_US
dc.subject | Artificial neural network | en_US
dc.subject | Feature selection | en_US
dc.subject | Speech emotion recognition | en_US
dc.subject | Feedforward neural networks | en_US
dc.subject | Learning algorithms | en_US
dc.subject | Machine learning | en_US
dc.subject | Speech | en_US
dc.subject | Speech recognition | en_US
dc.subject | Binary Classification Approach | en_US
dc.subject | Data set | en_US
dc.subject | Emotion recognition | en_US
dc.subject | Error rate | en_US
dc.subject | Features selection | en_US
dc.subject | Labeled data | en_US
dc.subject | Machine learning algorithms | en_US
dc.subject | Neural network configurations | en_US
dc.subject | Speech features | en_US
dc.subject | Feature extraction | en_US
dc.title | Selecting emotion specific speech features to distinguish one emotion from others | en_US
dc.type | Conference Object | en_US
dc.identifier.doi | 10.1109/INISTA52262.2021.9548533 | -
dc.identifier.scopus | 2-s2.0-85116665979 | en_US
dc.authorscopusid | 57289727300 | -
dc.relation.publicationcategory | Conference Object - International - Institutional Faculty Member | en_US
dc.identifier.scopusquality | N/A | -
dc.identifier.wosquality | N/A | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | -
item.grantfulltext | reserved | -
item.fulltext | With Fulltext | -
item.languageiso639-1 | en | -
item.openairetype | Conference Object | -
crisitem.author.dept | 05.05. Computer Engineering | -
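The abstract describes recasting a seven-emotion multi-class problem as seven one-against-all (OAA) binary problems, one per target emotion. A minimal sketch of that label transformation is below; the emotion names and sample labels are illustrative placeholders, not the paper's data or code.

```python
# One-against-all (OAA) relabeling: each emotion becomes its own binary
# problem (1 = target emotion, 0 = all other emotions). The seven-emotion
# list here is an assumption for illustration only.
EMOTIONS = ["anger", "boredom", "disgust", "fear", "happiness", "neutral", "sadness"]

def make_oaa_labels(labels, target):
    """Convert multi-class labels into binary labels for one OAA case."""
    return [1 if label == target else 0 for label in labels]

# Toy multi-class labels for four hypothetical speech samples.
labels = ["anger", "fear", "anger", "neutral"]

# Build all seven binary label vectors, one per OAA case.
oaa = {emotion: make_oaa_labels(labels, emotion) for emotion in EMOTIONS}
# e.g. oaa["anger"] == [1, 0, 1, 0]
```

Each of the seven binary label vectors would then drive its own feature selection and BFNN training run, which is what allows emotion-specific features to emerge.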
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Files in This Item:
File | Size | Format | Access
2659.pdf | 960.1 kB | Adobe PDF | Restricted Access
SCOPUS™ Citations: 2 (checked on Nov 20, 2024)
Page view(s): 64 (checked on Nov 18, 2024)
Download(s): 8 (checked on Nov 18, 2024)

Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.