Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14365/3568
Full metadata record
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Ozkan C. | - |
dc.contributor.author | Oguz K. | - |
dc.date.accessioned | 2023-06-16T15:00:49Z | - |
dc.date.available | 2023-06-16T15:00:49Z | - |
dc.date.issued | 2021 | - |
dc.identifier.isbn | 9.78167E+12 | - |
dc.identifier.uri | https://doi.org/10.1109/INISTA52262.2021.9548533 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14365/3568 | - |
dc.description | Kocaeli University;Kocaeli University Technopark | en_US |
dc.description | 2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 -- 25 August 2021 through 27 August 2021 -- 172175 | en_US |
dc.description.abstract | Speech is one of the most studied modalities in emotion recognition. Most studies use one or more labeled data sets that contain multiple emotions to extract and select speech features to be trained by machine learning algorithms. Instead of this multi-class approach, our study focuses on selecting the features that best distinguish an emotion from the others. This requires a one-against-all (OAA) binary classification approach. The features extracted and selected for the multi-class case are compared to the features extracted for seven one-against-all cases using a standard backpropagation feedforward neural network (BFNN). The results show that while OAA distinguishes some of the emotions better than the multi-class BFNN configurations, this is not true for all cases. Moreover, when the multi-class BFNN is tested with all emotions, the error rate is as high as 16.48. © 2021 IEEE. | en_US |
dc.language.iso | en | en_US |
dc.publisher | Institute of Electrical and Electronics Engineers Inc. | en_US |
dc.relation.ispartof | 2021 International Conference on INnovations in Intelligent SysTems and Applications, INISTA 2021 - Proceedings | en_US |
dc.rights | info:eu-repo/semantics/closedAccess | en_US |
dc.subject | Artificial neural network | en_US |
dc.subject | Feature selection | en_US |
dc.subject | Speech emotion recognition | en_US |
dc.subject | Feedforward neural networks | en_US |
dc.subject | Learning algorithms | en_US |
dc.subject | Machine learning | en_US |
dc.subject | Speech | en_US |
dc.subject | Speech recognition | en_US |
dc.subject | Binary Classification Approach | en_US |
dc.subject | Data set | en_US |
dc.subject | Emotion recognition | en_US |
dc.subject | Error rate | en_US |
dc.subject | Features selection | en_US |
dc.subject | Labeled data | en_US |
dc.subject | Machine learning algorithms | en_US |
dc.subject | Neural network configurations | en_US |
dc.subject | Speech features | en_US |
dc.subject | Feature extraction | en_US |
dc.title | Selecting emotion specific speech features to distinguish one emotion from others | en_US |
dc.type | Conference Object | en_US |
dc.identifier.doi | 10.1109/INISTA52262.2021.9548533 | - |
dc.identifier.scopus | 2-s2.0-85116665979 | en_US |
dc.authorscopusid | 57289727300 | - |
dc.relation.publicationcategory | Konferans Öğesi - Uluslararası - Kurum Öğretim Elemanı (Conference Item - International - Institutional Faculty Member) | en_US |
dc.identifier.scopusquality | N/A | - |
dc.identifier.wosquality | N/A | - |
item.cerifentitytype | Publications | - |
item.openairecristype | http://purl.org/coar/resource_type/c_18cf | - |
item.grantfulltext | reserved | - |
item.fulltext | With Fulltext | - |
item.languageiso639-1 | en | - |
item.openairetype | Conference Object | - |
crisitem.author.dept | 05.05. Computer Engineering | - |
Appears in Collections: | Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection |
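The abstract above describes a one-against-all (OAA) setup in which a separate binary classifier, with its own feature selection, is trained to separate each of seven emotions from all others, and then compared against a multi-class baseline. The sketch below only illustrates that general idea and is not the authors' implementation: the emotion labels, feature dimensionality, number of selected features, network size, and the use of scikit-learn's SelectKBest and MLPClassifier are all assumptions, and the data is a synthetic stand-in for real extracted speech features.

```python
# Illustrative sketch (not the paper's code): one-against-all (OAA) emotion
# classification with per-emotion feature selection and a feedforward neural network.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical emotion labels; the record does not list the actual classes.
EMOTIONS = ["anger", "boredom", "disgust", "fear", "happiness", "sadness", "neutral"]

rng = np.random.default_rng(0)
X = rng.normal(size=(700, 40))           # placeholder for extracted speech features
y = rng.integers(0, len(EMOTIONS), 700)  # placeholder emotion labels

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# One binary classifier per emotion: each learns to separate its emotion from the rest,
# selecting the features that best distinguish that single emotion.
for idx, emotion in enumerate(EMOTIONS):
    y_bin_train = (y_train == idx).astype(int)
    y_bin_test = (y_test == idx).astype(int)
    model = make_pipeline(
        StandardScaler(),
        SelectKBest(f_classif, k=20),  # per-emotion feature selection (k is an assumption)
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    )
    model.fit(X_train, y_bin_train)
    print(f"{emotion}: OAA test accuracy = {model.score(X_test, y_bin_test):.3f}")
```

For the multi-class comparison mentioned in the abstract, the same pipeline would instead be fit once on the full label vector `y`, so that the per-emotion OAA results can be compared against a single multi-class BFNN.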
Files in This Item:
File | Size | Format |
---|---|---|
2659.pdf (Restricted Access) | 960.1 kB | Adobe PDF |
Scopus™ Citations: 2 (checked on Nov 20, 2024)
Page view(s): 64 (checked on Nov 18, 2024)
Download(s): 8 (checked on Nov 18, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.