Ozkan, C.; Oguz, K.
2023-06-16
2021
ISBN: 9.78E+12
https://doi.org/10.1109/INISTA52262.2021.9548533
https://hdl.handle.net/20.500.14365/3568
Kocaeli University; Kocaeli University Technopark
2021 International Conference on INnovations in Intelligent SysTems and Applications (INISTA 2021), 25 August 2021 through 27 August 2021, pp. 172-175

Abstract: Speech is one of the most studied modalities in emotion recognition. Most studies use one or more labeled data sets containing multiple emotions to extract and select speech features, which are then used to train machine learning algorithms. Instead of this multi-class approach, our study focuses on selecting the features that best distinguish one emotion from the others. This requires a one-against-all (OAA) binary classification approach. The features extracted and selected for the multi-class case are compared to the features extracted for seven one-against-all cases, using a standard backpropagation feedforward neural network (BFNN). The results show that while OAA distinguishes some of the emotions better than the multi-class BFNN configurations, this is not true for all cases. However, when the multi-class BFNN is tested with all emotions, the error rate is as high as 16.48. © 2021 IEEE.

Language: en
Access: info:eu-repo/semantics/closedAccess
Keywords: Artificial neural network; Feature selection; Speech emotion recognition; Feedforward neural networks; Learning algorithms; Machine learning; Speech; Speech recognition; Binary classification approach; Data set; Emotion recognition; Error rate; Labeled data; Machine learning algorithms; Neural network configurations; Speech features; Feature extraction
Title: Selecting Emotion Specific Speech Features To Distinguish One Emotion From Others
Type: Conference Object
DOI: 10.1109/INISTA52262.2021.9548533
Scopus ID: 2-s2.0-85116665979
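
The one-against-all setup described in the abstract can be sketched as follows: each of the seven emotions gets its own binary data set, in which samples of the target emotion are labeled positive and all other emotions negative. This is a minimal illustrative sketch only; the emotion names and the `oaa_labels` helper are assumptions, not taken from the paper.

```python
# Hedged sketch of one-against-all (OAA) relabeling for speech emotion
# recognition. The emotion list below is an illustrative assumption; the
# paper only states that there are seven OAA cases.
EMOTIONS = ["anger", "boredom", "disgust", "fear",
            "happiness", "sadness", "neutral"]

def oaa_labels(labels, target):
    """Relabel multi-class labels as binary: 1 = target emotion, 0 = all others."""
    return [1 if y == target else 0 for y in labels]

# One binary label set per emotion; each would train its own binary
# classifier (a BFNN in the paper) on emotion-specific selected features.
multi_class_labels = ["anger", "fear", "anger", "neutral"]
binary_sets = {e: oaa_labels(multi_class_labels, e) for e in EMOTIONS}
print(binary_sets["anger"])  # → [1, 0, 1, 0]
```

Feature selection can then be run separately on each binary set, so the retained features are those that separate one emotion from the rest rather than those that separate all emotions jointly.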