GCRIS

Browsing by Author "Ozkan C."

Now showing 1 - 1 of 1
    Conference Object
    Citation - Scopus: 2
    Selecting Emotion Specific Speech Features To Distinguish One Emotion From Others
    (Institute of Electrical and Electronics Engineers Inc., 2021) Ozkan C.; Oguz K.
Speech is one of the most studied modalities in emotion recognition. Most studies use one or more labeled data sets containing multiple emotions to extract and select speech features, which are then trained with machine learning algorithms. Instead of this multi-class approach, our study focuses on selecting the features that best distinguish one emotion from the others. This requires a one-against-all (OAA) binary classification approach. The features extracted and selected for the multi-class case are compared to features extracted for seven one-against-all cases using a standard backpropagation feedforward neural network (BFNN). The results show that while OAA distinguishes some of the emotions better than the multi-class BFNN configurations, this is not true for all cases. However, when the multi-class BFNN is tested with all emotions, the error rate is as high as 16.48. © 2021 IEEE.
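The one-against-all setup the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: the speech features and labels below are synthetic placeholders, and scikit-learn's MLPClassifier stands in for the paper's backpropagation feedforward neural network (BFNN), wrapped so that one binary classifier is trained per emotion.

```python
# One-against-all (OAA) sketch: one binary backpropagation-trained
# feedforward net per emotion, each separating that emotion from the rest.
# Feature values and labels are synthetic; the paper's actual speech
# features and network configuration are not given in the abstract.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.multiclass import OneVsRestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_emotions = 7                        # seven OAA cases, as in the abstract
X = rng.normal(size=(700, 20))        # placeholder "speech features"
y = rng.integers(0, n_emotions, 700)  # placeholder emotion labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# OneVsRestClassifier fits n_emotions binary MLPs under the hood.
oaa = OneVsRestClassifier(
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0))
oaa.fit(X_train, y_train)
pred = oaa.predict(X_test)
print(len(oaa.estimators_), pred.shape)
```

Each of the seven binary nets can also be inspected individually, which is what makes per-emotion feature selection possible in the OAA scheme.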