Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14365/1445
Title: Speech emotion recognition: Emotional models, databases, features, preprocessing methods, supporting modalities, and classifiers
Authors: Akcay, Mehmet Berkehan; Oguz, Kaya
Keywords: Speech emotion recognition; Survey; Speech features; Classification; Speech databases; Voice Quality; Communicating Emotion; Spectral Features; Neural-Networks; Valence; Expression; Arousal; Adversarial; Audio
Publisher: Elsevier
Abstract: Speech is the most natural way of expressing ourselves as humans. It is only natural then to extend this communication medium to computer applications. We define speech emotion recognition (SER) systems as a collection of methodologies that process and classify speech signals to detect the embedded emotions. SER is not a new field; it has been around for over two decades and has regained attention thanks to recent advancements. These novel studies make use of the advances in all fields of computing and technology, making it necessary to have an update on the current methodologies and techniques that make SER possible. We have identified and discussed distinct areas of SER, provided a detailed survey of the current literature of each, and listed the current challenges.
URI: https://doi.org/10.1016/j.specom.2019.12.001
https://hdl.handle.net/20.500.14365/1445
ISSN: 0167-6393; 1872-7182
Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection; WoS İndeksli Yayınlar Koleksiyonu / WoS Indexed Publications Collection
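The abstract above frames SER as a pipeline that extracts features from speech signals and classifies them into emotions. As a purely illustrative sketch (not taken from the paper), the snippet below pools frame-level MFCCs into a fixed-length utterance vector and fits a support vector classifier; the use of librosa and scikit-learn, the file names, and the labels are assumptions made for this example, and the survey itself reviews many other feature sets, preprocessing methods, and classifiers.

```python
# Minimal illustrative SER sketch (not from the paper): spectral features
# pooled per utterance, then a conventional classifier.
# Assumes librosa and scikit-learn; paths and labels below are placeholders.
import numpy as np
import librosa
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def utterance_features(path, sr=16000, n_mfcc=13):
    """Load one utterance and pool frame-level MFCCs into a single vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # shape: (n_mfcc, frames)
    # Simple statistics pooling: per-coefficient mean and standard deviation.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder corpus: wav paths and categorical emotion labels, e.g. from one
# of the acted, elicited, or natural databases this kind of survey reviews.
train_paths = ["happy_01.wav", "angry_01.wav"]
train_labels = ["happy", "angry"]

X = np.vstack([utterance_features(p) for p in train_paths])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, train_labels)
# clf.predict(utterance_features("unseen.wav").reshape(1, -1)) would then map
# an unseen utterance to one of the emotion categories.
```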
Files in This Item:
| File | Size | Format | Access |
|---|---|---|---|
| 492.pdf (Restricted Access) | 1.16 MB | Adobe PDF | View/Open / Request a copy |
Scopus Citations: 509 (checked on Nov 20, 2024)
Web of Science Citations: 323 (checked on Nov 20, 2024)
Page view(s): 130 (checked on Nov 18, 2024)
Download(s): 6 (checked on Nov 18, 2024)
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.