Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14279/19335
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Partaourides, Harris | - |
dc.contributor.author | Papadamou, Kostantinos | - |
dc.contributor.author | Kourtellis, Nicolas | - |
dc.contributor.author | Leontiades, Ilias | - |
dc.contributor.author | Chatzis, Sotirios P. | - |
dc.date.accessioned | 2020-11-09T08:09:04Z | - |
dc.date.available | 2020-11-09T08:09:04Z | - |
dc.date.issued | 2020-05-14 | - |
dc.identifier.citation | IEEE International Conference on Acoustics, Speech and Signal Processing, 4-8 May 2020, Barcelona, Spain | en_US |
dc.identifier.isbn | 978-1-5090-6631-5 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14279/19335 | - |
dc.description.abstract | Attention networks constitute the state-of-the-art paradigm for capturing long temporal dynamics. This paper examines the efficacy of this paradigm in the challenging task of emotion recognition in dyadic conversations. In this work, we introduce a novel attention mechanism capable of inferring the magnitude of the effect of each past utterance on the current speaker's emotional state. The proposed self-attention network captures the correlation patterns among consecutive encoder network states, thus enabling the robust and effective modeling of temporal dynamics over arbitrarily long temporal horizons. We demonstrate the effectiveness of our approach on the challenging IEMOCAP benchmark. We show that our devised methodology outperforms state-of-the-art alternatives and commonly used approaches, giving rise to promising new research directions in the context of Online Social Network (OSN) analysis tasks. | en_US |
dc.format | en_US | |
dc.language.iso | en | en_US |
dc.rights | © IEEE. | en_US |
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | * |
dc.subject | Deep Learning | en_US |
dc.subject | Emotion Recognition | en_US |
dc.subject | Self-Attention | en_US |
dc.title | A Self-Attentive Emotion Recognition Network | en_US |
dc.type | Conference Papers | en_US |
dc.collaboration | Cyprus University of Technology | en_US |
dc.collaboration | Telefonica Research | en_US |
dc.collaboration | Samsung | en_US |
dc.subject.category | Computer and Information Sciences | en_US |
dc.country | Cyprus | en_US |
dc.subject.field | Natural Sciences | en_US |
dc.publication | Peer Reviewed | en_US |
dc.relation.conference | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | en_US |
dc.identifier.doi | 10.1109/ICASSP40776.2020.9054762 | en_US |
cut.common.academicyear | 2019-2020 | en_US |
item.openairetype | conferenceObject | - |
item.cerifentitytype | Publications | - |
item.fulltext | No Fulltext | - |
item.grantfulltext | none | - |
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | - |
item.languageiso639-1 | en | - |
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | - |
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | - |
crisitem.author.faculty | Faculty of Engineering and Technology | - |
crisitem.author.faculty | Faculty of Engineering and Technology | - |
crisitem.author.orcid | 0000-0002-8555-260X | - |
crisitem.author.orcid | 0000-0002-4956-4013 | - |
crisitem.author.parentorg | Faculty of Engineering and Technology | - |
crisitem.author.parentorg | Faculty of Engineering and Technology | - |
Appears in Collections: | Conference publications (Δημοσιεύσεις σε συνέδρια) / Conference papers, posters, or presentations |
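
The abstract above describes a self-attention mechanism that weighs the influence of each past utterance on the current speaker's emotional state. The sketch below is a minimal, generic example of a causally masked self-attention layer over per-utterance encoder states; it is not the authors' published architecture, and all module names, dimensions, and the emotion-class count are assumptions for illustration only.

```python
# Illustrative sketch only: generic scaled dot-product self-attention over
# per-utterance encoder states with a causal mask, so each utterance attends
# only to itself and past utterances. Not the paper's exact model.
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtteranceSelfAttention(nn.Module):
    def __init__(self, d_model=256, n_classes=4):  # dimensions are assumed
        super().__init__()
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, states):
        # states: (batch, seq_len, d_model) -- one encoder state per utterance
        q, k, v = self.query(states), self.key(states), self.value(states)
        scores = q @ k.transpose(-2, -1) / (states.size(-1) ** 0.5)
        # Causal mask: the prediction for the current utterance may only use
        # the current and preceding utterances in the dialogue.
        seq_len = states.size(1)
        causal = torch.tril(torch.ones(seq_len, seq_len, device=states.device)).bool()
        scores = scores.masked_fill(~causal, float("-inf"))
        attn = F.softmax(scores, dim=-1)       # weight assigned to each past utterance
        context = attn @ v                     # (batch, seq_len, d_model)
        return self.classifier(context), attn  # per-utterance emotion logits

# Example: a dialogue of 8 utterances with 256-dim encoder states
logits, weights = UtteranceSelfAttention()(torch.randn(1, 8, 256))
```

The attention weights returned here play the role the abstract attributes to the proposed mechanism: they quantify how strongly each past utterance contributes to the emotion prediction for the current one.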
This item is licensed under a Creative Commons License