Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/19335
DC Field | Value | Language
dc.contributor.author | Partaourides, Harris | -
dc.contributor.author | Papadamou, Kostantinos | -
dc.contributor.author | Kourtellis, Nicolas | -
dc.contributor.author | Leontiades, Ilias | -
dc.contributor.author | Chatzis, Sotirios P. | -
dc.date.accessioned | 2020-11-09T08:09:04Z | -
dc.date.available | 2020-11-09T08:09:04Z | -
dc.date.issued | 2020-05-14 | -
dc.identifier.citation | IEEE International Conference on Acoustics, Speech and Signal Processing, 4-8 May 2020, Barcelona, Spain | en_US
dc.identifier.isbn | 978-1-5090-6631-5 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/19335 | -
dc.description.abstract | Attention networks constitute the state-of-the-art paradigm for capturing long temporal dynamics. This paper examines the efficacy of this paradigm in the challenging task of emotion recognition in dyadic conversations. In this work, we introduce a novel attention mechanism capable of inferring the magnitude of the effect of each past utterance on the current speaker's emotional state. The proposed self-attention network captures the correlation patterns among consecutive encoder network states, thus enabling the robust and effective modeling of temporal dynamics over arbitrarily long temporal horizons. We demonstrate the effectiveness of our approach on the challenging IEMOCAP benchmark. We show that our devised methodology outperforms state-of-the-art alternatives and commonly used approaches, giving rise to promising new research directions in the context of Online Social Network (OSN) analysis tasks. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.rights | © IEEE. | en_US
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | Deep Learning | en_US
dc.subject | Emotion Recognition | en_US
dc.subject | Self-Attention | en_US
dc.title | A Self-Attentive Emotion Recognition Network | en_US
dc.type | Conference Papers | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.collaboration | Telefonica Research | en_US
dc.collaboration | Samsung | en_US
dc.subject.category | Computer and Information Sciences | en_US
dc.country | Cyprus | en_US
dc.subject.field | Natural Sciences | en_US
dc.publication | Peer Reviewed | en_US
dc.relation.conference | IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP) | en_US
dc.identifier.doi | 10.1109/ICASSP40776.2020.9054762 | en_US
cut.common.academicyear | 2019-2020 | en_US
item.openairetype | conferenceObject | -
item.cerifentitytype | Publications | -
item.fulltext | No Fulltext | -
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | -
item.languageiso639-1 | en | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.orcid | 0000-0002-8555-260X | -
crisitem.author.orcid | 0000-0002-4956-4013 | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
Appears in Collections: Conference Publications / Conference papers or poster or presentation
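The abstract describes a self-attention mechanism that weighs each past utterance's influence on the current speaker's emotional state by correlating consecutive encoder states. The paper's exact architecture is not given in this record, so the following is only a minimal illustrative sketch of causal scaled dot-product self-attention over a sequence of utterance encodings; the function name, dimensions, and masking scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_self_attention(H):
    """Illustrative causal self-attention over utterance encodings.

    H: array of shape (T, d) -- one d-dimensional encoder state per
    utterance in the dialogue, in temporal order.
    Returns the attention-weighted context vectors and the weights,
    where weights[t, s] reflects how strongly utterance s (s <= t)
    influences the representation at time t.
    """
    d = H.shape[-1]
    # Pairwise correlation of encoder states, scaled by sqrt(d).
    scores = H @ H.T / np.sqrt(d)
    # Causal mask: an utterance may only attend to itself and the past.
    future = np.triu(np.ones_like(scores, dtype=bool), k=1)
    scores[future] = -np.inf
    weights = softmax(scores, axis=-1)
    return weights @ H, weights

# Toy example: 4 utterances with 8-dimensional encoder states.
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 8))
context, w = causal_self_attention(H)
```

In this sketch, row `t` of `w` is a distribution over utterances `0..t`, so the first utterance can only attend to itself; the paper's mechanism additionally infers per-utterance effect magnitudes, which this generic formulation does not model.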
This item is licensed under a Creative Commons License.