Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/8190
DC Field | Value | Language
dc.contributor.author | Chatzis, Sotirios P. | -
dc.contributor.other | Χατζής, Σωτήριος Π. | -
dc.date.accessioned | 2016-01-18T08:51:17Z | -
dc.date.available | 2016-01-18T08:51:17Z | -
dc.date.issued | 2015 | -
dc.identifier.citation | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2015, Porto, Portugal, 7-11 September | en_US
dc.identifier.uri | https://hdl.handle.net/20.500.14279/8190 | -
dc.description.abstract | Recurrent neural networks (RNNs) have recently gained renewed attention from the machine learning community as effective methods for modeling variable-length sequences. Language modeling, handwriting recognition, and speech recognition are only a few of the application domains where RNN-based models have achieved the state-of-the-art performance currently reported in the literature. Typically, RNN architectures utilize simple linear, logistic, or softmax output layers to perform data modeling and prediction generation. In this work, for the first time in the literature, we consider using a sparse Bayesian regression or classification model as the output layer of RNNs, inspired by the automatic relevance determination (ARD) technique. The notion of ARD is to continually create new components while detecting when a component starts to overfit, where overfitting manifests itself as a precision hyperparameter posterior tending to infinity. This way, our method manages to train sparse RNN models, where the number of effective ("active") recurrently connected hidden units is selected in a data-driven fashion, as part of the model inference procedure. We develop efficient and scalable training algorithms for our model under the stochastic variational inference paradigm, and derive elegant predictive density expressions with computational costs comparable to conventional RNN formulations. We evaluate our approach considering its application to challenging tasks dealing with both regression and classification problems, and exhibit its favorable performance over the state-of-the-art. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.subject | Recurrent neural networks | en_US
dc.subject | RNN | en_US
dc.subject | Bayesian regression | en_US
dc.title | Sparse Bayesian recurrent neural networks | en_US
dc.type | Conference Papers | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.subject.category | Computer and Information Sciences | en_US
dc.review | Peer Reviewed | en
dc.country | Cyprus | en_US
dc.subject.field | Natural Sciences | en_US
dc.relation.conference | European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases | en_US
dc.dept.handle | 123456789/134 | en
cut.common.academicyear | 2019-2020 | en_US
item.grantfulltext | open | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | -
item.openairetype | conferenceObject | -
item.fulltext | With Fulltext | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.orcid | 0000-0002-4956-4013 | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
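The ARD mechanism described in the abstract, a per-weight precision hyperparameter whose estimate diverging to infinity signals an unused component, can be illustrated with a minimal, hypothetical sketch. The sketch below applies ARD to a plain sparse Bayesian linear regression layer using MacKay's fixed-point updates; it is not the paper's method (which couples a sparse Bayesian output layer to an RNN and trains it with stochastic variational inference), and all function names, constants, and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ard_prune(Phi, t, beta=1e4, n_iter=50, prune_at=1e8):
    """Sparse Bayesian linear regression with ARD (RVM-style sketch).

    Each weight w_i gets its own Gaussian prior precision alpha_i.
    Re-estimating alpha_i = gamma_i / m_i^2 drives the precision of
    irrelevant weights toward infinity; such weights are pruned,
    mirroring the "precision posterior tending to infinity" criterion
    described in the abstract. beta is a fixed noise precision.
    """
    M = Phi.shape[1]
    alpha = np.ones(M)
    active = np.arange(M)  # indices of still-"active" basis functions
    for _ in range(n_iter):
        Phi_a = Phi[:, active]
        # Gaussian posterior over the active weights: N(m, Sigma)
        Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Phi_a.T @ Phi_a)
        m = beta * Sigma @ Phi_a.T @ t
        # gamma_i in [0, 1] measures how well-determined weight i is by
        # the data; for an unused weight, alpha_i = gamma_i / m_i^2 diverges.
        gamma = 1.0 - alpha[active] * np.diag(Sigma)
        alpha[active] = gamma / (m ** 2 + 1e-300)
        active = active[alpha[active] < prune_at]
    # Recompute the posterior mean for the surviving weights.
    Phi_a = Phi[:, active]
    Sigma = np.linalg.inv(np.diag(alpha[active]) + beta * Phi_a.T @ Phi_a)
    m = beta * Sigma @ Phi_a.T @ t
    return active, m

# Synthetic demo: only columns 0 and 3 of the design matrix matter.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((200, 5))
w_true = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
t = Phi @ w_true
active, m = ard_prune(Phi, t)
print(active, m)  # columns 1, 2, 4 are pruned; 0 and 3 survive
```

The same pruning logic, applied to the output-layer weights attached to each recurrent hidden unit, is what would let the number of effective hidden units be selected in a data-driven fashion.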
Appears in collections: Conference papers or poster or presentation
Files in this item:
File | Size | Format
Chatzis.pdf | 176.51 kB | Adobe PDF | View/Open

Page views: 396 (last week: 0, last month: 3), checked on 6 Nov 2024
Downloads: 555, checked on 6 Nov 2024



All items on this site are protected by copyright.