Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/12013
DC Field | Value | Language
dc.contributor.author | Chatzis, Sotirios P. | -
dc.date.accessioned | 2018-07-17T06:43:26Z | -
dc.date.available | 2018-07-17T06:43:26Z | -
dc.date.issued | 2018-10-27 | -
dc.identifier.citation | Neurocomputing, 2018, vol. 312, pp. 210-217 | en_US
dc.identifier.issn | 09252312 | -
dc.description.abstract | Hidden Markov models (HMMs) are a popular approach for modeling continuous sequential data, typically based on the assumption of Gaussian-distributed observations. A significant issue that HMMs with Gaussian conditional densities are confronted with is the effective modeling of high-dimensional observations without becoming prone to overfitting or singularities. To this end, one can resort to extracting lower-dimensional latent variable representations of the observed high-dimensional data as part of the inference algorithm of the postulated HMM. Factor analysis (FA) is a well-established linear latent variable scheme that can be employed for this purpose; it models the covariances between the elements of multivariate observations under a set of linear assumptions. Recently, it has been proposed that FA can be effectively generalized under an efficient large-margin Bayesian inference perspective, namely maximum entropy discrimination (MED). This work capitalizes on these recent findings to derive an effective HMM-driven sequential data modeling framework for high-dimensional data. Our proposed approach extracts lower-dimensional latent variable representations of observed high-dimensional data, taking into account the large-margin principle. On this basis, it postulates that the data temporal dynamics are conditioned on the inferred values of these latent variables. We devise efficient mean-field inference algorithms for our model and demonstrate its advantages through a set of experiments. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Neurocomputing | en_US
dc.rights | © Elsevier | en_US
dc.subject | Hidden Markov models | en_US
dc.subject | Large-margin principle | en_US
dc.subject | Maximum-entropy discrimination | en_US
dc.subject | Mean-field inference | en_US
dc.subject | Latent variable representation | en_US
dc.title | Latent subspace modeling of sequential data under the maximum entropy discrimination framework | en_US
dc.type | Article | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.subject.category | Computer and Information Sciences | en_US
dc.journals | Subscription | en_US
dc.country | Cyprus | en_US
dc.subject.field | Engineering and Technology | en_US
dc.publication | Peer Reviewed | en_US
dc.identifier.doi | 10.1016/j.neucom.2018.05.101 | en_US
dc.relation.volume | 312 | en_US
cut.common.academicyear | 2018-2019 | en_US
dc.identifier.spage | 210 | en_US
dc.identifier.epage | 217 | en_US
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.openairetype | article | -
item.cerifentitytype | Publications | -
item.grantfulltext | none | -
item.languageiso639-1 | en | -
item.fulltext | No Fulltext | -
crisitem.journal.journalissn | 0925-2312 | -
crisitem.journal.publisher | Elsevier | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.orcid | 0000-0002-4956-4013 | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
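
To make the construction described in the abstract concrete, below is a minimal sketch, in generic notation, of a factor-analysis observation model embedded in an HMM over the latent factors. The symbols (latent factors z_t, loading matrix W, mean mu, diagonal noise covariance Psi, state-specific latent means m_k and covariances Sigma_k, transition matrix A) are illustrative assumptions, not the paper's notation; the paper's actual model additionally applies the large-margin MED principle to the latent-subspace inference and relies on mean-field updates, none of which is captured by this generic formulation.

  % Generic FA-within-HMM sketch (assumed notation; not the paper's exact formulation):
  %   x_t : observed high-dimensional vector,  z_t : low-dimensional latent factors,
  %   s_t : discrete hidden state,  W : factor loadings,  \Psi : diagonal noise covariance.
  \begin{align*}
    p(s_1 = k) &= \pi_k, \qquad p(s_t = k \mid s_{t-1} = j) = A_{jk} \\
    p(z_t \mid s_t = k) &= \mathcal{N}(z_t \mid m_k, \Sigma_k) \\
    p(x_t \mid z_t) &= \mathcal{N}(x_t \mid W z_t + \mu, \Psi)
  \end{align*}
  % Marginalizing z_t under state k yields a Gaussian with structured covariance:
  % p(x_t \mid s_t = k) = \mathcal{N}(x_t \mid W m_k + \mu, \; W \Sigma_k W^{\top} + \Psi).

The appeal of this kind of structure is that, with q << d latent factors, each state's effective observation covariance W Sigma_k W^T + Psi is parameterized by on the order of dq + d values rather than the d(d+1)/2 of a full covariance matrix, which is what mitigates the overfitting and singularity issues the abstract mentions for high-dimensional observations.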
Appears in Collections: Άρθρα/Articles
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.