Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/19400
DC Field | Value | Language
dc.contributor.author | Theodosiou, Zenonas | -
dc.contributor.author | Partaourides, Harris | -
dc.contributor.author | Tolga, Atun | -
dc.contributor.author | Panayi, Simoni | -
dc.contributor.author | Lanitis, Andreas | -
dc.date.accessioned | 2020-11-13T10:50:38Z | -
dc.date.available | 2020-11-13T10:50:38Z | -
dc.date.issued | 2020-04-10 | -
dc.identifier.citation | 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications, 27-29 February 2020, Valletta, Malta | en_US
dc.identifier.isbn | 978-989758402-2 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/19400 | -
dc.description.abstract | Egocentric vision, which relates to the continuous interpretation of images captured by wearable cameras, is increasingly being utilized in several applications to enhance citizens' quality of life, especially for those with visual or motion impairments. The development of sophisticated egocentric computer vision techniques requires the automatic analysis of large databases of first-person point-of-view visual data collected through wearable devices. In this paper, we present our initial findings regarding the use of wearable cameras for enhancing pedestrians' safety while walking on city sidewalks. For this purpose, we create a first-person database that contains annotations of common barriers that may put pedestrians in danger. Furthermore, we derive a framework for collecting visual lifelogging data and define 24 different categories of sidewalk barriers. Our dataset consists of 1796 annotated images covering 1969 instances of barriers. The analysis of the dataset by means of object classification algorithms yields encouraging results that warrant further study. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.rights | © SCITEPRESS CC BY-NC-ND 4.0 | en_US
dc.rights.uri | http://creativecommons.org/licenses/by-nc-nd/4.0/ | *
dc.subject | Dataset | en_US
dc.subject | Egocentric Vision | en_US
dc.subject | First-person View | en_US
dc.subject | Pedestrians | en_US
dc.subject | Safety | en_US
dc.subject | Visual Lifelogging | en_US
dc.title | A first-person database for detecting barriers for pedestrians | en_US
dc.type | Conference Papers | en_US
dc.link | https://www.scitepress.org/PublicationsDetail.aspx?ID=n9imSw1d0GY%3d&t=1 | en_US
dc.collaboration | Research Center on Interactive Media, Smart Systems and Emerging Technologies | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.subject.category | Computer and Information Sciences | en_US
dc.country | Cyprus | en_US
dc.subject.field | Natural Sciences | en_US
dc.publication | Peer Reviewed | en_US
dc.relation.conference | International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | en_US
cut.common.academicyear | 2019-2020 | en_US
item.grantfulltext | open | -
item.languageiso639-1 | en | -
item.cerifentitytype | Publications | -
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | -
item.openairetype | conferenceObject | -
item.fulltext | With Fulltext | -
crisitem.author.dept | Department of Communication and Internet Studies | -
crisitem.author.dept | Department of Electrical Engineering, Computer Engineering and Informatics | -
crisitem.author.dept | Department of Multimedia and Graphic Arts | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.faculty | Faculty of Fine and Applied Arts | -
crisitem.author.orcid | 0000-0003-3168-2350 | -
crisitem.author.orcid | 0000-0002-8555-260X | -
crisitem.author.orcid | 0000-0001-6841-8065 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
crisitem.author.parentorg | Faculty of Fine and Applied Arts | -
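
The abstract above notes that the dataset was analysed by means of object classification algorithms, but the record does not reproduce that code. Purely as an illustrative sketch (not the authors' method), the snippet below shows how a 24-class sidewalk-barrier classification baseline might be set up in PyTorch; the barriers/train directory layout (one subfolder per barrier category), the ResNet-18 backbone, and all hyperparameters are assumptions, not details taken from the paper.

```python
# Hypothetical baseline: fine-tune the head of an ImageNet-pretrained
# ResNet-18 on 24 sidewalk-barrier categories (count taken from the abstract).
import torch
from torch import nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 24  # the 24 barrier categories defined in the paper

# Standard preprocessing for ImageNet-pretrained backbones.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: barriers/train/<category_name>/<image>.jpg
train_set = datasets.ImageFolder("barriers/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad_(False)  # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new 24-way head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single epoch, for illustration
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Training only the new head over a frozen backbone is a common baseline for a dataset of this size (1796 images); the paper itself may use a different model or training regime.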
Appears in Collections: Conference publications / Conference papers or poster or presentation
Files in This Item:
File | Description | Size | Format
VISAPP_2020_205.pdf | Fulltext | 3.05 MB | Adobe PDF
Page view(s): 409 (last week: 0, last month: 6), checked on Nov 6, 2024
Download(s): 220, checked on Nov 6, 2024

This item is licensed under a Creative Commons License (CC BY-NC-ND 4.0).