Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/28637
DC Field | Value | Language
dc.contributor.author | Perikleous, Periklis | -
dc.contributor.author | Kafkalias, Andreas | -
dc.contributor.author | Theodosiou, Zenonas | -
dc.contributor.author | Barlas, Pinar | -
dc.contributor.author | Christoforou, Evgenia | -
dc.contributor.author | Otterbacher, Jahna | -
dc.contributor.author | Demartini, Gianluca | -
dc.contributor.author | Lanitis, Andreas | -
dc.date.accessioned | 2023-03-20T19:57:06Z | -
dc.date.available | 2023-03-20T19:57:06Z | -
dc.date.issued | 2022-10-17 | -
dc.identifier.citation | 31st ACM International Conference on Information & Knowledge Management, 2022, 17–22 October, Atlanta, Georgia, USA | en_US
dc.identifier.isbn | 9781450392365 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/28637 | -
dc.description.abstract | It is increasingly easy for interested parties to play a role in the development of predictive algorithms, with a range of available tools and platforms for building datasets, as well as for training and evaluating machine learning (ML) models. For this reason, it is essential to raise awareness among practitioners of ethical challenges, such as the presence of social bias in training data. We present RECANT (Raising Awareness of Social Bias in Crowdsourced Training Data), a tool that allows users to explore the behaviors of four biometric models, which predict the gender and race, as well as the perceived attractiveness and trustworthiness, of the person depicted in an input image. These models have been trained on a crowdsourced dataset of passport-style images of people, where crowd annotators described attributes of the images and reported their own demographic characteristics. With RECANT, users can explore the correct and wrong predictions made by each model when different subsets of the data, selected based on annotator attributes, are used in training. We present its features, along with sample exercises, as a hands-on tool for raising awareness of potential pitfalls in data practices surrounding ML. | en_US
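As a rough illustration of the mechanism described in the abstract (retraining a model on annotator-defined subsets of the crowdsourced labels and comparing its predictions on a common test set), the sketch below shows one way such a comparison could be set up. It is not RECANT's code: the dataset file, column names, and classifier choice are hypothetical assumptions made only for illustration.

```python
# Minimal sketch (not RECANT's actual implementation): compare how a model's
# accuracy changes when it is trained only on labels from a given annotator group.
# The CSV layout and all column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical crowdsourced annotations: one row per annotated image, with
# image features, the crowd label, and the annotator's own demographics.
df = pd.read_csv("annotations.csv")  # columns: feat_0..feat_n, label, annotator_gender
feature_cols = [c for c in df.columns if c.startswith("feat_")]

def accuracy_for_subset(train_df: pd.DataFrame, test_df: pd.DataFrame) -> float:
    """Train on one annotator-defined subset and evaluate on a fixed test set."""
    model = LogisticRegression(max_iter=1000)
    model.fit(train_df[feature_cols], train_df["label"])
    return accuracy_score(test_df["label"], model.predict(test_df[feature_cols]))

# Hold out a common test set so the subset comparisons are on equal footing.
train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

# Compare models trained on annotations from different annotator groups.
for group, subset in train_df.groupby("annotator_gender"):
    print(f"trained on annotations by {group}: "
          f"accuracy = {accuracy_for_subset(subset, test_df):.3f}")
```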
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 International License. | en_US
dc.subject | Algorithmic bias | en_US
dc.subject | Biometrics | en_US
dc.subject | Crowdsourcing | en_US
dc.subject | Data bias | en_US
dc.subject | Education | en_US
dc.title | How Does the Crowd Impact the Model? A Tool for Raising Awareness of Social Bias in Crowdsourced Training Data | en_US
dc.type | Conference Papers | en_US
dc.collaboration | CYENS - Centre of Excellence | en_US
dc.collaboration | University of Queensland | en_US
dc.subject.category | Electrical Engineering - Electronic Engineering - Information Engineering | en_US
dc.country | Cyprus | en_US
dc.country | Australia | en_US
dc.subject.field | Engineering and Technology | en_US
dc.publication | Peer Reviewed | en_US
dc.relation.conference | ACM International Conference on Information and Knowledge Management | en_US
dc.identifier.doi | 10.1145/3511808.3557178 | en_US
dc.identifier.scopus | 2-s2.0-85140847674 | -
dc.identifier.url | https://api.elsevier.com/content/abstract/scopus_id/85140847674 | -
cut.common.academicyear | 2022-2023 | en_US
item.fulltext | With Fulltext | -
item.cerifentitytype | Publications | -
item.grantfulltext | open | -
item.openairecristype | http://purl.org/coar/resource_type/c_c94f | -
item.openairetype | conferenceObject | -
item.languageiso639-1 | en | -
crisitem.author.dept | Department of Communication and Internet Studies | -
crisitem.author.dept | Department of Multimedia and Graphic Arts | -
crisitem.author.faculty | Faculty of Communication and Media Studies | -
crisitem.author.faculty | Faculty of Fine and Applied Arts | -
crisitem.author.orcid | 0000-0003-3168-2350 | -
crisitem.author.orcid | 0000-0001-6841-8065 | -
crisitem.author.parentorg | Faculty of Communication and Media Studies | -
crisitem.author.parentorg | Faculty of Fine and Applied Arts | -
Appears in Collections: Δημοσιεύσεις σε συνέδρια / Conference papers or poster or presentation
Files in This Item:
File | Size | Format
3511808.3557178.pdf | 1.4 MB | Adobe PDF

SCOPUS Citations: 1 (checked on Nov 6, 2023)
Page view(s): 147 (Last Week: 1, Last month: 9; checked on May 11, 2024)
Download(s): 46 (checked on May 11, 2024)

Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.