Please use this identifier to cite or link to this item: https://hdl.handle.net/20.500.14279/9536
DC Field | Value | Language
dc.contributor.author | Burnap, Alex | -
dc.contributor.author | Ren, Yi | -
dc.contributor.author | Gerth, Richard | -
dc.contributor.author | Papazoglou, Giannis | -
dc.contributor.author | Gonzalez, Richard | -
dc.contributor.author | Papalambros, Panos Y. | -
dc.date.accessioned | 2017-02-08T09:47:50Z | -
dc.date.available | 2017-02-08T09:47:50Z | -
dc.date.issued | 2015-03-01 | -
dc.identifier.citation | Journal of Mechanical Design, Transactions of the ASME, 2015, vol. 137, no. 3, pp. 1-9 | en_US
dc.identifier.issn | 1050-0472 | -
dc.identifier.uri | https://hdl.handle.net/20.500.14279/9536 | -
dc.description.abstract | Crowdsourced evaluation is a promising method of evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a massive and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since averaging evaluations across all evaluators will result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts such that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts have consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist along with the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may not be adequate for engineering design tasks, accordingly calling for further research into methods of finding experts within the crowd. | en_US
dc.format | pdf | en_US
dc.language.iso | en | en_US
dc.relation.ispartof | Journal of Mechanical Design, Transactions of the ASME | en_US
dc.rights | © ASME | en_US
dc.subject | Crowd consensus | en_US
dc.subject | Crowdsourcing | en_US
dc.subject | Design evaluation | en_US
dc.subject | Evaluator expertise | en_US
dc.title | When crowdsourcing fails: A study of expertise on crowdsourced design evaluation | en_US
dc.type | Article | en_US
dc.doi | 10.1115/1.4029065 | en_US
dc.collaboration | University of Michigan | en_US
dc.collaboration | TARDEC-NAC | en_US
dc.collaboration | Cyprus University of Technology | en_US
dc.subject.category | Electrical Engineering - Electronic Engineering - Information Engineering | en_US
dc.journals | Subscription | en_US
dc.country | Cyprus | en_US
dc.country | United States | en_US
dc.subject.field | Engineering and Technology | en_US
dc.publication | Peer Reviewed | en_US
dc.identifier.doi | 10.1115/1.4029065 | en_US
dc.relation.issue | 3 | en_US
dc.relation.volume | 137 | en_US
cut.common.academicyear | 2015-2016 | en_US
dc.identifier.epage | 9 | en_US
item.fulltext | No Fulltext | -
item.languageiso639-1 | en | -
item.grantfulltext | none | -
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | -
item.cerifentitytype | Publications | -
item.openairetype | article | -
crisitem.author.dept | Department of Mechanical Engineering and Materials Science and Engineering | -
crisitem.author.faculty | Faculty of Engineering and Technology | -
crisitem.author.parentorg | Faculty of Engineering and Technology | -
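The abstract contrasts simple averaging of crowd evaluations with a crowd consensus model that gives experts' evaluations more weight. The sketch below illustrates that contrast only; the iterative consistency-based reweighting, the function names, and the simulated data are assumptions for illustration, not the specific model benchmarked in the paper.

```python
# Illustrative sketch only: simple averaging vs. an expertise-weighted consensus.
# The reweighting scheme (up-weight evaluators who agree with the running
# consensus) is a generic stand-in, NOT the paper's crowd consensus model.
import numpy as np


def average_consensus(scores):
    """Baseline: treat every evaluator equally (mean score per design)."""
    return scores.mean(axis=0)


def weighted_consensus(scores, n_iter=20, eps=1e-6):
    """Hypothetical iterative reweighting: evaluators whose scores track the
    current consensus get larger weights, mimicking 'give experts more say'."""
    n_evaluators, _ = scores.shape
    weights = np.ones(n_evaluators) / n_evaluators
    for _ in range(n_iter):
        consensus = weights @ scores                          # weighted estimate per design
        error = np.mean((scores - consensus) ** 2, axis=1)    # each evaluator's disagreement
        weights = 1.0 / (error + eps)                         # consistent evaluators -> more weight
        weights /= weights.sum()
    return consensus, weights


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_scores = rng.uniform(0, 10, size=8)                    # hypothetical ground-truth design scores
    experts = true_scores + rng.normal(0, 0.3, size=(3, 8))     # small low-noise "expert" cluster
    novices = rng.uniform(0, 10, size=(12, 8))                  # uninformative evaluators
    scores = np.vstack([experts, novices])

    consensus, weights = weighted_consensus(scores)
    print("averaging error:", np.abs(average_consensus(scores) - true_scores).mean())
    print("weighted error :", np.abs(consensus - true_scores).mean())
```

Note the failure mode the abstract reports: a scheme like this rewards consistency, so a cluster of consistently wrong evaluators would also be up-weighted, which is why a consensus model relying only on the evaluations themselves may not suffice.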
Appears in Collections: Άρθρα/Articles
SCOPUS™ Citations: 49 (checked on 9 Nov 2023)
Web of Science™ Citations: 42 (Last Week: 0, Last Month: 0; checked on 29 Oct 2023)
Page view(s): 479 (Last Week: 1, Last Month: 30; checked on 14 Mar 2025)