Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14279/9536
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Burnap, Alex | - |
dc.contributor.author | Ren, Yi | - |
dc.contributor.author | Gerth, Richard | - |
dc.contributor.author | Papazoglou, Giannis | - |
dc.contributor.author | Gonzalez, Richard | - |
dc.contributor.author | Papalambros, Panos Y. | - |
dc.date.accessioned | 2017-02-08T09:47:50Z | - |
dc.date.available | 2017-02-08T09:47:50Z | - |
dc.date.issued | 2015-03-01 | - |
dc.identifier.citation | Journal of Mechanical Design, Transactions of the ASME, 2015, vol. 137, no. 3, pp. 1-9 | en_US |
dc.identifier.issn | 1050-0472 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14279/9536 | - |
dc.description.abstract | Crowdsourced evaluation is a promising method of evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a massive and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since averaging evaluations across all evaluators will result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts such that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts have consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist along with the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may not be adequate for engineering design tasks, accordingly calling for further research into methods of finding experts within the crowd. | en_US |
dc.format | - | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Journal of Mechanical Design, Transactions of the ASME | en_US |
dc.rights | © ASME | en_US |
dc.subject | Crowd consensus | en_US |
dc.subject | Crowdsourcing | en_US |
dc.subject | Design evaluation | en_US |
dc.subject | Evaluator expertise | en_US |
dc.title | When crowdsourcing fails: A study of expertise on crowdsourced design evaluation | en_US |
dc.type | Article | en_US |
dc.doi | 10.1115/1.4029065 | en_US |
dc.collaboration | University of Michigan | en_US |
dc.collaboration | TARDEC-NAC | en_US |
dc.collaboration | Cyprus University of Technology | en_US |
dc.subject.category | Electrical Engineering - Electronic Engineering - Information Engineering | en_US |
dc.journals | Subscription | en_US |
dc.country | Cyprus | en_US |
dc.country | United States | en_US |
dc.subject.field | Engineering and Technology | en_US |
dc.publication | Peer Reviewed | en_US |
dc.identifier.doi | 10.1115/1.4029065 | en_US |
dc.relation.issue | 3 | en_US |
dc.relation.volume | 137 | en_US |
cut.common.academicyear | 2015-2016 | en_US |
dc.identifier.epage | 9 | en_US |
item.fulltext | No Fulltext | - |
item.languageiso639-1 | en | - |
item.grantfulltext | none | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.cerifentitytype | Publications | - |
item.openairetype | article | - |
crisitem.author.dept | Department of Mechanical Engineering and Materials Science and Engineering | - |
crisitem.author.faculty | Faculty of Engineering and Technology | - |
crisitem.author.parentorg | Faculty of Engineering and Technology | - |
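The abstract above contrasts plain averaging with a crowd consensus model that identifies experts and gives their evaluations more weight. The following is a minimal, hypothetical Python sketch, not the authors' implementation from the paper: it assumes the paper's premise that only experts evaluate consistently, and the evaluator counts, noise levels, and inverse-disagreement weighting scheme are illustrative assumptions.

```python
# Hypothetical sketch: plain averaging vs. a consistency-weighted consensus,
# under the assumption (from the abstract) that only experts have consistent,
# low-noise evaluations. All parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)

n_items, n_evaluators, n_experts = 20, 30, 5
true_scores = rng.uniform(0.0, 10.0, size=n_items)

# Experts evaluate with small noise; the rest of the crowd with large noise.
noise_sd = np.where(np.arange(n_evaluators) < n_experts, 0.3, 3.0)
evals = true_scores[:, None] + rng.normal(0.0, noise_sd, size=(n_items, n_evaluators))

# Baseline: average every evaluator equally.
avg_estimate = evals.mean(axis=1)

# Consensus: weight each evaluator by the inverse of their mean absolute
# disagreement with the crowd mean, so consistent evaluators count more.
crowd_mean = evals.mean(axis=1, keepdims=True)
disagreement = np.abs(evals - crowd_mean).mean(axis=0)
weights = 1.0 / (disagreement + 1e-9)
weights /= weights.sum()
weighted_estimate = evals @ weights

print("averaging RMSE:", np.sqrt(np.mean((avg_estimate - true_scores) ** 2)))
print("weighted  RMSE:", np.sqrt(np.mean((weighted_estimate - true_scores) ** 2)))
```

As the abstract notes, the consistency assumption behind such weighting can fail in practice: a cluster of consistently wrong evaluators would also receive high weight under this scheme, which is why the paper calls for further research into identifying experts within the crowd.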
Appears in Collections: | Άρθρα/Articles |
SCOPUS™ Citations: 49 (checked on 9 Nov 2023)
Web of Science™ Citations: 42 | Last Week: 0 | Last Month: 0 (checked on 29 Oct 2023)
Page view(s): 479 | Last Week: 1 | Last Month: 30 (checked on 14 Mar 2025)
All items on this website are protected by copyright.