Please use this identifier to cite or link to this item:
https://hdl.handle.net/20.500.14279/9536
DC Field | Value | Language |
---|---|---|
dc.contributor.author | Burnap, Alex | - |
dc.contributor.author | Ren, Yi | - |
dc.contributor.author | Gerth, Richard | - |
dc.contributor.author | Papazoglou, Giannis | - |
dc.contributor.author | Gonzalez, Richard | - |
dc.contributor.author | Papalambros, Panos Y. | - |
dc.date.accessioned | 2017-02-08T09:47:50Z | - |
dc.date.available | 2017-02-08T09:47:50Z | - |
dc.date.issued | 2015-03-01 | - |
dc.identifier.citation | Journal of Mechanical Design, Transactions of the ASME, 2015, vol. 137, no. 3, pp. 1-9 | en_US |
dc.identifier.issn | 1050-0472 | - |
dc.identifier.uri | https://hdl.handle.net/20.500.14279/9536 | - |
dc.description.abstract | Crowdsourced evaluation is a promising method of evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a massive and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since averaging evaluations across all evaluators will result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts such that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts have consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist along with the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may not be adequate for engineering design tasks, accordingly calling for further research into methods of finding experts within the crowd. | en_US |
dc.format | - | en_US |
dc.language.iso | en | en_US |
dc.relation.ispartof | Journal of Mechanical Design, Transactions of the ASME | en_US |
dc.rights | © ASME | en_US |
dc.subject | Crowd consensus | en_US |
dc.subject | Crowdsourcing | en_US |
dc.subject | Design evaluation | en_US |
dc.subject | Evaluator expertise | en_US |
dc.title | When crowdsourcing fails: A study of expertise on crowdsourced design evaluation | en_US |
dc.type | Article | en_US |
dc.doi | 10.1115/1.4029065 | en_US |
dc.collaboration | University of Michigan | en_US |
dc.collaboration | TARDEC-NAC | en_US |
dc.collaboration | Cyprus University of Technology | en_US |
dc.subject.category | Electrical Engineering - Electronic Engineering - Information Engineering | en_US |
dc.journals | Subscription | en_US |
dc.country | Cyprus | en_US |
dc.country | United States | en_US |
dc.subject.field | Engineering and Technology | en_US |
dc.publication | Peer Reviewed | en_US |
dc.identifier.doi | 10.1115/1.4029065 | en_US |
dc.relation.issue | 3 | en_US |
dc.relation.volume | 137 | en_US |
cut.common.academicyear | 2015-2016 | en_US |
dc.identifier.epage | 9 | en_US |
item.grantfulltext | none | - |
item.languageiso639-1 | en | - |
item.cerifentitytype | Publications | - |
item.openairecristype | http://purl.org/coar/resource_type/c_6501 | - |
item.openairetype | article | - |
item.fulltext | No Fulltext | - |
crisitem.author.dept | Department of Mechanical Engineering and Materials Science and Engineering | - |
crisitem.author.faculty | Faculty of Engineering and Technology | - |
crisitem.author.parentorg | Faculty of Engineering and Technology | - |
Appears in Collections: | Άρθρα/Articles |
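The abstract contrasts unweighted averaging with a crowd consensus model that upweights evaluators whose evaluations are consistent, and reports that clusters of consistently wrong evaluators can defeat a model that relies on consistency alone. A minimal, hypothetical sketch of that contrast follows; it is not the model benchmarked in the paper, and all group sizes, noise levels, and the inverse-variance weighting scheme are illustrative assumptions.

```python
# Illustrative sketch (not the paper's model): compare simple averaging with a
# consistency-weighted consensus on simulated design evaluations.
import numpy as np

rng = np.random.default_rng(0)

n_designs = 20
true_scores = rng.uniform(0, 10, n_designs)            # hypothetical ground truth

# Three evaluator groups: experts, random evaluators, and a consistently wrong cluster.
experts = true_scores + rng.normal(0, 0.3, (5, n_designs))
random_evals = rng.uniform(0, 10, (15, n_designs))
wrong_bias = rng.uniform(-4, 4, n_designs)              # shared systematic error
wrong_cluster = true_scores + wrong_bias + rng.normal(0, 0.3, (8, n_designs))

evals = np.vstack([experts, random_evals, wrong_cluster])

# Baseline: unweighted averaging across all evaluators.
avg_estimate = evals.mean(axis=0)

# Consistency-weighted consensus: iteratively weight each evaluator by the
# inverse variance of their disagreement with the current consensus.
consensus = evals.mean(axis=0)
for _ in range(20):
    resid_var = ((evals - consensus) ** 2).mean(axis=1) + 1e-6
    weights = 1.0 / resid_var
    consensus = (weights[:, None] * evals).sum(axis=0) / weights.sum()

rmse = lambda est: np.sqrt(((est - true_scores) ** 2).mean())
print(f"averaging RMSE : {rmse(avg_estimate):.2f}")
print(f"consensus RMSE : {rmse(consensus):.2f}")
# The consistently wrong cluster is internally coherent, so a scheme that
# rewards consistency alone can lock onto it -- the failure mode the abstract
# describes for consensus models that rely only on the evaluations themselves.
```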
SCOPUS™ Citations: 49 (checked on Nov 9, 2023)
WEB OF SCIENCE™ Citations: 42 (checked on Oct 29, 2023)
Page view(s): 432 (checked on Nov 6, 2024)
Items in KTISIS are protected by copyright, with all rights reserved, unless otherwise indicated.