Please use this identifier to cite or link to this item: http://ktisis.cut.ac.cy/handle/10488/9536
Title: When crowdsourcing fails: A study of expertise on crowdsourced design evaluation
Authors: Burnap, Alex; Papalambros, Panos Y.
Issue Date: 1-Jan-2015
Publisher: American Society of Mechanical Engineers
Source: Journal of Mechanical Design, Transactions of the ASME, 2015, Volume 137, Issue 3, Article number 031101
Abstract: Crowdsourced evaluation is a promising method of evaluating engineering design attributes that require human input. The challenge is to correctly estimate scores using a massive and diverse crowd, particularly when only a small subset of evaluators has the expertise to give correct evaluations. Since averaging evaluations across all evaluators will result in an inaccurate crowd evaluation, this paper benchmarks a crowd consensus model that aims to identify experts such that their evaluations may be given more weight. Simulation results indicate this crowd consensus model outperforms averaging when it correctly identifies experts in the crowd, under the assumption that only experts have consistent evaluations. However, empirical results from a real human crowd indicate this assumption may not hold even on a simple engineering design evaluation task, as clusters of consistently wrong evaluators are shown to exist along with the cluster of experts. This suggests that both averaging evaluations and a crowd consensus model that relies only on evaluations may not be adequate for engineering design tasks, accordingly calling for further research into methods of finding experts within the crowd.
URI: http://ktisis.cut.ac.cy/handle/10488/9536
ISSN: 1050-0472
Rights: © 2015 by ASME
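The abstract contrasts plain averaging with a consensus model that upweights evaluators it identifies as experts. As a rough illustration only, below is a minimal sketch of an agreement-weighted consensus heuristic in that spirit; it is not the specific model benchmarked in the paper, and all data, function names, and parameters are hypothetical. The synthetic crowd also reproduces the failure mode the abstract describes: a cluster of consistently wrong evaluators can attract high weight when consistency alone is rewarded.

```python
import numpy as np

def consensus_estimate(scores, n_iter=20):
    """Agreement-weighted consensus (illustrative heuristic, not the
    paper's model). scores has shape (n_evaluators, n_designs)."""
    n_evaluators = scores.shape[0]
    w = np.ones(n_evaluators) / n_evaluators            # start from plain averaging
    for _ in range(n_iter):
        est = w @ scores                                # current weighted consensus
        err = np.mean((scores - est) ** 2, axis=1)      # each evaluator's disagreement
        w = 1.0 / (err + 1e-9)                          # consistent evaluators get weight
        w /= w.sum()
    return w @ scores, w

rng = np.random.default_rng(0)
truth = rng.uniform(0, 10, size=8)                      # hypothetical true design scores

experts = truth + rng.normal(0.0, 0.3, size=(5, 8))     # small, consistent noise
novices = rng.uniform(0, 10, size=(20, 8))              # inconsistent, uninformative
wrong = (10 - truth) + rng.normal(0.0, 0.3, size=(10, 8))  # consistent but inverted

scores = np.vstack([experts, novices, wrong])           # rows 0-4, 5-24, 25-34
avg = scores.mean(axis=0)                               # naive averaging baseline
est, w = consensus_estimate(scores)

print("averaging error:", np.abs(avg - truth).mean())
print("consensus error:", np.abs(est - truth).mean())
print("weight on the consistently wrong cluster:", w[25:].sum())
```

Because this heuristic rewards only internal consistency, the inverted cluster can end up dominating the consensus estimate, mirroring the paper's empirical finding that a consensus model relying only on evaluations may fail to separate experts from consistently wrong evaluators.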
Appears in Collections: Άρθρα/Articles