Jae-Eun Lim

Master’s student / Department of Communication

MS in Communication, Seoul National University (2013-)

BS in Consumer Science / Communication, Seoul National University (2009-2013)


Social Computing

  • Tracking assimilation in online communities

Crowdsourcing

  • Improvement of crowdsourced data quality
Evaluation is widely regarded as a quality management solution in crowdsourcing, but the context and nature of evaluation have not been sufficiently studied. As a preliminary study, we designed a 2×2 experiment on Amazon Mechanical Turk (AMT), with evaluation scheme and task complexity as the variables. 120 workers participated, and 100 responses were retained after removing outliers. Workers completed either complex or simple tasks under the possibility of being evaluated. We expected work time to decrease and accuracy to increase in the evaluation condition and on the simple task. A t-test showed that work time was longer for workers under evaluation, while accuracy did not improve; differences in time and accuracy between the two task types were also not significant. This indicates that the prospect of evaluation improved neither work speed nor accuracy, and that task complexity had no significant effect. The finding suggests that evaluation alone does not guarantee better quality (the between-condition comparison is sketched after the citation below).
    Jae-Eun Lim and Joonhwan Lee (2014). “They are going to look at you: the effect of evaluation on crowdsourcing quality at Amazon Mechanical Turk” (Under Review, ACM SIGCHI)
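
For illustration only, the kind of between-condition comparison described above can be run as an independent-samples t-test. The sketch below uses synthetic work-time data and hypothetical variable names (time_evaluated, time_control); the study's actual dataset and analysis code are not reproduced here.

    import numpy as np
    from scipy import stats

    # Synthetic work-time samples (in seconds) for the two evaluation
    # conditions; these numbers are made up for illustration, not the
    # study's real AMT data.
    rng = np.random.default_rng(seed=1)
    time_evaluated = rng.normal(loc=310, scale=40, size=50)  # told they may be evaluated
    time_control = rng.normal(loc=280, scale=40, size=50)    # no mention of evaluation

    # Independent-samples t-test comparing mean work time across conditions,
    # analogous to the comparison reported in the study.
    t_stat, p_value = stats.ttest_ind(time_evaluated, time_control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

The same call can be repeated for accuracy scores and for the simple-versus-complex task contrast, which covers all four cells of the 2×2 design.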