We consider a task assignment problem in crowdsourcing, whose goal is to collect as many reliable labels as possible within a limited budget. A central challenge in this scenario is coping with the diversity of tasks and the task-dependent reliability of workers; for example, a worker may be good at recognizing the names of sports teams but unfamiliar with cosmetics brands. We refer to this practical setting as heterogeneous crowdsourcing. In this letter, we propose a contextual bandit formulation for task assignment in heterogeneous crowdsourcing that handles the exploration-exploitation trade-off in worker selection. We also theoretically investigate regret bounds for the proposed method and demonstrate its practical usefulness experimentally.
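To make the contextual bandit formulation concrete, the sketch below simulates worker selection with a generic LinUCB-style rule: each worker's task-dependent reliability is estimated from task contexts, and an optimistic upper confidence bound balances exploring uncertain workers against exploiting known good ones. This is only an illustration of the general technique, not the authors' algorithm; the context dimension, number of workers, reward model, and all variable names are assumptions for the example.

```python
# Minimal sketch of contextual-bandit worker selection (LinUCB-style).
# NOT the letter's proposed method; all parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_workers, d, budget = 5, 4, 200  # assumed: 5 workers, 4-dim task context

# Hidden per-worker skill vectors (simulated ground truth, unknown to learner).
theta_true = rng.normal(size=(n_workers, d))

# Per-worker LinUCB statistics: A = X^T X + I, b = X^T r.
A = np.stack([np.eye(d) for _ in range(n_workers)])
b = np.zeros((n_workers, d))
alpha = 1.0  # exploration strength

total_reward = 0.0
for t in range(budget):
    x = rng.normal(size=d)               # context of the incoming task
    x /= np.linalg.norm(x)
    ucb = np.empty(n_workers)
    for k in range(n_workers):
        A_inv = np.linalg.inv(A[k])
        theta_hat = A_inv @ b[k]         # ridge estimate of worker k's skill
        # Optimism: predicted reliability plus a confidence width.
        ucb[k] = theta_hat @ x + alpha * np.sqrt(x @ A_inv @ x)
    k = int(np.argmax(ucb))              # assign the task to worker k
    # Simulated outcome: reward 1 if the worker's label is reliable.
    p = 1.0 / (1.0 + np.exp(-(theta_true[k] @ x)))
    r = float(rng.random() < p)
    A[k] += np.outer(x, x)               # update worker k's statistics
    b[k] += r * x
    total_reward += r

print(f"reliable labels collected: {total_reward:.0f} / {budget}")
```

Because the confidence width shrinks as a worker is assigned more tasks with similar contexts, the rule naturally stops wasting budget on workers whose task-dependent reliability has already been learned, which is the exploration-exploitation trade-off the abstract refers to.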
