ICML ’14 Workshop: Crowdsourcing and Human Computation


Workshop in conjunction with ICML 2014, Beijing.

Important Dates:

  • ICML Workshop: 25 June 2014

  • Paper Submission Deadline: 20 April 2014, 11:59:00 PM PST

  • Author Notification: 25 April 2014, 11:59:00 PM PST

  • Room: Convention Hall 4.B

  • Poster session: Convention Hall 4.B

Overview

Human computation and crowdsourcing are emerging paradigms in computing that employ the power of human knowledge and expertise to help solve problems that machines cannot solve alone. The emergence of a number of online marketplaces for crowdsourcing, including Amazon’s Mechanical Turk, oDesk, and MobileWorks, has created numerous opportunities and enabled the research community to harness human intelligence easily. The adoption of crowdsourcing approaches is increasing rapidly, with a number of popular crowd-powered scientific projects including GalaxyZoo, eBird, and Foldit. How can we build systems that combine the intelligence of humans and the computing power of machines to solve challenging scientific and engineering problems? This is the fundamental question we plan to explore in this workshop, with the goal of improving the performance of complex human-powered systems by making them more efficient, robust, and scalable.

Current research in crowdsourcing systems has often focused on micro-tasking (for example, labeling a set of images) and on designing algorithms or solving optimization problems from the job requester’s perspective. The workers are typically treated merely as simple entities or black boxes. However, the workers or users behind such systems are human agents with far greater capabilities (such as the ability to learn and collaborate), which calls for building complex human-powered systems that put special emphasis on the workers. Such systems could involve large numbers of human agents with varying expertise, skills, interests, and incentives to work, and this demands better models of such agents. We seek to develop principled methodologies that can improve the effectiveness of workers through better training, incentives, engagement, and collaboration.

This poses many interesting research questions and exciting opportunities for the machine learning community. The goal of this workshop is to foster these ideas by bringing together experts from the fields of machine learning, cognitive science, economics, game theory, and human-computer interaction. Topics of interest in the workshop include (but are not limited to):

Training and learning. How can we use insights from machine learning to build tools for training and teaching workers to carry out complex or difficult tasks? How can this training be actively adapted based on workers’ skills or expertise and by tracking their learning progress?

Incentives and budget allocation. How can we design the right incentive structures and pricing policies to maximize the job requester’s utility for a given budget? How can machine learning and game-theoretic techniques be used to learn optimal pricing policies and to infer models of incentive structures (wages, bonuses, etc.)?

Engagement and satisfaction. How can we model workers’ interests and automatically assign tasks of interest to maximize work satisfaction? How can AI and decision-theoretic techniques be used to track workers’ progress and actively adapt the workflow of tasks given to them in order to maximize their engagement?

Social aspects and collaboration. How can workers or users collaborate and learn from each other? How can systems use social ties or interests inferred from social networks to better match tasks to users’ interests? How can we design optimal human computing systems in which users jointly work towards a common goal and benefit from each other’s work, say, a community sensing application that provides real-time traffic maps?

Task decomposition and knowledge aggregation. How can we decompose complex crowdsourcing tasks into simpler micro-tasks that are easier for workers? How can we design methodologies, using tools from machine learning and statistics, to aggregate the responses and knowledge of many workers, especially for complex tasks?
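As a minimal illustration of the aggregation problem (a baseline sketch, not the method of any paper listed below), worker labels for a task can be combined by simple majority vote, or by a weighted vote when per-worker accuracy estimates are available. All function and variable names here are hypothetical:

```python
from collections import Counter

def majority_vote(responses):
    """Aggregate one task's labels by simple majority vote.
    responses: list of labels submitted by different workers."""
    return Counter(responses).most_common(1)[0][0]

def weighted_vote(responses, accuracy):
    """Weight each worker's label by that worker's estimated accuracy.
    responses: dict mapping worker id -> submitted label
    accuracy:  dict mapping worker id -> estimated accuracy in (0, 1)."""
    scores = {}
    for worker, label in responses.items():
        # Unknown workers get a neutral default weight of 0.5.
        scores[label] = scores.get(label, 0.0) + accuracy.get(worker, 0.5)
    return max(scores, key=scores.get)

# Example: three workers label the same image.
print(majority_vote(["cat", "dog", "cat"]))              # -> cat
print(weighted_vote({"w1": "cat", "w2": "dog", "w3": "dog"},
                    {"w1": 0.9, "w2": 0.6, "w3": 0.6}))  # -> dog
```

A weighted vote can flip the majority outcome only when the minority workers are estimated to be sufficiently more reliable; in the example above, the two 0.6-accuracy workers together outweigh the single 0.9-accuracy worker.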

Organizers

Accepted Papers and Contributed Talks

Accepted Papers for Workshop

  • Hua Zhang, Evgueni Smirnov, Nikolay Nikolaev, Georgi Nalbantov, Ralf Peeters; An Ensemble Approach to Combining Expert Opinions.

  • Yuchen Zhang, Xi Chen, Dengyong Zhou, Michael I. Jordan; Provably Efficient Algorithm for Crowdsourcing.

  • Antti Kangasraasio, Dorota Glowacka, Tuukka Ruotsalo, Jaakko Peltonen, Manuel J. A. Eugster, Ksenia Konyushkova, Kumaripaba Athukorala, Ilkka Kosunen, Aki Reijonen, Petri Myllymaki, Giulio Jacucci, Samuel Kaski; Interactive Visualization of Search Intent for Exploratory Information Retrieval.

  • Nihar B. Shah, Dengyong Zhou; On the Impossibility of Convex Inference in Human Computation.

Contributed Talks from ICML'14 Papers

  • Naiyan Wang, Dit-Yan Yeung; Ensemble-Based Tracking: Aggregating Crowdsourced Structured Time Series Data.

  • Dengyong Zhou, Qiang Liu, John Platt, Christopher Meek; Aggregating Ordinal Labels from Crowds by Minimax Conditional Entropy.

  • Filipe Rodrigues, Francisco Pereira, Bernardete Ribeiro; Gaussian Process Classification and Active Learning with Multiple Annotators.

  • Adish Singla, Ilija Bogunovic, Gabor Bartok, Amin Karbasi, Andreas Krause; Near-Optimally Teaching the Crowd to Classify.

  • Benjamin Drighes, Walid Krichene, Alexandre Bayen; On the Convergence of No-regret Learning in Selfish Routing.

  • Yuan Zhou, Xi Chen, Jian Li; Optimal PAC Multiple Arm Identification with Applications to Crowdsourcing.

  • Nan Du, Yingyu Liang, Le Song, Maria Balcan; Influence Function Learning in Information Diffusion Networks.

  • Jinli Hu, Amos Storkey; Multi-period Trading Prediction Markets with Connections to Machine Learning.

Invited Speakers

  • David C. Parkes. Professor, Harvard University.

  • Peng Dai. Google Research, Mountain View.

  • Eric Horvitz (tentative). Distinguished Scientist & Managing Director, Microsoft Research, Redmond.

  • Zachary A. Pardos (tentative). Professor, University of California, Berkeley.

Call for Papers

Submissions should follow the ICML 2014 or ICML 2013 format and are encouraged to be at most eight pages, excluding references. Additional appendices and supporting material are allowed. Papers submitted for review do not need to be anonymized. There will be no official proceedings, but accepted papers will be made available on the workshop website. Accepted papers will be presented as either a talk or a poster. We welcome submissions of novel research work as well as extended abstracts on work recently published or under review at another conference or journal (in the latter case, please state the venue of publication); we particularly encourage visionary position papers on emerging trends in the field.

Please submit papers in PDF format here.

For any questions, please email icml2014crowd AT gmail.

Related Workshops, Conferences and Resources