Brown CS Blog

Multiple Brown CS Collaborators Win Two Best Paper Award Runner Up Honors At HCOMP 2015


The Conference on Human Computation and Crowdsourcing (HCOMP), held this year in San Diego, is one of the most prominent conferences on the subject of human computation, cooperation, and crowdsourcing, and Brown University's Department of Computer Science (Brown CS) made a strong showing. Two different groups of students and faculty were named Best Paper Award Runner Up at HCOMP 2015:

Tropel: Crowdsourcing Detectors with Minimal Training

Genevieve Patterson (Brown CS PhD student), Grant Van Horn (California Institute of Technology), Serge Belongie (Cornell University and Cornell Tech), and James Hays (former Brown CS faculty member) received the award for their research ("Tropel: Crowdsourcing Detectors with Minimal Training"), whose name comes from the Spanish word for "noisy crowd". Genevieve would also like to thank Ben Bauer (Brown CS undergraduate student), who contributed software to the project. "It was a big help and a pleasure to work with him," she says. Tropel is a system that enables non-technical users to create arbitrary visual detectors without first annotating a training set.

"Our primary contribution," the researchers explain, " is a crowd active learning pipeline that is seeded with only a single positive example and an unlabeled set of training images. We examine the crowd’s ability to train visual detectors given severely limited training themselves. This paper presents a series of experiments that reveal the relationship between worker training, worker consensus and the average precision of detectors trained by crowd-in-the-loop active learning. In order to verify the efficacy of our system, we train detectors for bird species that work nearly as well as those trained on the exhaustively labeled CUB 200 dataset at significantly lower cost and with little effort from the end user. To further illustrate the usefulness of our pipeline, we demonstrate qualitative results on unlabeled datasets containing fashion images and street- level photographs of Paris." 

Anyone interested in using Tropel should click the link that follows to contact Genevieve.

Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters

Alexandra Papoutsaki (Brown CS PhD student) and collaborators Hua Guo (Brown CS PhD student), Danae Metaxa-Kakavouli (Brown CS alum), Connor Gramazio (Brown CS PhD student), Jeff Rasley (Brown CS PhD student), Wenting Xie (Brown University alum), Guan Wang (Brown CS PhD student), and Jeff Huang (Brown CS Assistant Professor) received the award for research ("Crowdsourcing from Scratch: A Pragmatic Experiment in Data Collection by Novice Requesters") that became something of an Internet sensation, attracting more than 20,000 views in a single week and 100,000 unique visitors to date, including academics, researchers, and potential graduate students. It's the result of a class assignment from the CS2951-L HCI seminar taught by Jeff in Spring 2014. All the other authors attended the class as students and were able to see an assignment turn into a peer-reviewed publication. For the undergraduates of the group (Danae and Wenting), it was their first in what will undoubtedly be a long line of publications.

To determine how novice requesters design crowdsourcing tasks, the group conducted an experiment with a class of 19 students, each of whom tried their hand at crowdsourcing a real data collection task with a fixed budget and a realistic time constraint. Students used Amazon Mechanical Turk to gather information about the academic careers of over 2,000 professors from 50 top Computer Science departments in America. In addition to curating this dataset, they classified the strategies that emerged, discussed the design choices students made on task dimensions, and compared these novice strategies to best practices identified in the crowdsourcing literature.

Their work culminates in a summary of the design pitfalls and effective strategies they observed, offered as guidelines for novice requesters. The data is publicly accessible, and the researchers are still inviting the public to help improve its accuracy at http://jeffhuang.com/computer_science_professors.html. The Human-Computer Interaction (HCI) group will also continue working on and expanding both the dataset (adding more universities across North America) and the research contributions. You can read more about this project, Drafty, here.