Brown CS Blog

Deep Learning Day Showcases Student Research Into Adversarial Networks, Sentiment Analysis, And More


    Click the links that follow for more news about Daniel Ritchie and other recent accomplishments by Brown CS students.

    On December 12, 2019, the 340 students in Brown CS Professor Daniel Ritchie's Deep Learning course (CSCI 1470/2470) turned their final project presentations into a full-day mini-conference. Held in Sayles Hall, Deep Learning Day was divided into four sessions, each featuring a subset of the 113 presentation groups, organized around a small set of themes. Students in CSCI 1470 presented posters during these sessions, and students in CSCI 2470 (the graduate-level version of the course) gave brief oral presentations.

    "Final projects have been a part of deep learning since the course was first offered," Daniel says, "but only for grad students and undergrads using the course as a capstone. My HTAs this past semester felt strongly that requiring everyone to do a project would be a good idea, and I agreed – it would require everyone to get comfortable with doing 'real-world' deep learning, without a clear-cut assignment spec or stencil code."

    Daniel explains that for a course this large, with final projects required of everyone, the previous format of scheduling an individual outside-of-class presentation for each project group simply wouldn't scale.

    "Switching to the all-day format not only alleviated this issue," he says, "it also presented new opportunities. Students get a chance to see each other's work and to cheer their classmates on for the hard work they all did. Structuring the day like a conference, with poster sessions and oral presentations, just started to make sense: the students are doing the same kind of work as serious researchers, so why not treat them as such?"

    Some of the student projects included the following:

    Using Generative Adversarial Networks to Synthesize Images Depicting Traditional Mexican Crafts (Xiaotong Fu, Huakai Liu)


    This project generates images of traditional Mexican crafts based on training data gathered from students and faculty at RISD. The model uses a state-of-the-art Generative Adversarial Network (GAN) called “StyleGAN” to fabricate these images while still capturing the unique style of Mexican craftwork. Here we see some of the generated images.
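
    For readers curious about the adversarial setup behind a project like this, the sketch below shows one generator/discriminator training step in PyTorch. It is only a toy illustration under assumed sizes (a 128-dimensional latent vector, 64x64 images): the team's actual model is StyleGAN, whose mapping and synthesis networks are far more elaborate, and the framework choice here is ours, not necessarily theirs.

    import torch
    import torch.nn as nn

    latent_dim = 128  # assumed latent size, not the team's actual setting

    # Toy generator: latent vector -> flattened 64x64 RGB image in [-1, 1]
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
    )

    # Toy discriminator: flattened image -> single real/fake logit
    discriminator = nn.Sequential(
        nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    def train_step(real_images):  # real_images: (batch, 64*64*3) tensor
        b = real_images.size(0)
        fake = generator(torch.randn(b, latent_dim))

        # The discriminator learns to label real images 1 and generated images 0.
        d_loss = (bce(discriminator(real_images), torch.ones(b, 1))
                  + bce(discriminator(fake.detach()), torch.zeros(b, 1)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # The generator learns to make the discriminator output 1 on its samples.
        g_loss = bce(discriminator(fake), torch.ones(b, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()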

    OMG: Analyzing Sentiments of Tweets (Ao Wang, Emily Reed, Yunyun Yao, Pedro Freitas)


    These researchers analyzed the sentiment of tweets and classified them as positive, negative, or neutral. Their model combines Convolutional Neural Networks (CNNs), Long Short-Term Memory networks (LSTMs), and attention mechanisms to process the information in each tweet. Here we can see the usage of emojis in positive and negative tweets.
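
    As a rough sketch of how a CNN, an LSTM, and attention can be combined for three-way tweet classification, here is a small PyTorch module. The vocabulary size, embedding width, hidden size, and pooling scheme are assumptions made for illustration, not the team's actual hyperparameters or architecture.

    import torch
    import torch.nn as nn

    class TweetSentiment(nn.Module):
        def __init__(self, vocab_size=20000, embed_dim=128, hidden=128, classes=3):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
            self.conv = nn.Conv1d(embed_dim, hidden, kernel_size=3, padding=1)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)       # additive attention scores
            self.out = nn.Linear(2 * hidden, classes)  # positive / negative / neutral

        def forward(self, tokens):                     # tokens: (batch, length) word IDs
            x = self.embed(tokens)                                         # (B, T, E)
            x = torch.relu(self.conv(x.transpose(1, 2))).transpose(1, 2)   # (B, T, H)
            h, _ = self.lstm(x)                                            # (B, T, 2H)
            weights = torch.softmax(self.attn(h), dim=1)                   # (B, T, 1)
            pooled = (weights * h).sum(dim=1)          # attention-weighted summary
            return self.out(pooled)                    # class logits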

    Second Language Acquisition Modeling with Attention (Rafael Alberto Sanchez Rodriguez, Nihal Vivekanand Nayak, Juho Choi, Seungchan Kim)


    In this project, a model trained on data from Duolingo, a language-learning application, predicts when a user will forget a word while learning a new language. The model uses a Transformer encoder and a Multi-Layer Perceptron (MLP) decoder to turn user data and metadata into a probability that the user will forget a given word. Here we can see a specific example of a user making an error that is used to train the model. This model can be used to assist language learners by targeting the words they are likely to forget!
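
    A minimal sketch of the Transformer-encoder-plus-MLP idea might look like the PyTorch module below, which maps a sequence of word IDs to a per-word probability of an error. The user and exercise metadata features, positional encodings, and the team's actual layer sizes are omitted or assumed here for brevity.

    import torch
    import torch.nn as nn

    class ForgettingModel(nn.Module):
        def __init__(self, vocab_size=10000, d_model=128, n_heads=4, n_layers=2):
            super().__init__()
            self.word_embed = nn.Embedding(vocab_size, d_model)
            layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                               dim_feedforward=256, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, n_layers)
            self.mlp = nn.Sequential(                  # per-word MLP "decoder"
                nn.Linear(d_model, 64), nn.ReLU(),
                nn.Linear(64, 1),
            )

        def forward(self, word_ids):                   # word_ids: (batch, length)
            h = self.encoder(self.word_embed(word_ids))     # (B, T, d_model)
            return torch.sigmoid(self.mlp(h)).squeeze(-1)   # (B, T) error probability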

    Medical Image Segmentation through Pruned Deep Neural Networks (Georgios Zerveas, Reza Esfandiarpoor)


    This project attempts to replicate the results of full-scale melanoma detection networks with smaller-scale networks that can fit inside handheld scanning tools. Its network uses a U-Net CNN and prunes away unnecessary weights, yielding a smaller, more space-efficient network. Here we can see how the pruned and full-size networks' melanoma predictions compare to each other and to the ground-truth segmentation.
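
    The pruning idea can be sketched with PyTorch's built-in magnitude-pruning utilities, shown below on a tiny stand-in convolutional segmenter rather than a full U-Net. The 50% pruning ratio and the toy model are assumptions made purely for illustration, not the project's actual settings.

    import torch
    import torch.nn as nn
    import torch.nn.utils.prune as prune

    model = nn.Sequential(                       # stand-in for a full U-Net
        nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 1),                     # per-pixel lesion logit
    )

    # Zero out the 50% smallest-magnitude weights in every convolutional layer.
    for module in model.modules():
        if isinstance(module, nn.Conv2d):
            prune.l1_unstructured(module, name="weight", amount=0.5)
            prune.remove(module, "weight")       # make the pruning permanent

    zeros = sum((p == 0).sum().item() for p in model.parameters())
    total = sum(p.numel() for p in model.parameters())
    print(f"{zeros / total:.0%} of the weights are now zero")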

    Ship Detection in Satellite Images using U-Net (Dong Xian Zou, Peng Chen, Zhoutao Lu, Wensi You)


    These researchers used satellite images to detect ships. Their network uses a U-Net architecture and other classifiers to pick out the parts of images that contain ships. Here we can see an original image alongside a masked image containing only the ships from the original.
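
    The sketch below shows the basic shape of a U-Net-style segmenter in PyTorch: a downsampling path, an upsampling path, and a skip connection that concatenates features at matching resolutions. It has only one stage and assumed channel counts; the team's network is deeper and includes additional classifiers not shown here.

    import torch
    import torch.nn as nn

    class TinyUNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.down = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
            self.pool = nn.MaxPool2d(2)
            self.bottom = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
            self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
            self.merge = nn.Sequential(nn.Conv2d(32, 16, 3, padding=1), nn.ReLU())
            self.head = nn.Conv2d(16, 1, 1)      # per-pixel ship logit

        def forward(self, x):                    # x: (batch, 3, H, W), H and W even
            skip = self.down(x)                  # (B, 16, H, W)
            x = self.bottom(self.pool(skip))     # (B, 32, H/2, W/2)
            x = self.up(x)                       # back to (B, 16, H, W)
            x = self.merge(torch.cat([x, skip], dim=1))   # skip connection
            return self.head(x)                  # ship mask logits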

    Pulse Discrimination of Cosmic Muon in Simple Scintillators (Jeanne Bang, Taeun Kwon)


    This project processes signals from cosmic rays to discriminate muons (a type of elementary particle) from background pulses. Its network uses encoder/decoder structures and clustering algorithms to output the probability that a muon is present in the signal. Here we can see the architecture of the model and how it turns signal input into output and trains on that output.
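
    As a hedged illustration of the encoder/decoder idea, the PyTorch module below encodes a raw pulse waveform, reconstructs it with a decoder, and reads a muon-versus-background probability off the encoded representation. The layer sizes and the use of a simple classification head (in place of the team's clustering step) are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class PulseModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(          # compress the raw waveform
                nn.Conv1d(1, 8, 7, stride=2, padding=3), nn.ReLU(),
                nn.Conv1d(8, 16, 7, stride=2, padding=3), nn.ReLU(),
            )
            self.decoder = nn.Sequential(          # reconstruct the input pulse
                nn.ConvTranspose1d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose1d(8, 1, 4, stride=2, padding=1),
            )
            self.classifier = nn.Sequential(       # muon probability from the encoding
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(16, 1), nn.Sigmoid(),
            )

        def forward(self, pulse):                  # pulse: (batch, 1, samples), samples divisible by 4
            code = self.encoder(pulse)
            return self.decoder(code), self.classifier(code)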

    "Ultimately," Daniel tells us, "I think it was a success. In course evaluations, many students cited Deep Learning Day as their favorite part of the semester. We're definitely looking to build on this success for next year: giving students more time to work on the projects, and bringing in more external project proposals."

    If you have large text and/or image datasets that need analysis, Daniel says, or some other complex data-driven prediction problem you need to solve, consider proposing a project for next year's Deep Learning Day. You can reach him via email here or during his office hours, listed here.

    For more information, click the link that follows to contact Brown CS Communication Outreach Specialist Jesse C. Polhemus.