I am a junior at Rutgers University majoring in Computer Science and Cognitive Science. I’m really interested in research and in debating questions about how minds work, and artificial intelligence research piques my interest. So, this summer, I joined the Humans to Robots Laboratory through the Brown Computer Science Artificial Intelligence & Computational Creativity Research Experience for Undergraduates (REU) Site. I felt so welcomed. Everyone at Brown, from the professors to the staff members to the grad students and undergrads, was very supportive. The program gave me a peek into the exciting life of a full-time researcher in computer science. Every day …
In February 2023, Brown CS faculty member Yu Cheng brought four teams of students to the International Collegiate Programming Contest (ICPC)’s Northeast North America (NENA) Regional Contest at the College of the Holy Cross site. Their story is told here. Below, one of the students from the team that advanced to the national level shares his experience of that event.
Have you ever found yourself anxiously waiting for a response, unsure of what the other person is typing? The absence of immediate feedback, tone, facial expressions, and other non-verbal cues can sometimes hinder effective text-based communication. Hence, despite the convenience of messaging platforms, there has been rising anxiety around the “… is typing” or “Zainab is typing …” indicator. This anxiety can be attributed to the low communication richness of messaging.
In a forthcoming paper that received a Distinguished Paper Award at the ACM ASIA Conference on Computer and Communications Security (AsiaCCS 2023), researchers at Brown and collaborators at Aarno Labs, FORTH, and TUC developed a new system that can automatically protect against threats from native libraries while requiring minimal developer effort. The system, called BinWrap, combines protections for both the native portion of a library and its language-specific wrapper.
Microservices have been transforming the computing landscape, with web-scale companies like Facebook, Google, and Amazon and telecom providers like AT&T and Ericsson adopting them. The microservices paradigm has proven to promote better scalability, fault tolerance, and deployability. However, it also significantly increases the space of configuration options and performance problems, rendering traditional management approaches ineffective.
Deep Learning (DL) is a rapidly growing field with wide-ranging applications across industries such as transportation, banking and finance, and healthcare. As the use of DL becomes more widespread, DL frameworks such as TensorFlow and PyTorch have in turn become increasingly popular and are being used to build models deployed even in security-critical settings. Keeping these frameworks secure has thus become crucial.
Last summer, I interned at Brown University’s Data Science Initiative (DSI) with Professor Ritambhara Singh. The work I was doing, in computational genetics, was fascinating. And the office space was modern and light-filled. But most importantly, the lab was filled with welcoming grad students and amazing professors who worked right next to me.
In 2022, generative AI went mainstream. Mere months ago, it still seemed exclusively the province of the ML research community and of Twitter, where meme accounts like @weirddalle shared the results of feeding text-to-image generation models off-the-wall prompts like "a bottle of ranch dressing testifying in court."
Cut to the present, where generative AI startups like Stability AI and Jasper are raising $100 million rounds of funding, big players like Microsoft have announced upcoming integrations of text-to-image generation, and perhaps all of us have used OpenAI's DALL-E 2 to create something delightful like this "chicken nugget dressed as a …
The 30th European Symposium on Algorithms (ESA) last month awarded the ESA 2021 Test-of-Time Award to Brown CS Professor Sorin Istrail and his collaborators for their ESA 2001 paper called “SNPs Problems, Complexity, and Algorithms.” The award recognizes “excellent papers in algorithm research that were published … 19-21 years ago and which are still influential and stimulating for the field today,” according to the ESA.