Toward Responsible AI In Mental Health: Centering Human Experience In Sociotechnical Design
- Posted by Jesse Polhemus
- on Jan. 12, 2026
by Corrie Pikul, Senior Communications Manager and Writer for the Life Sciences
Brown University doctoral student Zainab Iftikhar is the friend people turn to when they need to talk.
“My family jokes that I’m the ‘therapist friend’ everyone calls when they have a problem,” Iftikhar said. Her capacity for caregiving has informed her research at Brown, where she is focused on exploring technology’s therapeutic strengths and weaknesses to find ways people can best use AI to support social and mental health.
Her research has spotlighted humans’ inherent ability to offer and detect empathy, something that chatbots, text-based therapists and other artificial intelligence systems don’t do well, she said.
Understanding the strengths and limitations of AI chatbots in mental health is a topic of intense current interest, especially on college campuses: some estimates suggest that as many as 50% of college students have turned to chatbots for support. Brown University is playing an outsized role in studying this topic, with contributions from researchers in the medical school, behavioral sciences, public health, computer science and more. In addition, this summer Brown announced that it is leading a new federally funded institute, the AI Research Institute on Interaction for AI Assistants (ARIA), focused on the use of trustworthy AI assistants in mental and behavioral health. Iftikhar was one of the first researchers to directly address this issue.
To read the rest of this story and explore the latest issue of Conduit, the annual Brown CS magazine, click here.