Projects

🧩 Digital Traces as “Windows” into Mental Health

Highlight: Turning everyday digital footprints into life-saving insights for mental health.

Mental health challenges often go unnoticed until they become severe, in part because traditional systems rely on delayed self-reporting or on access to care that many never receive. Yet much of our emotional life today leaves digital traces: in what we search, post, and share online. Our lab develops computational and machine learning methods that transform these digital traces into early, non-invasive indicators of depression, anxiety, suicidality, psychosis, and other conditions. By bridging AI with psychiatry and public health, we aim to enable scalable and timely identification of risk, long before traditional systems might intervene. This work has been conducted in close collaboration with clinicians, psychiatric researchers, and public health agencies, including NIH-funded research partners, the CDC, and hospital systems such as Northwell Health and the University of Pittsburgh Medical Center.

🌐 The Mental Health Impacts of Online Engagement

Highlight: Revealing how online communities can both heal and harm the mind.

As social media becomes a primary arena of social life, its influence on mental health is profound: sometimes nurturing connection and resilience, at other times amplifying harm. For young people in particular, these digital environments can shape coping, recovery, or risk in ways that offline systems struggle to reach. Our research examines how online engagement, from supportive communities to harmful harassment and inaccurate advice, shapes well-being, especially for youth and vulnerable populations. Through longitudinal studies and causal inference methods, we document the nuanced effects of online interactions, showing that digital environments can foster recovery and resilience while also exposing people to serious risks. These projects are conducted in partnership with pediatric hospitals (such as Children’s Healthcare of Atlanta), social science scholars, and advocacy organizations.

βš–οΈ Safe and Ethical Design of Mental Health Technologies

Highlight: Building AI for care that is safe, fair, and grounded in human dignity.

Digital tools for mental health are proliferating, but too often they fall short, reproducing bias, eroding trust, or overlooking the needs of those most at risk. Without careful design, technologies meant to support well-being can instead cause harm. Our group studies the ethical, cultural, and psychometric challenges of computational approaches to mental health, and designs technologies that prioritize safety, transparency, and equity. This includes building privacy-preserving data donation infrastructures (e.g., the CANDOR portal), patient-facing tools for self-reflection, and clinician decision-support systems that align with real-world care. This work is deeply interdisciplinary, involving ethicists, HCI researchers, patient advocates, and policymakers, and has informed guidance for bodies such as the World Health Organization, the National Academies, and advocacy groups.

🤖 AI, Generative Models, and the Future of Care

Highlight: Charting the risks and possibilities of AI as a new mediator of empathy and trust.

Generative AI is rapidly changing how people interact online, raising new questions about empathy, trust, and even what it means to receive care. If social media transformed how we connect to one another, AI now introduces an entirely new modality: machines that simulate conversation, advice, or support. Our lab is investigating what this means for mental health: how AI systems align (or misalign) with counseling competencies, how they amplify or combat clinically unverified health information, and how machine-simulated empathy affects trust, loneliness, and stigma. In this work, we partner with computational linguists, clinical psychologists, AI safety researchers, and policy stakeholders.