Forough Poursabzi-Sangdeh

Postdoctoral Researcher

Microsoft Research NYC

About Forough.

I am a Postdoctoral Researcher at Microsoft Research, New York City, an interdisciplinary research lab. I am interested in the interaction between humans and Machine Learning (ML) systems. From the data used to train models to the decisions made with the help of ML systems, humans are at the heart of ML; it is therefore important to study humans and ML systems jointly. I use my ML knowledge to train, debug, and evaluate models, and I employ principles from Human-Computer Interaction (HCI) and behavioral psychology to design controlled human-subject experiments that study how humans behave when they interact with ML systems. Recently, I have focused on this interaction in the context of interpretability and fairness. My long-term ambition is to leverage insights from these studies to create systems that foster effective and responsible collaboration between humans and models. Take a look at my research statement to get a better sense of what I have been working on and what my research agenda is.

Before joining Microsoft, I got my PhD in computer science from the University of Colorado Boulder, where I was advised by Jordan Boyd-Graber. Before that, I got my BE in computer engineering from the University of Tehran.

Publications.

  • Expanding the scope of reproducibility research through data analysis replications
    Jake M. Hofman, Daniel G. Goldstein, Siddhartha Sen, Forough Poursabzi-Sangdeh.
    WWW workshop on Innovative Ideas in Data Science, 2020

  • A Human in the Loop is Not Enough: The Need for Human-Subject Experiments in Facial Recognition
    Forough Poursabzi-Sangdeh, Samira Samadi, Jennifer Wortman Vaughan, Hanna Wallach.
    CHI workshop on Human-Centered Approaches to Fair and Responsible AI, 2020

  • Manipulating and Measuring Model Interpretability
    Forough Poursabzi-Sangdeh, Daniel G. Goldstein, Jake M. Hofman, Jennifer Wortman Vaughan, Hanna Wallach.
    Revise and resubmit at Management Science (a shorter version appeared at the NIPS workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments, 2017)

  • Attending to the Problem of Uncertainty in Current and Future Health Wearables
    Bran Knowles, Alison Smith-Renner, Forough Poursabzi-Sangdeh, Di Lu, Halimat Alabi.
    Communications of the ACM (CACM), 2018

  • Evaluating Visual Representations for Topic Understanding and Their Effects on Manually Generated Labels
    Alison Smith, Tak Yeon Lee, Forough Poursabzi-Sangdeh, Leah Findlater, Jordan Boyd-Graber, Niklas Elmqvist.
    Transactions of the Association for Computational Linguistics (TACL), 2017

  • ALTO: Active Learning with Topic Overviews for Speeding Label Induction and Document Labeling
    Forough Poursabzi-Sangdeh, Jordan Boyd-Graber, Leah Findlater, Kevin Seppi.
    Association for Computational Linguistics (ACL), 2016
    (Code publicly available, now being used by Snagajob)

  • Human-Centered and Interactive: Expanding the Impact of Topic Models
    Alison Smith, Tak Yeon Lee, Forough Poursabzi-Sangdeh, Jordan Boyd-Graber, Niklas Elmqvist, Kevin Seppi, Leah Findlater.
    CHI workshop on Human-Centered Machine Learning, 2016

  • Computer-Assisted Content Analysis: Topic Models for Exploring Multiple Subjective Interpretations
    Jason Chuang, John D. Wilkerson, Rebecca Weiss, Dustin Tingley, Brandon M. Stewart, Margaret E. Roberts, Forough Poursabzi-Sangdeh, Justin Grimmer, Leah Findlater, Jordan Boyd-Graber, Jeffrey Heer.
    NIPS Workshop on Human-Propelled Machine Learning, 2014

  • On Clustering Heterogeneous Networks
    Forough Poursabzi-Sangdeh and Ananth Kalyanaraman.
    SIAM Workshop on Network Science, 2013

  • Design and Empirical Evaluation of Interactive and Interpretable Machine Learning
    Forough Poursabzi-Sangdeh.
    PhD thesis, University of Colorado Boulder, 2018
Contact.

    forough DOT poursabzi AT microsoft DOT com