Hussein Mozannar
Publications
Impact of Large Language Model Assistance on Patients Reading Clinical Notes: A Mixed-Methods Study
In a mixed-methods investigation comprising a randomized survey and qualitative interviews with patients previously treated for breast cancer, we found that large language model-based augmentations enhance the patient experience of reading clinical oncology notes.
Niklas Mannhardt, Elizabeth Bondi-Kelly, Barbara Lam, Chloe O'Connell, Mercy Asiedu, Hussein Mozannar, Monica Agrawal, Alejandro Buendia, Tatiana Urman, Irbaz B. Riaz, Catherine E. Ricciardi, Marzyeh Ghassemi, David Sontag
Reading Between the Lines: Modeling User Behavior and Costs in AI-Assisted Programming
We study how programmers interact with Copilot, the AI code-recommendation system, and develop a taxonomy of programmer activities.
Hussein Mozannar, Gagan Bansal, Adam Fourney, Eric Horvitz
Effective Human-AI Teams via Learned Natural Language Rules and Onboarding
We introduce a novel method for teaching humans how to collaborate effectively with AI agents, using natural language rules learned from data, and evaluate it in user studies.
Hussein Mozannar, Jimin J Lee, Dennis Wei, Prasanna Sattigeri, Subhro Das, David Sontag
Simulating Iterative Human-AI Interaction in Programming with LLMs
We build a simulation environment that mimics programmers writing code with LLMs, and use it to evaluate the performance of different models and to collect training data.
Hussein Mozannar, Valerie Chen, Dennis Wei, Prasanna Sattigeri, Manish Nagireddy, Subhro Das, Ameet Talwalkar, David Sontag
In Defense of Softmax Parametrization for Calibrated and Consistent Learning to Defer
We justify the use of softmax in learning to defer with probability estimation by studying the cause of invalid probability estimators, and we propose an asymmetric softmax that induces both a consistent loss and a valid probability estimator for learning to defer.
Yuzhou Cao, Hussein Mozannar, Lei Feng, Hongxin Wei, Bo An
When to Show a Suggestion? Integrating Human Feedback in AI-Assisted Programming
We provide a method that decides when to display suggestions in AI-assisted programming systems based on the programmer's feedback, and show that it avoids displaying a significant fraction of suggestions that would have been rejected.
Hussein Mozannar, Gagan Bansal, Adam Fourney, Eric Horvitz
Closing the Gap in High-Risk Pregnancy Care Using Machine Learning and Human-AI Collaboration
We build new machine learning algorithms that predict whether a patient is pregnant and whether the pregnancy will be high-risk. We then integrate these algorithms into a user interface and evaluate it with nurses.
Hussein Mozannar, Yuria Utsumi, Irene Y Chen, Stephanie S Gervasi, Michele Ewing, Aaron Smith-McLallen, David Sontag
Who Should Predict? Exact Algorithms For Learning to Defer to Humans
We provide algorithms that provably minimize the learning-to-defer objective, along with an experimental benchmark for studying human-deferral algorithms.
Hussein Mozannar, Hunter Lang, Dennis Wei, Prasanna Sattigeri, Subhro Das, David Sontag
Sample Efficient Learning of Predictors that Complement Humans
We theoretically characterize the gain from building ML classifiers that complement humans, show how to achieve it by reducing any multiclass loss to a cost-sensitive loss, and create human-label-efficient algorithms based on active learning.
Mohammad-Amin Charusaie, Hussein Mozannar, David Sontag, Samira Samadi
Teaching Humans When To Defer to a Classifier via Exemplars
We develop an onboarding stage that teaches users when to rely on AI systems and when not to.
Hussein Mozannar, Arvind Satyanarayan, David Sontag