22 CHI 2020 papers worth reading
With CHI 2020 officially cancelled this year, I wanted to take the time to highlight papers from the program that I thought were interesting. I’m a social computing researcher interested in health/social support, AI, and methodology, so that’s what you’ll find below! Loosely sorted into three sections, I highlight 8 papers in AI+HCI, 6 in Social Support & Health, and 8 in Methods & Miscellaneous. Ordering in each section is alphabetical by title, as copied from my CHI program reading list.
If I couldn’t find a pre-print link with <20 seconds of searching, your paper isn’t on this list! I’m pretty sure that’s why AI work and industry papers are over-represented here. Be better, health researchers.
AI+HCI
“A Case for Humans-in-the-Loop: Decisions in the Presence of Erroneous Algorithmic Scores”. Maria De-Arteaga, Riccardo Fogliato, Alexandra Chouldechova.
Fascinating natural experiment with a real-world ML risk assessment system for predicting “adverse child welfare events” that was producing glitched risk scores. Humans were generally capable of identifying when the risk score was wrong and adapting their behavior accordingly while — critically — not completely disregarding the model prediction.
“A Human-Centered Review of the Algorithms used within the U.S. Child Welfare System”. Devansh Saxena, Karla Badillo-Urquiola, Pamela J. Wisniewski, Shion Guha.
Literature review identifying challenges and opportunities for developing human-centered algorithms — specifically in the U.S. child welfare system. Most algorithms are not theoretically derived, a shortfall that the authors suggest can be addressed with human-centered design methods.
“Co-Designing Checklists to Understand Organizational Challenges and Opportunities around Fairness in AI”. Michael A. Madaio, Luke Stark, Jennifer Wortman Vaughan, Hanna Wallach.
Ethical principles for AI design can be hard to operationalize in practice; this paper explores AI fairness checklists as critical organizational infrastructure for facilitating the practice of designing fair AI systems.
“Factors Influencing Perceived Fairness in Algorithmic Decision-Making: Algorithm Outcomes, Development Procedures, and Individual Differences”. Ruotong Wang, F. Maxwell Harper, Haiyi Zhu.
Algorithm predictions hugely affect user perceptions of their fairness. Pure cronyism on my part: I’ve worked with and mentored the first author — Ruotong Wang — an incredibly talented undergrad (who is currently deciding between PhD programs).
“Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning”. Harmanpreet Kaur, Harsha Nori, Samuel Jenkins, Rich Caruana, Hanna Wallach, Jennifer Wortman Vaughan.
Ah, ML interpretability tools. Who even knows how they work? Not the data scientists who use them, according to this paper. One implication: More ML/HCI design collaboration is needed to create interpretability tools that support deliberative reasoning!
“Keeping Community in the Loop: Understanding Wikipedia Stakeholder Values for Machine Learning-Based Systems”. C. Estelle Smith, Bowen Yu, Anjali Srivastava, Aaron Halfaker, Loren Terveen, Haiyi Zhu.
More cronyism! Great work from my GroupLens colleague Estelle Smith. This work, on understanding community values with respect to machine learning systems on Wikipedia, avoids the pitfall of being overly specific to Wikipedia and lays out a research agenda for community-driven ML system design.
“Local Decision Pitfalls in Interactive Machine Learning: An Investigation into Feature Selection in Sentiment Analysis”. Tongshuang Wu, Daniel S. Weld, and Jeffrey Heer.
Interactive tools for feature selection consistently lead to models with generalization issues. Uh oh! This troubling pattern motivates recommendations from the authors for Interactive Machine Learning research.
“Re-examining Whether, Why, and How Human-AI Interaction Is Uniquely Difficult to Design”. Qian Yang, Aaron Steinfeld, Carolyn Rosé, John Zimmerman.
What is actually difficult about human-AI interaction design? This paper reframes the conversation around designing for human-AI interaction and posits two key sources of design complexity: uncertainty about AI capabilities and the complexity of AI system outputs.
Social Support & Health
“Exploring the Effects of Technological Writing Assistance for Support Providers in Online Mental Health Community”. Zhenhui Peng, Qingyu Guo, Ka Wing Tsang, Xiaojuan Ma.
It’s incredibly challenging to write supportive comments even for loved ones; this paper explores automated writing assistance to facilitate the writing of highly supportive online comments. Interesting stuff.
“Integrating the Digital and the Traditional to Deliver Therapy for Depression: Lessons from a Pragmatic Study”. Katarzyna Stawarz, Chris Preist, Deborah Tallon, Laura Thomas, Katrina Turner, Nicola Wiles, David Kessler, Roz Shafran, David Coyle.
Great systems work: a new computerized platform for Cognitive Behavioral Therapy sessions. Interesting discussion of in-session communication mode and between-session “homework” for clients.
“‘Like Shock Absorbers’: Understanding the Human Infrastructures of Technology-Mediated Mental Health Support”. Sachin R. Pendse, Faisal M. Lalani, Munmun de Choudhury, Amit Sharma, Neha Kumar.
Detailed qualitative work probing the perspective of Indian mental health helpline volunteers. Interesting implications for designing technology-mediated mental health support systems that incorporate locality.
“Random, Messy, Funny, Raw: Finstas as Intimate Reconfigurations of Social Media”. Sijia Xiao, Danaë Metaxa, Joon Sung Park, Karrie Karahalios, Niloufar Salehi.
Intimate, reciprocal social support through the creation of “fake instas” for use with close friends. Many thought-provoking insights about platform reconfigurations and presentation of the online self.
“Understanding Client Support Strategies to Improve Clinical Outcomes in an Online Mental Health Intervention”. Prerna Chikersal, Danielle Belgrave, Gavin Doherty, Angel Enrique, Jorge E. Palacios, Derek Richards, Anja Thieme.
What types of supporter messages result in improved clinical health outcomes for patients undergoing an online Cognitive Behavioral Therapy intervention? An interesting dataset and an interesting selection of ML methods. Complements some of Koustuv Saha’s recent work.
“Using Diaries to Probe the Illness Experiences of Adolescent Patients and Parental Caregivers”. Matthew K. Hong, Udaya Lakshmi, Kimberly Do, Sampath Prahalad, Thomas Olson, Rosa I. Arriaga, Lauren G. Wilcox.
Lovely qualitative diary probe, focusing on the shared and diverging experiences of adolescent patients in patient/caregiver dyads. Compelling line of thinking around designs to support patients in representing their own illness experiences.
Methods & Miscellaneous
“Agency of Autistic Children in Technology Research — A Critical Literature Review”. Katta Spiel, Christopher Frauenberger, Os Keyes, Geraldine Fitzpatrick.
Explores technology design approaches for autistic children — with brief broader reflections on agency of design subjects.
“Bug or Feature? Covert Impairments to Human Computer Interaction”. John V. Monaco.
A fun look at “covert impairments” in technology design — intentional interaction anomalies to elicit or coerce user behavior. Articulates taxonomies for both unintentional and covert interaction impairments, and discusses ethical use of such impairments for e.g. defense against adversarial attacks.
“Classification of Functional Attention in Video Meetings”. Anastasia Kuzminykh, Sean Rintel.
More taxonomies! Develops a classification framework for thinking about attention during meetings. Uses the word “phenomenological” in the abstract.
“Detecting Gender Stereotypes: Lexicon vs. Supervised Learning Methods”. Jenna Cryan, Shiliang Tang, Xinyi Zhang, Miriam Metzger, Haitao Zheng, Ben Y. Zhao.
The authors tackle the sticky problem of quantifying gender stereotypes. They find lexicon-based approaches are trounced by supervised ML approaches, a pattern I’ve also observed in a different context. Interesting crowd-sourcing approach for validation.
“Divining Insights: Visual Analytics Through Cartomancy”. Andrew McNutt, Michael Correll, Anamaria Crisan.
Alt-chi! Tarot readings! A provocation — and brilliant critique of most AI dashboards. Excellent discussion starter, and alt-chi at its best.
“How Domain Experts Create Conceptual Diagrams and Implications for Tool Design”. Dor Ma’ayan, Wode Ni, Katherine Ye, Chinmay Kulkarni, Joshua Sunshine.
Qualitative exploration of the processes experts use to convert concepts into explanatory diagrams. Explores ideas for tooling to support vocabulary mapping to expert-defined abstractions.
“How Well Do People Report Time Spent on Facebook? An Evaluation of Established Survey Questions with Recommendations”. Sindhu Ernala, Moira Burke, Alex Leavitt, Nicole Ellison.
Interesting analysis of the [lack of] correspondence between self-reported Facebook time use and actual Facebook time use. Recommendations for self-report time-tracking surveys.
“‘I am just terrified of my future’ — Epistemic Violence in Disability Related Technology Research”. Anon Ymous, Katta Spiel, Os Keyes, Rua M. Williams, Judith Good, Eva Hornecker, Cynthia L. Bennett.
Alt-chi! Epistemic violence! Autobiographical researcher experiences of dehumanization! Critical reading for all HCI researchers.
Other CHI 2020 paper compilations
Other lists of CHI’20 papers I’ve come across:
- Ali Alkhatib’s CHI2020 reading list
- Meredith Ringel Morris’s “virtual reading group” on Twitter
- Casey Fiesler’s “intersection of people and technology” reading list
- Vera Liao’s “human-AI interaction” reading list
- Baki Kocaballi’s “conversational user interfaces” reading list
That’s a wrap! There are tons of great papers I didn’t talk about, so go explore the program yourself. Got other papers you think I’d like and may have missed? Have you written a CHI 2020 paper compilation of your own? Let me know!
A bonus “Best Figures” award goes to “Prior Setting in Practice” (Abhraneel Sarma and Matthew Kay).
Zachary Levonian is a PhD student in the GroupLens lab at the University of Minnesota. He has yet to submit to CHI (oops).