Ambiguity online
Face-to-face, a huge amount of what we communicate is nonverbal — a pause, a glance, a shrug. Online, most of that channel disappears, and the little that remains gets compressed into things like read receipts, likes, profile changes, and emoji. These signals are easy to produce and easy to misread.
With So Yeon Park and Michael Shanks at Stanford, as part of the HPI-Stanford Design Thinking Research Program, we studied these “nonverbal online actions” — how people use them, how others interpret them, and where the gaps between sender and receiver sit (DTR’21). A follow-up looked specifically at what happens when those actions create confusion, and what people do to repair it (DTR’22).
A sharper version of the same problem is synthetic media. With Dilrukshi Gamage, Piyush Ghasiya, Vamshi Bonagiri, and Kazutoshi Sasahara, we analyzed Reddit conversations about deepfakes to see how people actually talk about them — the concerns they raise, the distinctions they draw, and the societal implications that surface in their own words (CHI’22).
Across these projects, the recurring pattern is that online communication is not a thinner version of in-person communication but a different medium with its own ambiguities — and the interesting research question is usually how people navigate those ambiguities, not whether they exist.