The difficulty of making good creative decisions constrains human productivity. My work asks how computational systems and models of human behavior can help people make such decisions faster and more reliably. My recent work spans four areas:
To automate design, the nuances of design decisions must be represented computationally. This is challenging because even humans often disagree about the decisions that go into a design. During my PhD, I introduced an automatic, grammar-based system for structuring design information DCC’16 and showed how it can be applied to design problems AI EDAM.
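To make the idea of a grammar-based representation concrete, here is a minimal sketch of how design decisions can be encoded as production rules and expanded into a concrete design. The grammar, symbol names, and expansion strategy are invented for illustration; they are not the actual system from the papers.

```python
import random

# Toy design grammar: each non-terminal maps to alternative decompositions.
# Choosing among alternatives models a design decision; terminals are
# concrete components. All names here are hypothetical.
GRAMMAR = {
    "chair": [["seat", "legs", "back"], ["seat", "legs"]],
    "legs": [["four_legs"], ["pedestal"]],
}

def expand(symbol, grammar):
    """Recursively expand a symbol into a flat list of terminal components."""
    if symbol not in grammar:
        return [symbol]  # terminal: no further decisions to make
    alternative = random.choice(grammar[symbol])  # one design decision
    components = []
    for child in alternative:
        components.extend(expand(child, grammar))
    return components
```

Each call to `expand("chair", GRAMMAR)` yields one complete design; enumerating all rule choices instead of sampling would yield the full design space the grammar encodes.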
Crowdsourcing platforms today offer ad hoc, remote work arrangements at an unprecedented scale, yet many of the benefits of traditional jobs have not reached this workforce. The Stanford Crowd Research Collective developed Daemo CSCW’17, a crowdsourcing platform that aims to mitigate many of these challenges. For example, we introduced crowd guilds CSCW’17, inspired by traditional worker guilds, to improve reputation and feedback within crowdsourcing; the Daemo Constitution CI’17, to give workers and requesters agency in platform governance; Boomerang UIST’16, to incentivize honest feedback; and prototype tasks HCOMP’17, to catch flawed tasks before they launch.
Incentives are an effective way to motivate human behavior, but poorly designed incentives often backfire. For example, requiring a minimum number of characters in an online feedback form is more likely to yield feedback padded with spaces than higher-quality responses. With Dilrukshi Gamage, Thejan Rajapakshe, Haritha Thilakarathne, Indika Perera, and Shantha Fernando, I explored how aligning incentives for peer feedback in online learning improves the perceived quality and length of that feedback L@S’17.
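The padding failure mode above can be sketched in a few lines. This is an illustrative example, not code from the study: a naive length gate counts padding whitespace toward the minimum, so a short answer followed by spaces passes, while a gate that normalizes whitespace first does not.

```python
# Hypothetical minimum-length requirement for a feedback form.
MIN_CHARS = 100

def naive_meets_minimum(feedback: str) -> bool:
    # Counts every character, including padding whitespace,
    # so the incentive rewards padding rather than substance.
    return len(feedback) >= MIN_CHARS

def trimmed_meets_minimum(feedback: str) -> bool:
    # Collapses runs of whitespace before counting, defeating space-padding.
    return len(" ".join(feedback.split())) >= MIN_CHARS

# A short comment padded with spaces to clear the naive gate.
padded = "Nice work." + " " * 95
```

Here `naive_meets_minimum(padded)` accepts the padded response while `trimmed_meets_minimum(padded)` rejects it; the deeper point of the work, though, is that aligning incentives beats patching the metric.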
Humans are good at chess; computers are better; humans and computers together have been better still. Yet we are still not very good at getting the best out of this kind of working relationship. Mentoring undergraduate students at CMU and in the Pitt i3 program, I built systems that encourage humans and computers to work together. For example, we explored how socio-technical systems can surface new product development opportunities by computationally analyzing Amazon product reviews iCONF’16, and we studied how hybrid tools can help researchers get started with crowdsourcing faster iCONF’18.