About Lana Yarosh

Svetlana “Lana” Yarosh is an Assistant Professor in the Computer Science & Engineering Department at the University of Minnesota. Her research in HCI focuses on embodied interaction in social computing systems. Lana is currently most proud of being one of the inaugural recipients of the NSF CRII award, of her best papers at CHI 2013 and CSCW 2014, and of receiving the Fran Allen IBM Fellowship. Lana has two Bachelor of Science degrees from the University of Maryland (in Computer Science and Psychology), a Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, and two years of industry research experience with AT&T Labs Research.

SqueezeBands: Hugging Through the Screen

A woman raises her hand towards a webcam during a videochat with a friend. Her hand is encased in a cloth device with shape memory alloy springs.

Lucy and Jackie demonstrate using SqueezeBands to send a high five! The camera detects mutual gestures like this one and creates a sensation of touch by squeezing and heating each person’s hand band.

When I Skype with my family, I really wish that I could reach through the screen to give them a hug! Instead, we sometimes have to pretend—we lean forward “hugging” the monitor or bring our hands towards the camera to do a virtual “high five.” What if you could actually feel some of that touch instead of just having to imagine it?


87% of People Got This Question about Their Door Lock Wrong!

You drive home and park. Your car is full of groceries and other shopping, which take many trips to bring into the house. Five minutes after you drove in, you are still making trips to the car. Is the door locked or unlocked? What if I told you that 87% of people got this question wrong? Sensors and “smart” devices for your home may hold the promise of making life more convenient, but they may also make it harder to understand and predict things like the state of your “smart” door lock in common situations like the one above.

The main issue at hand is “feature interaction.” This is the idea that some of the features of your future smart home may want one thing (e.g., door locked for security), while others may want another (e.g., door unlocked for convenience). Software engineers who program future smart homes must come up with a clear set of rules for a device like a smart door lock so that it always behaves in clear and predictable ways. But what is clear and predictable to a computer may not be clear and predictable to a person. My collaborator (Pamela Zave from AT&T Labs Research) and I found this out the hard way by running a study asking people to predict the state of their door lock in scenarios like the one above, based on three rules governing the interactions among four features. None of the people in our study got all the questions right (and the one who got the closest was a lawyer!). See how you do by taking the 15-question quiz below:
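To make the idea of feature interaction concrete, here is a minimal sketch of one common resolution strategy, priority-based arbitration, where the highest-priority feature with an opinion decides the lock state. The feature names, priorities, and rules below are hypothetical illustrations, not the actual rules from our study:

```python
# Hypothetical sketch of priority-based feature interaction resolution.
# The features, priorities, and thresholds are invented for illustration.

class Feature:
    def __init__(self, name, priority):
        self.name = name
        self.priority = priority  # higher number wins conflicts

    def wants(self, context):
        """Return 'locked', 'unlocked', or None (no opinion)."""
        return None

class AutoLock(Feature):
    # Security feature: lock the door a few minutes after it closes.
    def wants(self, context):
        if context["minutes_since_door_closed"] >= 5:
            return "locked"
        return None

class UnloadingMode(Feature):
    # Convenience feature: keep the door unlocked while the resident
    # is making trips to and from a recently parked car.
    def wants(self, context):
        if context["resident_near_car"]:
            return "unlocked"
        return None

def resolve(features, context, default="locked"):
    """The highest-priority feature with an opinion determines the state."""
    opinions = [(f.priority, f.wants(context)) for f in features]
    opinions = [(p, w) for p, w in opinions if w is not None]
    if not opinions:
        return default
    return max(opinions)[1]

features = [AutoLock("auto_lock", priority=1),
            UnloadingMode("unloading", priority=2)]

# Five minutes after parking, still carrying groceries:
state = resolve(features, {"minutes_since_door_closed": 5,
                           "resident_near_car": True})
```

In this sketch the convenience feature outranks auto-lock, so the door stays unlocked while you unload, which is precisely the kind of outcome that is trivial for the software to compute but easy for a person to mispredict.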

How did it go? People in our study made some common mistakes that we describe in our paper. The bottom line is that “feature interaction” resolution rules that are simple for computers may require more effort for humans to understand. We think at the core of this may be a mismatch between logic and intuition. People intuit that an automated smart door lock should err on the side of keeping their door locked even in situations where it may be more convenient (and more similar to a regular non-smart door lock) to keep the door unlocked. It is important for researchers from multiple fields to work together to understand people’s intuitions and errors before programming future home systems, so that we won’t be left wondering whether our door is locked or unlocked!

Want more detail? Check out our CHI 2017 Publication.