87% of People Got This Question about Their Door Lock Wrong!

You drive home and park. Your car is full of groceries and other shopping, which take many trips to bring into the house. Five minutes after you drove in, you are still making trips to the car. Is the door locked or unlocked? What if I told you that 87% of people got this question wrong? Sensors and “smart” devices for your home may hold the promise of making life more convenient, but they may also make it harder to understand and predict things like the state of your “smart” door lock in common situations like the one above.

The main issue at hand is “feature interaction.” This is the idea that some of the features of your future smart home may want one thing (e.g., door locked for security), while others may want another (e.g., door unlocked for convenience). Software engineers who program future smart homes must come up with a clear set of rules for a device like a smart door lock so that it always behaves in clear and predictable ways. But what is clear and predictable to a computer may not be clear and predictable to a person. My collaborator (Pamela Zave from AT&T Labs Research) and I found this out the hard way by running a study asking people to predict the state of their door lock in scenarios like the one above, based on three rules applied to interactions among four features. None of the people in our study got all the questions right (and the one who got the closest was a lawyer!). See how you do by taking the 15-question quiz below:

How did it go? People in our study made some common mistakes that we describe in our paper. The bottom line is that “feature interaction” resolution rules that are simple for computers may require more effort for humans to understand. We think at the core of this may be a mismatch between logic and intuition. People intuit that an automated smart door lock should err on the side of keeping their door locked even in situations where it may be more convenient (and more similar to a regular non-smart door lock) to keep the door unlocked. It is important for researchers from multiple fields to work together to understand people’s intuitions and errors before programming future home systems, so that we won’t be left wondering whether our door is locked or unlocked!
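To make the flavor of such resolution rules concrete, here is a minimal sketch of priority-based feature interaction resolution for a door lock. The features, priorities, and situation fields below are my own invention for illustration; they are not the actual rules or features from the study.

```python
# Hypothetical sketch of priority-based feature interaction resolution
# for a smart door lock. Each "feature" maps a situation to a desired
# lock state: "locked", "unlocked", or None (no preference).

def security_feature(situation):
    # Security wants the door locked whenever nobody is at the door.
    return None if situation["person_at_door"] else "locked"

def convenience_feature(situation):
    # Convenience wants the door unlocked while the owner is nearby
    # (e.g., unloading groceries from the car).
    return "unlocked" if situation["owner_nearby"] else None

# Features listed from highest to lowest priority.
FEATURES = [security_feature, convenience_feature]

def resolve(situation, default="locked"):
    """Return the preference of the highest-priority feature that
    expresses one; fall back to the default otherwise."""
    for feature in FEATURES:
        preference = feature(situation)
        if preference is not None:
            return preference
    return default

# The grocery-unloading scenario: owner nearby, nobody at the door.
print(resolve({"person_at_door": False, "owner_nearby": True}))  # prints: locked
```

Note how tidy this is for a computer: security outranks convenience, so the grocery-unloading scenario resolves to "locked" even though the owner is making repeated trips to the car. Whether that outcome matches a person's intuition is exactly the question the quiz probes.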

Want more detail? Check out our CHI 2017 Publication.

Purpose, Visibility, and Intersubjectivity in Video-Mediated Communication Technologies

Video-mediated communication may be able to benefit from a number of novel technologies, but designing for a good experience requires considering purpose, visibility, and intersubjectivity for both partners.

Skype, Google Hangouts, Facetime, and ShareTable are all examples of real-time video-mediated communication technologies. Designing, implementing, and deploying novel systems of this sort is a big research priority for me, and every semester I get a few entrepreneurial students approaching me with ideas for cool new technology to try in this space: robots, virtual reality, augmented reality, projector-camera systems, and more. Frequently, I ask them to consider a few things first, and if you’re new to thinking about computer-mediated communication, these may be helpful for you as well (many of these ideas come from my work with play over videochat).

In this case, let’s assume the “base case” of two people—Alice and Bob—using a potential new technology to communicate with each other (though the questions below can definitely be expanded to consider multi-user interfaces). Consider:

  1. (Purpose) Why is Alice using this technology? Why is Bob? The answer should be specific (e.g., not just “to communicate,” but “to plan a surprise party for Eve together”) and may be different for the two parties. It’s good to come up with at least three such use cases for the next questions.
  2. (Visibility) What does Alice see using this technology? What does Bob see? Consider how Alice is represented in Bob’s space and how Alice can control her view (and then flip it and consider the same things for Bob). Consider whether this is appropriate for their purposes. For example, maybe Alice is wearing VR goggles and controlling a robot moving through Bob’s room. It’s cool that she can see 360-degree views and control her gaze direction, but what does Bob see? Does he see a robot with a screen that shows Alice’s face encased in VR goggles? Does this achieve a level of visibility that is appropriate for their purpose?
  3. (Intersubjectivity) How does Alice/Bob show something to the other person? How does Alice/Bob understand what the other person is seeing? The first important case to consider is how Alice/Bob bring attention to themselves and how they know if their partner is actually paying attention to them. If Alice is being projected onto a wall but the camera for the system is on a robot, it will likely be difficult for her to know when Bob is looking at her (i.e., when he’s looking at the wall display, it will seem that he’s looking away from the camera). It’s also useful to consider the ability to refer to other objects. Using current videochat, this is actually quite hard! If Alice points at her screen toward a book on the shelf behind Bob, Bob would have no idea where she’s pointing (other than generally behind him). Solving this is hard—it’s definitely an open problem in the field—but the technology should at least address it well enough to support the scenarios posed in question 1.

Generally, I find that new idea pitches tend to propose inventions that provide a reasonable experience for Alice but a poor one for Bob. It is important to consider the purpose, visibility, and intersubjectivity of the experience for both of them in order to conceive a system that is actually compelling.

Availabowls and Other Uses of Bowl-and-Pebble Systems

I’ve always been kind of obsessed with interacting with abstract digital ideas using simple physical metaphors. One of my projects in this space recently got some press coverage from Fast Company, so I thought that I’d share and offer a bit more detail.

A bowl with physical “pebbles” can be a more intuitive way of managing complex settings.

The issue is that there are a lot of complex settings with many facets that are a pain to manage by using checkboxes in a settings panel. I use privacy as an example in a system I call “Availabowls.” Currently, the granularity allowed for specifying availability is often a bit all-or-nothing, but there may be a lot of nuance in how you want to manage access to self. For example:

  • I may want to be seen as available to some people/groups and not others
  • I may want to be contactable via some technologies and not others
  • I may want to specify a level of busyness between “available” and “not available” and let the sender decide if their issue is important enough to disrupt me

It would be a nightmare to manage each of these settings in a control panel every time I come home. Literally, I have nightmares where I’m swarmed by radio boxes and checkbox panels! Instead, I imagine that a physical token can be used to represent a specific setting. For example, pebbles represent one unit of “busyness,” small blocks represent people or groups of people, plastic tokens represent communication media. Now, to set my privacy settings, I just have to transfer objects between a green bowl (on) and a red bowl (off) when I get home. There’s more about all this in the Fast Company article.
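As a sketch of the underlying model, the whole state of the system is just which tokens sit in the green bowl versus the red one. The token names below are invented for illustration; this is not code from the actual project.

```python
# Minimal sketch of the bowl-and-pebble state model.
# Token names ("family", "phone", "busy-1", ...) are hypothetical.

class Availabowl:
    def __init__(self, tokens):
        self.green = set()         # tokens in the "on" bowl
        self.red = set(tokens)     # tokens in the "off" bowl (all off initially)

    def move_to_green(self, token):
        # Physically dropping a token into the green bowl turns it on.
        self.red.discard(token)
        self.green.add(token)

    def move_to_red(self, token):
        self.green.discard(token)
        self.red.add(token)

    def is_on(self, token):
        return token in self.green

# Example: coming home, I make myself reachable by family via phone,
# and signal one unit of busyness.
bowl = Availabowl(["family", "coworkers", "phone", "email", "busy-1", "busy-2"])
bowl.move_to_green("family")
bowl.move_to_green("phone")
bowl.move_to_green("busy-1")
print(bowl.is_on("family"))  # prints: True
print(bowl.is_on("email"))   # prints: False
```

The appeal is that each physical move is exactly one set operation: no nested menus, and glancing at the bowls reads out the full state.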

But really, I imagine that a physical pebble-and-bowl system of this sort could be useful for other complex settings. For example, pebbles may represent separate elements of my security system, making it easy to dis/arm everything but also easy to just leave specific elements disarmed (e.g., the back door while I’m having a family BBQ). Or they may represent different eco-friendly subroutines in the house, such as ones that turn off lights in unused rooms, control the house temperature, run the outdoor sprinklers, etc., again allowing me to pick only the things that I really want to have on right now.

Advantages? I think this is easier to deal with than checkbox panels. It’s easier to have a pebble-and-bowl system “live” where you would most likely be making these changes (e.g., by the front door) than a computer or a tablet. It doesn’t feel like a computer, so it won’t freak out grandma. It’s glanceable or even potentially eyes-free if each pebble has a unique shape.

Disadvantages? There’s only one physical pebble-and-bowl object, and I see no elegant way to sync its state if you wanted to make changes to your settings remotely or to have multiples of these in a big house.

Is this something that appeals to others? Or am I unique in my checkbox-panel-phobia?