About Lana Yarosh

Svetlana “Lana” Yarosh is an HCI Researcher at AT&T Labs Research.

What I Do Explained Using 1,000 Most Common Words

I love xkcd, and one of my favorites is this comic that tries to explain a complicated concept using only the “ten hundred” most common words in English! Even cooler, somebody built a tool to help others do the same thing! I thought I’d try it out with my own research and see how it goes. Here’s the result:

Some of my work is about helping parents and children talk when they do not live together. Some parents are not married or move around a lot for work, so they use the phone to talk to their kids. But phones are boring for kids, so they don’t want to use them. I love making new computer stuff that’s better than the phone, like games and fun stuff to do without needing to be in the same room.

Some of my other work is about helping kids play with each other even if they’re not in the same room. Instead of making games with fighting or killing where the story is already made up by a grown-up, I like making computer stuff and games where the kids have to make up their own stories. That’s better for kids because it helps them learn to be better friends and more interesting people.

I also like making computer stuff that helps grown-ups make good friends and become better people. One time when this is really important is when a person is trying to stop smoking, drinking, or using bad stuff. If they can’t stop on their own, that’s a type of being sick and they need friends who are going through the same thing who can help them. Sometimes they meet these friends in rooms where they tell each other their stories. Other times they meet these friends on the computer. I make computer stuff to help them make their get-better friends and get better together.

All in all, my work is about helping people be friends and become better together.

I thought that the exercise was quite helpful. I would love to hear your experiences and elevator pitch blurbs if you try it out to explain your own work!

Whitelist Chat as a Strategy to Protect Children Online

The ability to interact with other players is one of the most compelling aspects of online multiplayer games. However, in games for young children, there are obvious privacy and safety concerns in allowing unrestricted chatting. The state-of-the-art solution, implemented in practically every online community for kids, is “whitelist” chat (on some websites, combined with live monitoring). Whitelist chat means that only real dictionary words are allowed (bonus: it helps with spelling!) and that certain real words (e.g., names of places and numbers) are excluded from this list. As an enthusiast of children’s online communities, I’ve been fascinated by the many ways that children have found to get around these restrictions. In this blog post, I will use the children’s online game Petpet Park as an example, though honestly it could be any number of communities (Roblox, Club Penguin, etc.).
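
To make the mechanism concrete, here is a minimal sketch of how such a filter might work. This is my own illustration in Python; the word lists and tokenizing rules are made up and far simpler than anything a real site would use:

    # Minimal sketch of whitelist chat: every word in a message must be a
    # real dictionary word, minus certain real words (place names, number
    # words) that are explicitly excluded. All word lists here are stand-ins.
    DICTIONARY = {"my", "name", "is", "i", "am", "read", "only", "first",
                  "letters", "like", "amazing", "nutty", "almonds",
                  "years", "old", "seven", "hello", "friend"}
    EXCLUDED = {"seven"}  # e.g., number words and place names
    ALLOWED = DICTIONARY - EXCLUDED

    def passes_whitelist(message: str) -> bool:
        """Allow a message only if every token is an allowed word."""
        tokens = [t.strip(".,!?\"'") for t in message.lower().split()]
        return all(t in ALLOWED for t in tokens if t)

    print(passes_whitelist("Hello friend"))          # True
    print(passes_whitelist("My name is Lana"))       # False: "lana" is not a dictionary word
    print(passes_whitelist("I am seven years old"))  # False: "seven" is excluded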

In Petpet Park, children play as a creature that does quests around town, plays mini-games, buys clothes and toys, and decorates its own little corner of the world. The website combines a strict set of whitelist restrictions with live monitoring to ensure safe chat. However, kids always find a way!

A screenshot from Petpet Park. Children create creatures that can do quests, play games, and chat with each other.

Creative combination of real words and references to cultural landmarks as a way of conveying real locations, a taboo topic on children’s websites.

Consider the public chat shown above (username removed for privacy). This person has found a creative way of combining real words (“train i dad” = Trinidad) and references to cultural landmarks (“cowboys” = USA, “place with that arch” = St. Louis) to discuss places of residence, a taboo topic on a children’s website. I want to be clear that I am not criticizing Petpet Park. In fact, I was impressed that this discussion was almost immediately shut down by a live monitor (who booted the over-sharing child off the server), but this behavior is quite common, and the damage could be done before a monitor steps in. Here are some common ways that I’ve seen children get around the rules:

  • Typing a single letter on each chat line to spell out a forbidden word. (Not possible on all websites; Petpet Park, for example, does not allow this.)
  • Using the letter “i” to convey numbers. (e.g., “I am i i i i i i i years old”)
  • Spelling out a non-whitelisted word using the first letters of allowed words. (e.g., “Read only first letters. My name is Like Amazing Nutty Almonds.” See the sketch after this list.)
  • Using the meanings of allowed words to convey forbidden information. (e.g., “I go to the school that’s named after the guy that flew the kite.”)
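
Running the toy filter sketched above against the acrostic trick shows exactly where word-level whitelisting breaks down: every individual word is a legal dictionary word, so the filter has nothing to object to, even though the message smuggles out a name (again, purely illustrative code):

    # Each word in the acrostic is a legitimate dictionary word, so the
    # word-level whitelist passes the message even though the first letters
    # of the last four words spell out a name. Reuses passes_whitelist()
    # from the earlier sketch.
    acrostic = "My name is Like Amazing Nutty Almonds"
    print(passes_whitelist(acrostic))                   # True: every word is allowed
    print("".join(w[0] for w in acrostic.split()[3:]))  # prints "LANA"

Catching this kind of trick requires reasoning across words (or across messages), which is exactly where live monitoring comes in.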

The point that I want to make is that no online community is going to be 100% safe. In particular, the state-of-the-art whitelist strategy is only effective when augmented with live monitoring (and even then, it may be too little, too late). Safe chat is not a replacement for parental engagement and keeping open lines of communication about online rules. The other thing to remember is that no online filter will ever be able to enforce empathy and kind interaction online, or be able to protect a child from being excluded or hurt by others. Both of these conversations, about online rules and about kindness, should be an important part of raising digital citizens.

Designing Technology for Major Life Events Workshop

High emotional impact and the value of the journey are two big aspects of designing tech for major life events.

While at CHI, I got the wonderful opportunity to help organize the workshop on Designing Technology for Major Life Events along with Mike Massimi, Madeline Smith, and Jofish Kaye. We had a great group of HCI researchers with a diverse range of topics: gender transition, becoming a parent, dealing with a major diagnosis, bereavement, and more. My own interest in the topic grew from my experience designing technology for divorce and technology for recovery from addiction. In one of the breakout groups, we discussed the challenges of designing technology in this space and some of the ways we’ve dealt with these challenges in our work. In this post, I want to highlight a few of these:

Building Tech is Risky. Building a system requires the designer to commit to specific choices, and it’s easy to discover something that wasn’t adequately considered after the fact. In tech for major life events, this challenge can be exacerbated because the consequences of a failed design might have big emotional repercussions (e.g., tech messing up some aspect of a wedding). Sometimes, there is a real question of whether we should even try to bring tech into a given context.

Ethics of Limited Access. Building technology to support a major life event may mean excluding those without the financial means, skills, motivation, language, etc. to use the provided intervention. Additionally, we frequently stop supporting a prototype technology at the end of the study, which can be really problematic if it was providing ongoing benefits to the participants. Again, because of the high stakes involved, ethical issues of access to technology may be exacerbated when designing for major life events.

Tension Between Building Your Own and Leveraging Existing. Many systems we build require some critical mass of adoption before they are really useful. This is particularly important with tech for major life events because there may be relatively few people facing a particular context at any point in time. One way to deal with this is to piggyback on existing systems (e.g., building a Facebook app instead of a new SNS), but this may cause problems when the underlying technology changes in ways outside of the researcher’s control (e.g., privacy policies change, APIs stop being supported, etc.).
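
One defensive pattern for this last problem is to hide the third-party service behind a thin interface of your own, so that a vendor change becomes a one-class fix rather than a rewrite. Here is a minimal sketch of that idea; the class names and stub calls are hypothetical, not code from any of our actual systems:

    # Sketch: wrap third-party video-chat services behind one small
    # interface so the rest of the system never calls a vendor API directly.
    # The providers below are stubs; real ones would call the vendor SDKs.
    from abc import ABC, abstractmethod

    class VideoProvider(ABC):
        @abstractmethod
        def start_call(self, peer_id: str) -> None: ...

        @abstractmethod
        def end_call(self) -> None: ...

    class SkypeProvider(VideoProvider):
        def start_call(self, peer_id: str) -> None:
            print(f"[skype stub] calling {peer_id}")

        def end_call(self) -> None:
            print("[skype stub] hanging up")

    class TokBoxProvider(VideoProvider):
        def start_call(self, peer_id: str) -> None:
            print(f"[tokbox stub] calling {peer_id}")

        def end_call(self) -> None:
            print("[tokbox stub] hanging up")

    def make_provider(name: str) -> VideoProvider:
        # Swapping vendors is now a configuration change, not a rewrite.
        return {"skype": SkypeProvider, "tokbox": TokBoxProvider}[name]()

    session = make_provider("tokbox")
    session.start_call("remote-parent")
    session.end_call()

This is the kind of under-the-hood flexibility that, as mentioned below, made it easy for the ShareTable to switch from the Skype API to the TokBox API.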

Asking the Right Questions about the System You Built. The final challenge is understanding what kinds of questions to ask during the system evaluation. On one hand, it is important to go into the evaluation with some understanding of what it would mean for the system to be successful and the claims you hope to make about its use. On the other hand, it is valuable to be open to seeing and measuring unintended side effects and appropriations of the technology.

I think my two major take-aways from this discussion were a greater appreciation of how difficult it is to actually build something helpful in this space, and the insight that many of these problems can be partially addressed by getting away from the type of study that focuses on evaluating a single system design using a small number of metrics. The risks of committing to a specific design solution can be mitigated by providing multiple versions of the intervention, either to be tested side-by-side or to let participants play around until they decide which solution is a better option for them. The ethics of access can be ameliorated by providing low-tech and no-tech means of achieving the same goals that your high-tech approach supports (e.g., Robin Brewer built a system to let the elderly check email using their landline phones). Planning for multiple solutions when building on others’ APIs can lead to a much more stable final system (e.g., in the ShareTable, we could easily switch from the Skype API to the TokBox API for the face-to-face video). And lastly, the problem of figuring out what to ask during and after a system deployment can be addressed by combining quantitative methods that measure specific predicted changes with qualitative methods of interviewing and observation that are more open to on-the-fly redirection during the course of the study. Overall, diversity of offered solutions, flexibility under the hood of your systems, and diversity of methods in the evaluation lead to a stronger study and a better understanding of the target space.