About Lana Yarosh

Svetlana “Lana” Yarosh is an Assistant Professor in the Computer Science & Engineering Department at the University of Minnesota. Her research in HCI focuses on embodied interaction in social computing systems. Lana is currently most proud of being one of the inaugural recipients of the NSF CRII award, of her best papers at CHI 2013 and CSCW 2014, and of receiving the Fran Allen IBM Fellowship. Lana has two Bachelor of Science degrees from the University of Maryland (in Computer Science and Psychology), a Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, and two years of industry research experience with AT&T Labs Research.

Why work with AA and NA?

“Aren’t you encouraging people to join a cult?” somebody asked me at a conference. A lot of my recent research focuses on supporting people in recovery from Substance Use Disorders (SUDs – colloquially known as ‘alcoholism’ or ‘addiction’). Several of my projects in this space have looked at the role of technology in supporting engagement with 12-step groups, like Alcoholics Anonymous (AA) and Narcotics Anonymous (NA). I get a lot of flak and questions about this — sometimes researchers from other areas actively encourage me to drop this line of research. Many point to The Atlantic article on the topic of Alcoholics Anonymous, which positions 12-step programs somewhere between a scam and a cult. I have this conversation so often that I thought I would write a blog post answering some of the questions that I frequently get.

[12-Step Programs Are for Recovery Maintenance]. “Are 12-step programs effective treatment for SUDs?” No, 12-step programs are not a medical treatment or a detox plan! However, most people with substance use disorders cannot just detox, spend a month or two in treatment, and then expect to stay clean for the rest of their lives. Some sort of maintenance program is necessary to avoid relapse, and 12-step programs are one example. Recovery maintenance may last for the rest of a person’s life. While not detox or treatment, 12-step programs provide critical social support during and after treatment. Supporting maintenance is a particularly good entry point for technology because most of a recovering person’s program will be spent in this largely self-directed and unguided maintenance stage — scalable solutions are needed. There are many great opportunities for making technology that supports communication with a support network and behavior tracking/change, both of which are active research areas in HCI.

[12-Step Programs Help People Recover]. “Okay, do 12-step programs work as a ‘maintenance program’?” Since 12-step programs are NOT treatment, we should not be comparing their effectiveness to treatment, but rather seeing whether people using 12-step programs as maintenance are more likely to stay in recovery than those who are not. It is incredibly difficult to investigate causality here because of cross-over effects (e.g., you can’t do an RCT assigning somebody NOT to go to AA — some people from the control group will still go). However, there are statistical ways of controlling for this. After controlling for crossover, the bottom line is that people who attend more meetings have more days of abstinence. There’s a really good video from Healthcare Triage that goes into the details of one study in this space and its findings here:

But, it is not at ALL surprising that a social support group would help people achieve behavior change! Peer support groups are common for other kinds of behavior change as well — for example, Weight Watchers meetings or the step-count competitions hosted by your FitBit app. This is why the NIH includes 12-step facilitation (encouraging people to go to meetings) as an important part of treatment programs. 12-step programs are still part of best practice in helping people find a path to recovery.

[12-Step Programs Are Where People Are]. “But, why a 12-step maintenance program, why not something more science-based like SmartRecovery?” It would be so awesome if there were more social support alternatives and maintenance programs available to people in recovery. But there are only six SmartRecovery meetings listed for the entire state of Minnesota — only one in Minneapolis. Compare this to the hundreds of 12-step meetings (e.g., AA, NA) in the Twin Cities area. Working with people in recovery for the last five years, I have found that most people do leverage 12-step programs to some extent. For example, even when trying for a program-agnostic approach in a participatory design study with women in a sober home, many of their ideas for supporting their own recovery connected to the practices of 12-step groups, such as meeting attendance, service, and sponsorship. As a human-centered technologist, I believe in meeting people where they are. Right now, they are in 12-step groups, and I’m not going to ignore that just because the field is also exploring other approaches. The cool thing is that many of the technical opportunities for supporting a variety of programs including 12-Step, SmartRecovery, Lifering, Women in Recovery, etc. are actually quite similar: all need help with grassroots organizing of meetings, helping people find meetings, building a strong support network, and leveraging online resources to offer meetings to more people (e.g., InTheRooms.org hosts all sorts of these meetings on the same site). If we can help solve these general problems, all social support recovery maintenance programs can benefit.

[12-Step Programs Are Grassroots]. “Okay, but even if it works for SOME people, some 12-step meetings are just terrible [insert personal story], how can you be part of that?” If you or your family member had a negative experience, I am really sorry to hear that! Anybody can start a 12-step meeting, and each meeting has a different “flavor” based on the personalities of the people who attend. Some people may have a negative experience in a meeting (e.g., hear a very religious take on the program or hear somebody share a negative opinion about medically-assisted recovery) and are then turned off of these programs for life! But, these opinions are not inherent parts of 12-step programs or traditions (and in fact, run counter to the principles of 12-step programs as stated in their literature). The mixed feelings that people have about how some 12-step groups run their meetings are exactly WHY we should engage to help people find better options. By providing a larger “marketplace” of available meetings, we can allow people to find groups that work for them. This is why a lot of our work focuses on helping people find new meetings, both in-person and online.

The Atlantic article was written as a response to people saying that AA/NA is the best or the only way to find recovery! That is not what I’m saying at all. Personally, I totally think that the Computer Science community should ALSO explore approaches that focus on preventing SUDs (e.g., VR for reducing perceptions of pain to reduce prescription opioids), harm reduction (e.g., apps for finding needle exchanges, locating nearby people with Naloxone to save somebody who is overdosing), medically-assisted treatment (e.g., data science to better understand effects of MATs), and cognitive behavioral therapy approaches (e.g., CBT worksheet apps, thought reframing tools). We should be doing all of these things too (and, in fact, my lab is involved in several of these initiatives). There is so much work here and so much potential to do good! But, I am not going to brush off and ignore a powerful tool in the toolbox of a recovering person.

So, yes, my lab will keep engaging with approaches that connect with 12-step groups. I’m not going to stop until more participants in our studies find alternatives that work for them. I am going to continue to build technology that helps people find recovery, whatever path they choose, without judgment.

Giving Back to the Participants: When You Can’t Leave a Working System

My research depends on participants volunteering their time, energy, and personal stories. They are informants, co-designers, beta testers, and content experts who contribute so many insights, and yet they frequently remain anonymous in the publications resulting from the work. Like many other researchers, I struggle with how I can give something back (beyond free pizza or gift cards for participation).

One of my favorite readings on the topic of “giving back” is the Le Dantec & Fox CSCW 2015 paper on community-based research, which examined how institutions may exploit vulnerable communities (despite having the best intentions) without actually contributing back anything of value to the participants. University research may appear to promise things like policy and institutional change, increased access to resources, or (in the case of HCI) working technologies that address user needs. The truth is that these are almost never delivered in the form, or at a time, that can directly benefit study participants. I know that for my research, I have no source of funding that would support developing and maintaining a production-quality system beyond the initial novel prototypes in constrained deployments. So is there a way to give back that doesn’t over-promise? Here are five strategies that I’ve tried in my work:

PARTNERING with an ORGANIZATION (which is willing to develop and maintain prototype ideas): If your research is done in collaboration with an existing technology company, your findings and prototypes can be directly integrated into their services without the researcher needing to coordinate long-term maintenance (in turn, the company gets a free research team and initial development of cool feature ideas). This is currently our approach in our collaboration with CaringBridge.org, as our findings are directly driving new features and directions (this work is largely in progress, but here are some publications). Obviously, these partnerships can take a lot of time and effort to set up and manage, but I really think this is a great pathway to positive impact.

STUDY PARTICIPATION as BENEFIT: In many cases, being part of a participatory design process can tangibly benefit the co-designers. For some good reading, I love the way the KidsTeam at University of Maryland has reflected on the perceived benefits of and ethical considerations for co-design with children. It is certainly important to think about the possible benefits up-front and consider how they can be amplified and measured. If the benefits are pedagogical, this may mean thinking through the specific learning outcomes holistically and for each session. When planning my studies, I consider how being part of it may help participants identify new resources that may personally help them, develop actionable resilience skills, etc. This does typically mean that I can’t accomplish my studies as a single design workshop, instead requiring multiple sessions for the partnership to be able to yield benefits both to the researchers and to the participants.

LEAVING BEHIND STOP-GAP SOLUTIONS (with currently-available technologies): While you can’t usually leave behind and maintain a fully functional prototype system, there may be great opportunities to leave behind solutions that may be better than what the family had available in the past. For example, after running a study with a novel prototype that I had to remove, I left behind dedicated smartphones set up for easy videochat (just off-the-shelf phones with Skype installed). This wasn’t as good as my prototype, but it was much better than the previous audio-only setup that the family used. This did require additional resources (in this case, phones), but it’s worth it if it leaves the participants better off than before you got there.

AUTHORSHIP or ACKNOWLEDGEMENT: This one doesn’t apply if you’re working with communities that face stigma or want to remain anonymous, but there are cases where it may be ethical to credit creators or acknowledge co-designers. We are currently working on a project with middle school co-designers, where all contributors will be able to choose to be listed as co-authors. After all, they spent a whole year with us learning how to be designers and coming up with ideas — they should get something that they can put on their resumes for college! This does require thinking it through at study-planning time so that a special check box can be included on the consent form.

DISSEMINATING INFORMATION on BEST PRACTICES (with currently-available technologies): After doing a formative study, you may be in a really good position to share best practices for specific challenges. For example, after completing my research on parent–child communication in separated families, I wrote a blog post on practices, strategies, and tools that may help families make the best use of currently-available systems. I distributed this to my participants directly, but also left it online for any other families who may be looking for ideas or advice. In another example, I led a webinar for medical practitioners to understand the technologies and practices that work well for people in recovery. Since medical practitioners are often the first “line of defense” for people in recovery, this is a great way to disseminate potentially-helpful information. The good news is that this approach is also really consistent with, and rewarded by, the NSF expectations for Broader Impact.

Are there practices that you’ve tried in your research that have helped you give back to your participants? I’d love to hear other ideas, because I really think this is a fundamental tension and challenge in HCI research.

Note: This blog post started as a conversation at the CSCW 2018 Workshop on “Conducting Research with Stigmatized Populations.” Stigmatized participants have greater risks from participating in research, so thinking about ways of amplifying benefit using strategies above is particularly critical. However, there is no reason why we can’t think about amplifying benefit for all populations!

Of Children and Artificial Intelligence Agents

“Hey Google, tell me a joke!” If you share your home with a child and any sort of a voice assistant (Google Home, Cortana, Alexa, Siri), you may be used to hearing this sort of request. This request typically yields a groan-worthy pun, but alternative joke requests like “Do you know any good ones?,” “Knock, knock,” or “Make me laugh,” may receive the reply “I don’t know how to help with that.” It’s a shame that making artificial intelligence agents (like voice assistants) understand what you want can be so hard, because voice assistants may help children remain curious about their world and help them find answers to questions without needing to first learn spelling and typing. To find out how we can help kids use voice assistants, we asked 87 children (ages 5 – 12) to try out three prototype voice assistants at the Minnesota State Fair research facility and tell us what they thought.

Child points at a speaker while a researcher watches him.

A young child identifying his favorite interface during the study and describing why he likes it.

First, we wanted to know whether it mattered how the prototype AI agent talked about itself and the child. Personified agents referred to themselves with a name and used the “I” pronoun (similar to Alexa), while non-personified ones just asked for a question to answer (similar to Google search in your browser). Personalized agents referred to the child by name, but non-personalized ones didn’t know anything about the child. It turns out that children had a strong preference for personified interfaces, but didn’t really care if the interface knew anything about them. In fact, some kids found it “creepy” if the agent knew their name and age!

Second, we wanted to know how children reacted when the voice assistant had trouble figuring out what they meant. We asked each child to puzzle out an answer to a question about the State Fair. To get the voice assistant to understand the question, they would have to change how the question was worded or divide it into multiple parts. This was hard for kids! Most kids just tried to say the question louder or substitute synonyms for specific words. It took them a few tries to find a strategy that worked (and many younger kids couldn’t do it without a grown-up helping). The problem is that current voice assistants don’t provide a lot of clues about why they’re having trouble understanding a request. Thanks to the help of the kids in our study, we were able to come up with lots of ideas about how voice assistants can be better and more useful to children and families.

There are a lot more findings in our paper, which has recently received the Best Paper Award at IDC 2018. This work was made possible with funding from Mozilla Research Grants and Google Faculty Research Awards.