About Lana Yarosh

Svetlana “Lana” Yarosh is an Assistant Professor in the Computer Science & Engineering Department at the University of Minnesota. Her research in HCI focuses on embodied interaction in social computing systems. Lana is currently most proud of being one of the inaugural recipients of the NSF CRII award, of her best papers at CHI 2013 and CSCW 2014, and of receiving the Fran Allen IBM Fellowship. Lana has two Bachelor of Science degrees from the University of Maryland (in Computer Science and Psychology), a Ph.D. in Human-Centered Computing from the Georgia Institute of Technology, and two years of industry research experience with AT&T Labs Research.

Tips on Recruiting Study Participants

As HCI work expands outside of traditional computing fields and as we seek to design for users who are not like us, recruiting participants can become a really time-consuming and difficult process. I wanted to share a few approaches that have worked for me in the past, in particular focusing on recruiting in situations where you can’t just sucker undergrads or lab-mates into participating.

  • Do your formative work (such as participatory observation or interviews) with an organized group, if possible. You may have to do some general volunteer work with the group first. This helps establish your legitimacy within this group and you will be able to continue working with both old and new members in the future.
  • Even if you don’t think that your potential participants have a relevant formal organization, try searching meetup.com. There are groups for just about everything and they’re usually happy to have somebody who is interested in their issue as a speaker, so it’s easy to make a connection.
  • Post widely on public sites. I have recruited a lot of participants through craigslist, which has the benefit of being fairly local. If your study can be done without meeting in person, I also recommend posting to forums that are relevant to your topic of interest.
  • Ask widely in your social network to see if anybody can recommend a participant for a specific study. Facebook is actually quite good for this task, but I’ve also found that bringing it up face-to-face gets people to think about it harder. I like to do a lot of looking on my own first, so that I can say “I’m having a hard time finding participants. Here’s what I’ve done so far. Do you have any other ideas?”
  • “Snowball sampling” is when you ask a participant to recommend other possible participants. I find that this works great! My one tip for making it even better is asking the snowball question twice: once when I follow up with the participant to remind them about our scheduled meeting, and again after the study is complete. This gives them a chance to think about it a little bit.
  • Do compensate your participants reasonably for their time and transportation. I have found that it is possible to recruit participants for free, but they often have ulterior motivations for participating which may clash with your study’s needs.
  • If your study can be done in one session and without special equipment (e.g., an interview), take advantage of the times when you are travelling. Just through posts or meetup groups or connections to friends, I usually get an additional 2 or 3 participants when I visit another state. For some reason, just because I’m there for a limited time, people feel more excited about being in the study (“You came all the way to CA to talk to me?”).
  • Lastly, if you need a small number of participants with very specific characteristics, I’ve found that it is worth the money to go through a professional recruiting firm. When I was in Atlanta, I used Schlesinger Associates and I was very happy with the results. I also think that in the end, it led to better data than using friends-of-friends, because the participants didn’t feel a social need to be positive towards my system. But it is expensive.

One thing that I haven’t tried yet, but that could potentially be interesting, is TaskRabbit, a site for posting quick tasks and having people in your area do them for money (so it would only work if you’re compensating participants). If anybody has tried it, I would love to hear about your experience.

How to Get Me to Positively Review Your CHI Paper

Getting a paper accepted into CHI can really be a pain if you’re just starting out. I’ve been there and I sympathize. I try to be positive and constructive in my reviews, but I frequently find myself pointing out the same issues over and over again. I’m going through 7 reviews for CHI right now and there is a lot of good stuff in there, but also a couple of papers making rookie mistakes that make it hard to give the “4” or “5” score. Obviously, other people might be looking for something else, but if you do all of the things below, I would really have no reason to reject your paper.

  • Introduction: keep it brief and to the point, but most importantly, don’t overreach in saying what you’ve done. If you say that your system actually helped kids learn something, I’m going to expect to see an evaluation that supports that claim and I’m not going to be able to accept a paper that only actually measured engagement or preference. So, frame your introduction in a way that sets the expectations right.
  • Related Work: give me an overview of what has been done by others, how your work builds on that, and why what you did is different. At the end of this section, I need to have a good idea of the open problem you are working on, gap you are addressing, or improvement you are making. I actually find it helpful to draft this section before I start the study. If somebody has already addressed the same problem, it’s nice to know about it before you start rather than when you’re writing up your results (or worse, from your reviewers). On a final unrelated note, if I see a paper that only references work from one lab or one conference, I get suspicious about it potentially missing a lot of relevant work. I rarely reject a paper based on that, but it makes me much more cautious when reading the rest and I sometimes do my own mini lit-review if I suspect that there is big stuff missing.
  • Methods: give me enough information to actually evaluate your study design, otherwise I have to assume the worst. For example, if you don’t say that you counterbalanced for order effects, I will assume that you didn’t. If you don’t say how you recruited participants, I will assume that they are all your friends from your lab. If you don’t say how you analyzed your qualitative data, I will assume that you just cherry-picked quotes. The rule of thumb is: can another researcher replicate the study from your description? I will never reject a paper for small mistakes (e.g., losing one participant’s video data, using a slightly inappropriate stat test, limitations of sampling, etc.) as long as it’s honest about what happened and how that affects the findings, but I have said “no” if I just can’t tell what the investigators did.
  • Results: I basically want to see that the results fulfill the promises made in the intro, contribute to the problem/gap outlined in the related work, and are reported in a way that is appropriate to the methods. I’m not looking for the results to be “surprising” (I know other CHI reviewers do, however), but I do expect them to be rigorously supported by the data you present. The only other note on the results section is that I’m not looking for a data dump: I probably don’t need to see the answer to every question you asked and every measure you collected. Stick to the stuff that actually contributes to the argument you are making (both confirming and disconfirming) and the problem at hand.
  • Discussion: this is the section where I feel a lot of papers fall short. I won’t usually reject for this reason alone if the rest of the paper is solid, but a good discussion can lead me to checking that “best paper nomination” check box. Basically, tell me why the community should care about this work. If you have interesting implications for design, that’s good, but it’s not necessary, and there’s nothing worse than implications for design that just restate the findings (e.g., finding: “Privacy is Important,” implication: “Consider Privacy”). When I look at an implication for design (as a designer), I want to have an idea of how I should apply your findings, not just that I should do so. Alternatively, I would like to hear about how your investigation contributes something to the ongoing threads of work within this community. Did you find out something new about privacy in this context that might be interesting or might offer a new way of thinking about privacy in HCI? Does this work bring out some interesting new research directions and considerations (not just, “in the future, we will build this system”)? If you used an interesting/new/unusual method in your work, that could be another thing to reflect on in the discussion, because your approach could be useful to other investigators.

Okay, I have given away all of my reviewing secrets, because I don’t like rating good work down when the paper fails to present it with enough detail, consistency, or reflection. I hope that this is helpful to somebody out there! I say “yes” to a lot of papers already, but I’d like to be able to accept even more.

Review of the 3rd Annual New York Maker Faire


Play with giant tinker toys,
Try your hand at looming your own clothes,
Robots that don’t have to stay dry,
Robots that can learn to fly,
Robots that are made from trash,
Bicycles with wings that flash.

This weekend, I went to the World Maker Faire in Queens and I wanted to share some of the great things that I saw there. I’m not going to be able to talk about everything, but I wanted to cover two major trends that I saw, each encompassing several projects.

One trend that I think is interesting is the focus on making your own toys. With awesome tools like laser cutters, 3D printers, robotics toolkits, or even just scissors and tape, it is becoming more and more possible to make amazing toys without a manufacturer in the loop. Before Lego, the world was your construction toy. I think we can get back there soon, as things like laser cutters become more affordable. Online communities like letsmakerobots and aeroquad and real-life communities like hackert0wn are going to help this happen. While you wait for the hardware to get to your house, try this out: make your own marble maze roller coaster with nothing but a printer, scissors, and tape!

Biomodd uses the excess heat created by a computer to heat a small vegetable greenhouse.

The second trend that I thought was fascinating was the focus on connecting with the natural world. I often think of computing as an indoor activity that is clean and inorganic, but a number of inventors and artists at this event considered places where nature and technology intersect. As just a few examples, a project from T4D Lab uses low-cost sensors to help a grower understand and improve the production of food. Guerrilla Gardening is more a movement and community than a technology, but is a great example of leveraging technology to connect people who care about nature. But my favorite was probably Biomodd, a project focused on highlighting the symbiotic relationship between plants and computers. Cool stuff!

On a final note, this event exemplified my favorite thing about the Maker culture. It brought together artists, craftspeople, scientists, and engineers. In the process of breaking through disciplinary boundaries, it also broke through the boundaries and stereotypes of gender. It was great to see both girls and boys try their hand at making robots and weaving on a loom. I’ll definitely be coming back next year!

P.S. If I talked about your work here and you don’t think that it’s linked or discussed properly, let me know so that we can work together to make it right.