Tips on Recruiting Study Participants

As HCI work expands outside of traditional computing fields and as we seek to design for users who are not like us, recruiting participants can become a really time-consuming and difficult process. I wanted to share a few approaches that have worked for me in the past, in particular focusing on recruiting in situations where you can’t just sucker undergrads or lab-mates into participating.

  • Do your formative work (such as participant observation or interviews) with an organized group, if possible. You may have to do some general volunteer work with the group first. This helps establish your legitimacy within the group, and you will be able to continue working with both old and new members in the future.
  • Even if you don’t think that your potential participants have a relevant formal organization, try searching meetup.com. There are groups for just about everything and they’re usually happy to have somebody who is interested in their issue as a speaker, so it’s easy to make a connection.
  • Post widely on public sites. I have recruited a lot of participants through craigslist, which has the benefit of being fairly local. If your study can be done without meeting in person, I also recommend posting to forums that are relevant to your topic of interest.
  • Ask widely in your social network to see if anybody can recommend a participant for a specific study. Facebook is actually quite good for this task, but I’ve also found that bringing it up with people face-to-face gets people to think about it harder. I like to do a lot of looking on my own first, so that I can say “I’m having a hard time finding participants. Here’s what I’ve done so far. Do you have any other ideas?”
  • “Snowball sampling” is when you get a participant to recommend other possible participants. I find that this works great! My one tip for making it even better is asking the snowball question twice: once when I follow up with the participant to remind them about our scheduled meeting, and again after the study is complete. This gives them a chance to think about it a little bit.
  • Do compensate your participants reasonably for their time and transportation. I have found that it is possible to recruit participants for free, but they often have ulterior motivations for participating which may clash with your study’s needs.
  • If your study can be done in one session and without special equipment (e.g., an interview), take advantage of the times when you are travelling. Just through posts or meetup groups or connections to friends, I usually get an additional 2 or 3 participants when I visit another state. For some reason, just because I’m there for a limited time, people feel more excited about being in the study (“You came all the way to CA to talk to me?”).
  • Lastly, if you need a small number of participants with very specific characteristics, I’ve found that it is worth the money to go through a professional recruiting firm. When I was in Atlanta, I used Schlesinger Associates and I was very happy with the results. I also think that in the end, it led to better data than using friends-of-friends, because the participants didn’t feel a social need to be positive towards my system. But, it is expensive.

One thing that I haven’t tried yet, but that could potentially be interesting, is using TaskRabbit, which is a site for posting quick tasks and having people in your area do them for money (so it would only work if you’re compensating participants). If anybody has tried it, I would love to hear about your experience.

How to Get Me to Positively Review Your CHI Paper

Getting a paper accepted into CHI can really be a pain if you’re just starting out. I’ve been there and I sympathize. I try to be positive and constructive in my reviews, but I frequently find myself pointing out the same issues over and over again. I’m going through 7 reviews for CHI right now and there is a lot of good stuff in there, but also a couple of papers making rookie mistakes that make it hard to give the “4” or “5” score. Obviously, other people might be looking for something else, but if you do all of the things below, I would really have no reason to reject your paper.

  • Introduction: keep it brief and to the point, but most importantly, don’t overreach in saying what you’ve done. If you say that your system actually helped kids learn something, I’m going to expect to see an evaluation that supports that claim and I’m not going to be able to accept a paper that only actually measured engagement or preference. So, frame your introduction in a way that sets the expectations right.
  • Related Work: give me an overview of what has been done by others, how your work builds on that, and why what you did is different. At the end of this section, I need to have a good idea of the open problem you are working on, gap you are addressing, or improvement you are making. I actually find it helpful to draft this section before I start the study. If somebody has already addressed the same problem, it’s nice to know about it before you start rather than when you’re writing up your results (or worse, from your reviewers). On a final unrelated note, if I see a paper that only references work from one lab or one conference, I get suspicious about it potentially missing a lot of relevant work. I rarely reject a paper based on that, but it makes me much more cautious when reading the rest and I sometimes do my own mini lit-review if I suspect that there is big stuff missing.
  • Methods: give me enough information to actually evaluate your study design, otherwise I have to assume the worst. For example, if you don’t say that you counterbalanced for order effects, I will assume that you didn’t. If you don’t say how you recruited participants, I will assume that they are all your friends from your lab. If you don’t say how you analyzed your qualitative data, I will assume that you just cherry-picked quotes. The rule of thumb is: can another researcher replicate the study from your description? I will never reject a paper for small mistakes (e.g., losing one participant’s video data, using a slightly inappropriate stat test, limitations of sampling, etc.) as long as the paper is honest about what happened and how that affects the findings, but I have said “no” if I just can’t tell what the investigators did.
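As an aside, counterbalancing for order effects is cheap to do and to report. Here is a minimal sketch (the condition names are made up for illustration) of full counterbalancing, where every possible ordering of the conditions is assigned to participants in rotation:

```python
from itertools import permutations

def condition_orders(conditions):
    """Full counterbalancing: enumerate every ordering of the conditions.

    Assign the orderings to participants round-robin; with k conditions,
    a multiple of k! participants gives perfect balance. For larger k,
    a balanced Latin square is the usual cheaper alternative.
    """
    return list(permutations(conditions))

# Hypothetical three-condition study:
orders = condition_orders(["paper", "tablet", "phone"])
# 3 conditions -> 3! = 6 orderings; participant i gets orders[i % 6]
```

With more than three or four conditions, k! orderings is impractical, and a balanced Latin square (each condition appearing in each position, and before/after each other condition, equally often) is the standard substitute; either way, one sentence in the methods section settles the question for the reviewer.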
  • Results: I basically want to see that the results fulfill the promises made in the intro, contribute to the problem/gap outlined in the related work, and are reported in a way that is appropriate to the methods. I’m not looking for the results to be “surprising” (I know other CHI reviewers do, however), but I do expect them to be rigorously supported by the data you present. The only other note on the results section is that I’m not looking for a data dump, I probably don’t need to see the answer to every question you asked and every measure you collected — stick to the stuff that actually contributes to the argument you are making (both confirming and disconfirming) and the problem at hand.
  • Discussion: this is the section where I feel a lot of papers fall short. I won’t usually reject for this reason alone if the rest of the paper is solid, but a good discussion can lead me to checking that “best paper nomination” box. Basically, tell me why the community should care about this work. If you have interesting implications for design, that’s good, but it’s not necessary, and there’s nothing worse than implications for design that just restate the findings (e.g., finding: “Privacy is Important,” implication: “Consider Privacy”). When I look at an implication for design (as a designer), I want to have an idea of how I should apply your findings, not just that I should do so. Alternatively, I would like to hear about how your investigation contributes to the ongoing threads of work within this community. Did you find out something new about privacy in this context, or a new way of thinking about privacy in HCI? Does this work open up interesting new research directions and considerations (not just “in the future, we will build this system”)? If you used an interesting, new, or unusual method in your work, that could be another thing to reflect on in the discussion, because your approach could be useful to other investigators.

Okay, I have given away all of my reviewing secrets, because I don’t like rating good work down when the paper fails to present it with enough detail, consistency, or reflection. I hope that this is helpful to somebody out there! I say “yes” to a lot of papers already, but I’d like to be able to accept even more.

What next, Ubicomp?

Gregory Abowd was my Ph.D. advisor at Georgia Tech. Those of you who know him will not be surprised that he is sharing his opinions loudly and looking to start a debate with others in the community. His vision paper this year has provided an interesting look back (and forward) at the Ubicomp conference, and he asks us “What next, Ubicomp?”

You can read the whole paper and join in the Facebook discussion, but here are the main points of his paper, for the lazy:

  • Ubicomp (the paradigm) is so accepted as part of all computing that it is no longer a meaningful way to categorize computing research
  • Ubicomp (the conference) has many successes to celebrate, including: popularizing “living labs” style investigations, the “your noise is my signal” intellectual nugget, and bringing together researchers from diverse disciplines
  • Ubicomp (the conference) values both “application agnostic” novel technologies and “application driven” investigations of established technologies. And that’s good!
  • Ubicomp (the paradigm) embodies the “3rd generation” of computing. The next generation may bring a blurring of the lines between the human and the computer through cloud, crowd, nano, and wearable technologies.

During the presentation, he was also deliberately a bit incendiary to spark discussion, saying that the bad news is that: (1) defining Ubicomp as the 3rd generation means the generation might be over and we have to move on, (2) a lot of people don’t think of submitting their relevant work there, but rather put it in other places, and (3) Ubicomp as a research area is dead because ubiquitous computing is now, in fact, ubiquitous. The paper generated quite a bit of discussion at the conference. I want to add my two cents and (hopefully) put this idea out to a wider audience.

The main point that I want to make is that there is a distinction between Ubicomp the conference and Ubicomp the paradigm. I make these distinctions explicitly in the summary above, but the two points were a bit muddled in the paper and in the discussion. Yes, Ubicomp the paradigm is becoming so commonplace that it may no longer be an interesting way to categorize one’s work, but Ubicomp the conference seems to mostly have papers that focus on a very specific brand of that paradigm. My long name for Ubicomp would be “enabling cool sensors and application of cool sensors in the wild” (with inversely varying degrees of “cool” and “wild”). Pretty much all of the papers in this year’s proceedings fall into this category. From an informal survey of my colleagues, this also seems to be the general perception of the kind of paper you might think about submitting to Ubicomp, and it may help explain why work that Gregory views as relevant to the paradigm doesn’t get submitted to the conference. Would trying to change the perception of Ubicomp the conference to include more of the stuff that is touched upon in Ubicomp the paradigm revitalize the conference? I argue that it wouldn’t (because the paradigm is becoming less relevant to research) and that Ubicomp the conference should take an alternate approach:

  1. Embrace the perception that has developed of it in the community and strive to do (or rather, continue doing) good work in enabling and understanding use of sensors even after the low-hanging fruit are picked. Essentially, Ubicomp’s new name should be SensorComp (I’m not arguing for a formal change, but you get the idea).
  2. By any definition, the 4th generation of computing will build upon the 3rd and SensorComp can contribute to that by building ties with other communities who will find SensorComp’s work relevant, including communities focusing on applications (e.g., health), technologies (e.g., wearables), and paradigms (e.g., social computing). I especially like the idea of collocating with relevant conferences once in a while.

So, rather than arguing that the conference should change or that it should attract a different kind of work (good luck, cat herder!), I say that the conference should embrace what it does well, become THE place to publish that sort of work, and be really in-your-face about it to other communities who would find that sort of work useful. Gregory says that Ubicomp is dead. I say long live SensorComp!

Now, please tear my ideas to bits, esteemed colleagues.