Giving Back to the Participants: When You Can’t Leave a Working System

My research depends on participants volunteering their time, energy, and personal stories. They are informants, co-designers, beta testers, and content experts who contribute so many insights and yet frequently remain anonymous in publications resulting from the work. Like many other researchers, I struggle with how I can give something back (beyond free pizza or gift cards for participation).

One of my favorite readings on the topic of “giving back” is the Le Dantec & Fox CSCW 2015 paper on community-based research, which examines how institutions may exploit vulnerable communities (despite having the best intentions) without actually contributing anything of value back to the participants. University research may appear to promise things like policy and institutional change, increased access to resources, or (in the case of HCI) working technologies that address user needs. The truth is that these are almost never delivered in a form or at a time that can directly benefit study participants. In my own research, I have no source of funding that would support developing and maintaining a production-quality system beyond the initial novel prototypes in constrained deployments. So is there a way to give back that doesn’t over-promise? Here are five strategies that I’ve tried in my work:

PARTNERING with an ORGANIZATION (which is willing to develop and maintain prototype ideas): If your research is done in collaboration with an existing technology company, your findings and prototypes can be directly integrated into their services without the researcher needing to coordinate long-term maintenance (in turn, the company gets a free research team and initial development of cool feature ideas). This is currently our approach in our collaboration with CaringBridge.org, as our findings are directly driving new features and directions (this work is largely in progress, but here are some publications). Obviously, these partnerships can take a lot of time and effort to set up and manage, but I really think this is a great pathway to positive impact.

STUDY PARTICIPATION as BENEFIT: In many cases, being part of a participatory design process can tangibly benefit the co-designers. For some good reading, I love the way the KidsTeam at University of Maryland has reflected on the perceived benefits of and ethical considerations for co-design with children. It is certainly important to think about the possible benefits up-front and consider how they can be amplified and measured. If the benefits are pedagogical, this may mean thinking through the specific learning outcomes holistically and for each session. When planning my studies, I consider how being part of them may help participants identify new resources that may personally help them, develop actionable resilience skills, etc. This does typically mean that I can’t conduct my studies as a single design workshop, instead requiring multiple sessions for the partnership to be able to yield benefits both to the researchers and to the participants.

LEAVING BEHIND STOP-GAP SOLUTIONS (with currently-available technologies): While you can’t usually leave behind and maintain a fully functional prototype system, there may be great opportunities to leave behind solutions that may be better than what the family had available in the past. For example, after running a study with a novel prototype that I had to remove, I left behind dedicated smartphones set up for easy videochat (just off-the-shelf phones with Skype installed). This wasn’t as good as my prototype, but it was much better than the previous audio-only setup that the family used. This did require additional resources (in this case, phones), but it’s worth it if it leaves the participants better off than before you got there.

AUTHORSHIP or ACKNOWLEDGEMENT: This one doesn’t apply if you’re working with communities that face stigma or want to remain anonymous, but there are cases where it may be ethical to credit creators or acknowledge co-designers. We are currently working on a project with middle school co-designers, where all contributors will be able to choose to be listed as co-authors. After all, they spent a whole year with us learning how to be designers and coming up with ideas — they should get something that they can put on their resumes for college! This does require thinking things through at study-planning time so that a special checkbox can be included on the consent form.

DISSEMINATING INFORMATION on BEST PRACTICES (with currently-available technologies): After doing a formative study, you may be in a really good position to share best practices for specific challenges. For example, after completing my research on parent-child communication in separated families, I wrote a blog post on practices, strategies, and tools that may help families make best use of currently-available systems. I distributed this to my participants directly, but also left it online for any other families who may be looking for ideas or advice. In another example, I led a webinar for medical practitioners to understand the technologies and practices that work well for people in recovery. Since the medical practitioners are often the first “line of defense” for people in recovery, this is a great way to disseminate potentially-helpful information. The good news is that this approach is also really consistent with and rewarded by the NSF expectations for Broader Impact.

Are there practices that you’ve tried in your research that have helped you give back to your participants? I’d love to hear other ideas, because I really think this is a fundamental tension and challenge in HCI research.

Note: This blog post started as a conversation at the CSCW 2018 Workshop on “Conducting Research with Stigmatized Populations.” Stigmatized participants have greater risks from participating in research, so thinking about ways of amplifying benefit using strategies above is particularly critical. However, there is no reason why we can’t think about amplifying benefit for all populations!

Six Strategies for Including Children as Stakeholders

Adults are not good proxies for understanding the needs and experiences of children. And yet, I see the “proxy” approach surprisingly frequently in studies of home and family, health informatics, education, and other HCI domains. I’ve recently written a case study on this topic for a new edition of an HCI textbook (Baxter, K., Courage, C. & Caine, K. 2015. Understanding Your Users: A Practical Guide to User Research Methods, Tools, and Techniques. Morgan Kaufmann) and I want to share some of the ideas here.

When seeking to understand families and children, it is critically important to include children in the research activity. While it may be tempting to use parents as proxies for gauging the family’s needs, this is insufficient for understanding the complexities of the family dynamic. In my own work, I’ve seen many examples of non-consensus and many instances where parents’ perceptions of their child’s experience diverged from the child’s. I provide several examples in the case study, but here I instead want to focus on six specific strategies to help researchers include children in user studies:

  • Working with children requires special considerations while preparing the protocol and assent documents. Children are not able to give informed consent, which will have to be obtained through their parents, but the assent procedure gives them the opportunity to understand their rights and what will happen in the study. It is important to emphasize to the child that they can withdraw from the study, decline to answer any question, or take a break at any time. In designing both the assent document and interview protocol, keep specific developmental milestones in mind. In my experience, I have been able to interview children as young as 6; however, I always check the comprehension level of the protocol by piloting with friendly participants in the target age group.
If you interview in a lab and you wear a lab coat, you're gonna have a bad time talking to kids.

  • To encourage children to be open and honest, the researcher should actively work to equalize power between the child and the researcher. Children spend their lives in situations where adults expect the “right” answer from them. To encourage the child to share honest opinions and stories, the researcher needs to break through this power differential. There are a number of details to consider here: choose an interview setting where the child has power (e.g., playroom), dress like an older sibling rather than as a teacher, encourage use of first names, and let the child play with any technology that will be used (e.g., audio recorder) before starting. Above all, emphasize that you are asking these questions because you do not yet know the answers, that the child is “the best at being a kid,” and that there is no wrong way to answer any of the questions.
  • Parents make the decisions in a study that concerns their children, which introduces a unique constraint. As a researcher, you will have to respect the parents’ decisions: you may not be able to interview the child separately and you cannot promise the child that anything will be kept private from the parents. However, you can explain to the parents why it is important for the child to have a chance to state their perspective in private. In my experience, most parents are willing to provide that private space, especially when the study is being conducted in their home where they worry less about the child’s comfort.
  • Children may struggle with abstraction, so ask for stories about specific situations. For example, instead of asking, “how do you and your dad talk on the phone when he’s traveling?” ask, “what did you tell your dad last time he called you?” It may take more questions to get at all the aspects you want to discuss, but it is much easier for children to discuss things they recently did rather than provide an overall reflection. This is most important with younger children, but is a good place to start with any participant.
Children talk more during show-and-tell than if you just ask them questions. (That’s my brother, by the way!)

  • Additional effort may be necessary to engage a shy child, and one way to do so is to encourage the child to show-and-tell. For example, “Show me where you usually are when you think about your mom” or “Show me some apps that you use with your dad on your phone.” Use the places and objects shared as stepping-stones to ask more nuanced questions about feelings, strategies, and preferences.
    An example of a child’s drawing of technology (a holograph robot for communication).

  • Lastly, incorporating drawing and design activities may help the child get into the “open-ended” nature of the study, be willing to be a little silly, and reveal what may be most important to them. For example, I asked children, “What might future kids have to help them stay in touch with their parents?” These drawings are not meant to produce actionable designs, but they reveal important issues through their presentation. Listen for key words (e.g., “secret” = importance of privacy), look for underlying concepts (e.g., “trampoline” or “swimming pool” = importance of physical activity), and attend to common themes such as who would be interacting with their future device, where, and how often.

I hope that these six strategies demystify some of the processes of including children in a study. Please, let me know if you have any additional advice or any experiences in working with kids that you’re willing to share.

Designing Technology for Major Life Events Workshop

High emotional impact and the value of the journey are two big aspects of designing tech for major life events.

While at CHI, I got the wonderful opportunity to help organize the workshop on Designing Technology for Major Life Events along with Mike Massimi, Madeline Smith, and Jofish Kaye. We had a great group of HCI researchers with a diverse range of topics: gender transition, becoming a parent, dealing with a major diagnosis, bereavement, and more. My own interest in the topic grew from my experience designing technology for divorce and technology for recovery from addiction. In one of the breakout groups, we discussed the challenges of designing technology in this space and some of the ways we’ve dealt with these challenges in our work. In this post, I want to highlight a few of these:

Building Tech is Risky. Building a system requires the designer to commit to specific choices and it’s easy to find something that wasn’t adequately considered after the fact. In tech for major life events, this challenge can be exacerbated because the consequences of a failed design might have big emotional repercussions (e.g., tech messing up some aspect of a wedding). Sometimes, it is a big question of whether we even should try to bring tech into a given context.

Ethics of Limited Access. Building technology to support a major life event may mean excluding those without the financial means, skills, motivation, language, etc. to use the provided intervention. Additionally, we frequently stop supporting a prototype technology at the end of the study, which can be really problematic if it was providing ongoing benefits to the participants. Again, because of the high stakes involved, issues of ethics of access to technology may be exacerbated when designing for major life events.

Tension Between Building Your Own and Leveraging Existing. Many systems we build require some critical mass of adoption before they are really useful. This is particularly important with tech for major life events because there may be relatively few people facing a particular relevant context at any point in time. One of the ways to deal with this is to piggyback on existing systems (e.g., building a Facebook app instead of a new SNS), but this may cause problems when the underlying technology makes changes outside of the researcher’s control (e.g., privacy policies change, APIs stop being supported, etc.).

Asking the Right Questions about the System You Built. The final challenge is understanding what kinds of questions to ask during the system evaluation. On one hand, it is important to go into the evaluation with some understanding of what it would mean for the system to be successful and the claims you hope to make about its use. On the other hand, it is valuable to be open to seeing and measuring unintended side effects and appropriations of the technology.

I think my two major take-aways from this discussion were a greater appreciation of how difficult it is to actually build something helpful in this space and the insight that many of these problems can be partially addressed by getting away from the type of study that focuses on evaluating a single system design using a small number of metrics. The risks of committing to a specific design solution can be mitigated by providing multiple versions of the intervention, either to be tested side-by-side or to let participants play around until they decide which solution is a better option for them. The ethics of access can be ameliorated by providing low-tech and no-tech means of achieving the same goals that your high-tech approach may support (e.g., Robin Brewer built a system to let the elderly check email using their landline phones). Planning for multiple solutions when building on others’ APIs can lead to a much more stable final system (e.g., with the ShareTable, we could easily switch from the Skype API to the TokBox API for the face-to-face video). And lastly, the problem of figuring out what to ask during and after a system deployment can be addressed by combining quantitative methods that measure specific predicted changes with qualitative methods of interviewing and observation that are more open to on-the-fly redirection during the course of the study. Overall, diversity of offered solutions, flexibility under the hood of your systems, and diversity of methods used in the evaluation lead to a stronger study and understanding of the target space.
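That kind of under-the-hood flexibility is easiest when all vendor-specific code sits behind a small interface of your own. Here is a minimal Python sketch of the idea; the names (`VideoBackend`, `start_call`, and the stub Skype/TokBox wrappers) are hypothetical placeholders, not actual SDK calls:

```python
from abc import ABC, abstractmethod

class VideoBackend(ABC):
    """Hypothetical abstraction layer: application code (e.g., a ShareTable-
    style prototype) depends only on this interface, never on a vendor SDK."""

    @abstractmethod
    def start_call(self, peer_id: str) -> str:
        """Begin a video session with a peer; returns a session handle."""

class SkypeBackend(VideoBackend):
    # In a real system, the vendor's SDK would be called inside this wrapper.
    def start_call(self, peer_id: str) -> str:
        return f"skype-session:{peer_id}"

class TokBoxBackend(VideoBackend):
    def start_call(self, peer_id: str) -> str:
        return f"tokbox-session:{peer_id}"

def make_backend(name: str) -> VideoBackend:
    """Choose the provider via configuration rather than code changes."""
    backends = {"skype": SkypeBackend, "tokbox": TokBoxBackend}
    return backends[name]()

# If one API is deprecated mid-deployment, only the wrapper changes;
# the rest of the prototype keeps calling start_call() unmodified.
session = make_backend("tokbox").start_call("parent-phone")
```

The point is not the specific pattern but the discipline: isolating each third-party dependency behind one seam you control keeps a research prototype deployable even when the underlying service changes outside your control.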