Children as Inventors of Happiness Technologies


Happiness is a practice. People can achieve happiness by applying specific skills to their interactions with the world. These skills include gratitude (reflecting on and expressing thankfulness for positive aspects of one’s life), mindfulness (practicing awareness and acceptance of the present moment), and problem solving (reflecting on thoughts and feelings to find alternative interpretations and solutions). About 44% of schools in the U.S. include programs that teach such social and emotional skills to children (e.g., the Penn Resiliency Program), and a number of investigations have demonstrated the effectiveness of these approaches. However, one of the challenges faced by school-based programs is that they provide few (if any) opportunities for children to extend the practice of these skills to their lives outside of the classroom. Technology may help address this gap by providing engaging opportunities to revisit happiness practices outside of the classroom and integrate them into the everyday lives of children.

Prof. Stephen Schueller (Clinical Psychologist, Northwestern University) and I partnered to consider and design new technologies to support children in practicing gratitude, mindfulness, and problem solving skills. While Stephen has a great deal of expertise in positive psychology and I know a fair bit about designing technology for children, we also wanted to make sure that our approach represented children’s voices, priorities, and values. We collaborated with the Y.O.U. (Youth & Opportunity United) summer program to train twelve children to become “Happiness Inventors.” Through fourteen 90-minute sessions, we worked with the children to understand their definitions of happiness, to teach them age-appropriate gratitude, mindfulness, and problem solving exercises, and to provide them with the knowledge and structure to become inventors of new technologies to help kids practice happiness skills. Through these sessions, the children brainstormed over 400 ideas and developed many of them as sketches, prototypes, and videos. The video the children made documenting a few of their outcomes is below.

By conducting a content analysis of the children’s work, we found a number of important implications for future technologies aiming to support the practice of happiness skills. First, we found that children’s interpretations of positive psychology concepts like gratitude, mindfulness, and problem solving may not always match adult interpretations and perspectives of these concepts. For example, many children’s interpretations of happiness across all three concepts revolved around external influences on happiness, such as getting practical help (e.g., with homework) or avoiding unpleasant situations. These may not be typical concepts within positive psychology, but they are worth considering when developing interventions for children. If a child’s mental model of happiness and how it can be achieved does not match the model advanced by a particular intervention, the intervention’s effect may be limited for that child. Researchers should make the effort to engage with the mental models of the particular child audience and, if necessary, work on changing counterproductive belief structures before deploying a positive technology intervention.

Second, the children’s designs pointed to a number of specific features and engagement approaches that may increase the appeal of positive technologies. One noteworthy example is that children often imagined technological solutions that could understand and react to internal states, such as thoughts and emotions. Indeed, a growing number of efforts are attempting to glean psychological and emotional states from affective computing technologies as diverse as EEG, galvanic skin response, and automated sentiment analysis on social media. Positive technologies that make use of such features may have particular appeal for children, who are still learning to understand and interpret their own affective states and the affective states of others. Another noteworthy aspect is the number and diversity of approaches that the children posited for encouraging sustained engagement with interventions. While gamification and social interaction were two important approaches that have been considered in a number of previous interventions, there were also a few surprising ideas. One of these surprises was sensory engagement. Many of the children’s ideas posited that somebody could be motivated to engage with an intervention simply because it was beautiful and appealing to the senses, whether visual, aural, olfactory, or haptic. This is not a well-explored approach in the design of positive technologies, and it would be interesting to learn which smells people associate with happiness (our participants suggested a few, including warm chocolate chip cookies and the smell of one’s own bed).
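To make the sentiment-analysis idea mentioned above concrete, here is a deliberately simplified sketch of how a lexicon-based scorer works. The word lists and function name are invented for this illustration; real affective computing systems use trained models rather than fixed lists.

```python
# Toy lexicon-based sentiment scorer -- an illustrative stand-in for the
# automated sentiment analysis mentioned above, not a production approach.
# The word lists below are invented for this example.
POSITIVE = {"happy", "grateful", "fun", "love", "beautiful"}
NEGATIVE = {"sad", "angry", "bored", "hate", "scary"}

def sentiment_score(text):
    """Return (#positive words - #negative words) / #words, in [-1, 1]."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    if not words:
        return 0.0
    # True/False coerce to 1/0, so each word contributes +1, -1, or 0.
    hits = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return hits / len(words)

print(sentiment_score("I love my beautiful dog"))  # positive score
print(sentiment_score("I hate scary movies"))      # negative score
```

Even this toy version hints at why such signals might appeal to children: the technology gives an explicit, legible label to an affective state that the child may still be learning to name.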

Finally, another design insight from this investigation emerged from observing the types of technologies that children cited in their inventions. It was clear that children were not drawn to interventions for laptops or desktops. At the very least, this implies that web-based interventions for children should be designed using a mobile-first paradigm. However, we should emphasize that this is just a stop-gap solution, as even mobile-first web-based solutions struggle to achieve sustained engagement. Indeed, there may be an opportunity to increase engagement by thinking outside the box (or the computer, as the case may be here). The children in our study suggested a number of solutions that went beyond apps and websites. These instantiations included wearable accessories and apparel, toys and gadgets that may operate independently or in conjunction with a phone app, smart furniture and home infrastructure, robots and drones, and public kiosks and displays. It may be fruitful for designers to consider their positive technology interventions not as “sites” that children “visit,” but rather as tools that live alongside them in the physical world.

There’s a lot more in our Journal of Medical Internet Research paper, so check it out if you’re interested!

Advocating for HCI in Computer Science Departments

Note: This post generated a fair bit of discussion on Facebook and I’m adding (as block quotes) a few of the comments since this was first posted.

Recently, I’ve had a number of conversations where I’ve been asked whether I feel like I belong in a traditional Computer Science department. Given that my whole academic career has been centered on Computer Science (from declaring it as my major upon entry into undergrad to my current faculty position at U of M), I can definitely say that it is my community of practice. I get my funding from NSF CISE, I publish almost exclusively in ACM conferences, I teach only Computer Science courses — where else would I belong? But I also realize that it is a salient question for many colleagues in my field, as HCI[1] researchers are more and more likely to find their home in iSchools, Design, Psychology, Cognitive Science, and Human Factors programs. In this post, I want to articulate why I am of the opinion that Computer Science departments and colleges/schools of Computing should be doing more to recruit and retain Computer Scientists trained in HCI.

Meaningful Evaluations – Most Computer Scientists develop algorithms and systems with the idea of improving upon some existing baseline. While some metrics of success may be straightforward to articulate (e.g., time to run, prediction accuracy), many evaluations may require consideration of a richer and more nuanced set of factors. In the end, the most meaningful metrics look at how a system or algorithm performs in service of people. HCI is the branch of Computer Science where researchers have the training to plan and perform evaluations that lead to more meaningful, ecologically-valid comparisons. More importantly, Computer Scientists trained in HCI also have the fundamental understanding of the development process that allows them to carry out studies that lead not only to summative comparisons, but also to specific implications for how systems and algorithms can be improved in order to perform better in real-world evaluations. For example, an HCI evaluation may reveal the relative consequences of different types of algorithm accuracy errors, leading to an understanding of the kinds of improvements to accuracy that have the most significant effects.

I agree with all the above, and would add two things: 1) some HCI work is more oriented around systems building, and that is more naturally centered in a CS department. Sometimes it is a very technical application of existing techniques and sometimes it is new techniques – but this kind of work can readily live in multiple places. 2) I think the best traditional technical work happens when it is done in close collaboration with stakeholders. Even theoretical work can be stronger when considering the real needs and problems of users. Developing that understanding is best done in close proximity to them – thus integrating HCI into CS can help that process.

-Prof. Ben Bederson, University of Maryland

Diversity – Computer Science has a well-documented diversity problem, which has been growing in recent years. It has also been documented that both women and minorities are more interested in applying STEM to culturally-relevant broader impact contexts. Thus, it’s not surprising that of the few women and minority Computer Scientists who do make it through grad school, many choose to focus on HCI. While CS departments should be making an extra effort to recruit and retain qualified women and minority candidates, my personal experience with faculty hiring is the opposite. I frequently see that given two candidates with similar training and publication records (in terms of venues and quantity), women are more likely to end up in iSchools while men are more likely to stay in Computer Science. If this trend is real, it is bad for CS because it is a lost opportunity to increase diversity. The problem is two-fold. First, a CS department may not make an offer to a qualified candidate due to a narrow definition of what constitutes technical work or a contribution to Computer Science. Second, even when an offer is made, the candidate may perceive a department culture that is unfriendly towards HCI work and elect to accept a competing iSchool offer (despite the fact that iSchools may pay less and have a higher teaching load).[2] The New York Times recently ran an article showing that as more women go into a specific field, salaries in that field decrease. We don’t know the mechanism by which this happens, but my worry is that this is the pattern that is emerging here. Of course, there could be a number of alternative explanations, but I think that it can’t hurt to collectively keep an eye on this.

Learner-Centered Teaching – There are many CS courses that do not require any specific area specialization to teach, most notably introductory classes to CS, computing courses for non-majors, and freshman seminars. There are many reasons why HCI faculty may be particularly well-suited to achieve excellent outcomes in such courses: (1) there is a clear transfer of skills between designing user-centered systems and designing learner-centered curricula, (2) HCI research can be presented in these courses as a counter-example to popular misperceptions of Computer Science as an asocial, solitary pursuit, and (3) HCI faculty may provide stereotype-threat-breaking role models, given the larger proportion of women and minority faculty in HCI. In my opinion, Computer Science departments with a strong undergraduate education mandate should be actively seeking to recruit Computer Science HCI faculty.

I disagree with “there is a clear transfer of skills between designing user-centered systems and designing learner-centered curricula.” There’s a potential for transfer, but I’ve met some HCI (and even Ed) researchers who are really awful teachers. And I don’t like the idea of giving all other CS teachers an excuse, e.g., “I don’t do HCI, so I don’t have the background to be a good teacher.”

-Prof. Mark Guzdial, Georgia Tech

I wanted to keep it to these three top reasons, but there are a number of other ways that Computer Science as a whole benefits from the specific skills that Computer Scientists trained in HCI bring to the table, including more interdisciplinary work, broader impact of research, and consideration of critical issues (e.g., bias) at the design stage.

If you are currently in a Computer Science department and you agree with my arguments, there are a few clear steps you can take to help your department: (1) advocate for soliciting and hiring Computer Science candidates trained in HCI, (2) confront and question arguments against hiring a candidate that focus on vague concerns of “not being technical enough” (especially when the candidate is trained in Computer Science and publishes in computing venues), and (3) articulate narratives of Computer Science that include rather than exclude HCI.

[1] I use HCI (Human-Computer Interaction) here as the broader discipline, but this includes many related disciplines, including HRI, Game Studies, Social Computing, Mobile & Ubiquitous Computing, etc.
[2] I am uncomfortable calling out specific schools and contrasting specific people. I am basing this entirely on my personal experience on the job market 3 years ago. I received 3 offers from iSchools and 3 offers from CS departments. Your mileage may vary.

Purpose, Visibility, and Intersubjectivity in Video-Mediated Communication Technologies

Video-mediated communication may benefit from a number of novel technologies, but designing a good experience requires considering purpose, visibility, and intersubjectivity for both partners.

Skype, Google Hangouts, Facetime, and ShareTable are all examples of real-time video-mediated communication technologies. Designing, implementing, and deploying novel systems of this sort is a big research priority for me, and every semester I get a few entrepreneurial students approaching me with ideas for cool new technology to try in this space: robots, virtual reality, augmented reality, projector-camera systems, and more. Frequently, I ask them to consider a few things first, and if you’re new to thinking about computer-mediated communication, these questions may be helpful for you as well (many of these ideas come from my work on play over videochat).

In this case, let’s assume the “base case” of two people—Alice and Bob—using a potential new technology to communicate with each other (though the questions below can definitely be expanded to consider multi-user interfaces). Consider:

  1. (Purpose) Why is Alice using this technology? Why is Bob? The answer should be specific (e.g., not just “to communicate,” but “to plan a surprise party for Eve together”) and may be different for the two parties. It’s good to come up with at least three such use cases for the next questions.
  2. (Visibility) What does Alice see using this technology? What does Bob see? Consider how Alice is represented in Bob’s space and how Alice can control her view (and then flip it and consider the same things for Bob). Consider whether this is appropriate for their purposes. For example, maybe Alice is wearing VR goggles and controlling a robot moving through Bob’s room. It’s cool that she can see 360-degree views and control her gaze direction, but what does Bob see? Does he see a robot with a screen that shows Alice’s face encased in VR goggles? Does this achieve a level of visibility that is appropriate for their purpose?
  3. (Intersubjectivity) How does Alice/Bob show something to the other person? How does Alice/Bob understand what the other person is seeing? The first important case to consider is how Alice/Bob bring attention to themselves and how they know whether their partner is actually paying attention to them. If Alice is being projected onto a wall but the camera for the system is on a robot, it will likely be difficult for her to know when Bob is looking at her (i.e., when he’s looking at the wall display, it will seem that he’s looking away from the camera). It’s also useful to consider the ability to refer to other objects. Using current videochat, this is actually quite hard! If Alice points at her screen toward a book on the shelf behind Bob, Bob would have no idea where she’s pointing (other than generally behind him). Solving this is hard—it’s definitely an open problem in the field—but the technology should at least address it well enough to support the scenarios posed in question 1.
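The gaze-offset issue in question 3 can be made concrete with a little geometry. The sketch below (the function name and pinhole-camera simplification are my own, for illustration) maps a screen pixel to a viewing ray anchored at the camera; whenever the display showing the remote partner sits far from the camera, as in the wall-projection-plus-robot example, the ray the camera reports and the direction the partner perceives diverge.

```python
import math

def pixel_to_ray(u, v, width, height, fov_deg):
    """Map a pixel (u, v) to a unit viewing ray in the camera's frame,
    using a simple pinhole model. The ray is anchored at the CAMERA, not
    at the display -- so when display and camera are far apart, perceived
    and actual gaze directions diverge."""
    f = (width / 2) / math.tan(math.radians(fov_deg) / 2)  # focal length, in pixels
    x, y = u - width / 2, v - height / 2
    norm = math.sqrt(x * x + y * y + f * f)
    return (x / norm, y / norm, f / norm)

# Looking at the screen center means looking straight down the optical axis...
print(pixel_to_ray(320, 240, 640, 480, 60))
# ...but looking at a wall display well off to one side yields a very
# different ray, which reads to the remote partner as "looking away."
print(pixel_to_ray(40, 240, 640, 480, 60))
```

Nothing in this toy model solves the pointing problem, of course; it just shows why naive camera placement systematically misrepresents attention.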

Generally, I find that new idea pitches tend to propose inventions that provide a reasonable experience for Alice but a poor one for Bob. It is important to consider the purpose, visibility, and intersubjectivity of the experience for both of them in order to conceive of a system that is actually compelling.