Sam Henschen

Case Study: Well-being Games

 
 

Digital Games and Teens’ Well-being

 

Role: Lead UX Researcher (led studies for two of the four games; research collaborator on the other two).

Direct Collaborators: UX Research Team, Director of User Experience, Software Developers

Stakeholders: Director of Research and Measures, Product Manager, Director of Software Development

Methods: Usability testing/contextual inquiry, user interviews

TL;DR: Working under tight deadlines (less than two weeks to complete each study end-to-end), I led a rapid usability testing program for a series of digital well-being games for teenagers. I generated a range of insights that played a critical role in the direction of the games’ UX/UI. The product changes made as a result of my work contributed to the games’ viability as research measures for scientific studies.



The Background

HMI developed a series of digital games that would serve as a means to scientifically study the well-being of teenagers. By playing the games, users would be engaging in a range of well-being exercises. On the back end, the data generated would be used for scientific studies. Version 1 of four different games had been developed, but none had undergone any user testing. That’s where I came in. In this case study I’ll be discussing the two games for which I was the research lead.



Logistical Challenges

One of the challenges of this study series was the logistics of working with people under the age of 18. For example, we had to get consent forms signed by parents or guardians in addition to the participants. The extra steps and correspondence made it difficult to recruit quickly. We ended up with 4-6 participants for each study. I normally would have wanted at least a few more participants per game, but the numbers we had turned out to be sufficient in this case.

In addition, because of certain regulations, we had to get creative about which tools we used to run the study. For example, we had planned to use UserTesting to execute the study and collect our data, but UserTesting would not allow us to run our own study with people under 18 using their platform. So we had to run the study manually with standard video conferencing and screen recording. Ultimately the participants were very helpful and understanding. Our approach to the tools used did not cause them any friction.



Objectives

  • See if there are any red flags in the UX/UI before moving to the next step: scientific validation testing.

  • Gather user insights on how the UX could be improved:

    • How do teens feel about their experience playing the game? 

    • Is there anything confusing, discomforting, or distressing in the game?

    • How easy is it to understand the games’ instructions?

    • What, if anything, do we drop/edit/change as we move forward with game development?



Game 1: “Cosmic Garden”

  • The objective of the game is for the user to reflect on their feelings/emotions over time by planting “star seeds” in their digital garden (sample screens below).

 

1. Start screen

2. Welcome/instructions

3. Location selection

4. Emotion selection

5. Star seed falls to garden

6. User plants star seed

7. User answers further questions

8. Star seed falls again, user continues to plant

 
  • Each participant interacted with the game three times per day over the course of three consecutive days. I sent each participant a dedicated link for each of their game sessions. The participant would take a screen recording of their session and upload it to a dedicated repository. I moderated each of the participants’ last (ninth) session live and conducted a feedback interview directly afterward.

    • Because of a range of regulatory and logistical constraints, having the participants take screen recordings and upload them was our best option for conducting the study. I was worried about causing the participants friction, but thankfully they did not mind and the study was not adversely affected.

Synthesis and Results

I synthesized the qualitative data using affinity mapping on a Miro board and included relevant images and screenshots. Here were some of the major takeaways:

  • All participants enjoyed the game overall and seemed to get a lot out of the ways the game prompted them to reflect on their emotions, and on how aware of those emotions they were. The graphics, and watching the garden grow over time, were very well received.

  • Several users indicated that the answer choices for the questions didn’t always reflect what they were feeling and that the range of choices was limited.

    • Resulting product change: a wider range of answer choices was included for many of the question sets.

  • 3/4 participants noted that when they completed their last session, there was no UI to celebrate their completion or confirm that they had finished the game.

    • Resulting product change: a completion text bubble was added at the end of the game to confirm completion, though I felt that more of a celebration or affirmation should have been added.

  • Participants didn’t experiment much with moving their plants around the cosmic garden after the first session or two. However, several participants moved their plants by accident in later sessions.

  • Participants encountered several bugs that had not previously surfaced. For example, in some sessions users experienced a lag when they tried to drag the star, or the star wouldn’t move at all.

    • Resulting product change: The star interactions were reworked to be much more responsive.



Game 2: “Sync to Swim”

  • The objective of the game is for users to reflect on their values with regard to their relationships, leisure/hobbies, and learning/personal growth. Then in the second section of the game, users reflect on how aligned they feel in that moment to the values they previously selected.

 

1. Start screen

2. Game intro/instructions

3. Example of one of the values selection screens

4. Turtle eggs are laid according to completion of the values section

5. Part 2 intro/instructions

6. Example of value “alignment” exercise

7. Upon completion of Part 2, eggs hatch and baby turtles head to the ocean

8. Baby turtles join grown-up turtles in the ocean; end of game

 
  • I conducted this round of game testing via moderated usability testing/contextual inquiry, followed by a brief interview about the experience.

Synthesis and Results

I synthesized the qualitative data using affinity mapping on a Miro board and included relevant images and screenshots. Here were some of the major takeaways:

  • Users encountered a range of unforeseen bugs, which unfortunately made evaluating the UX/UI more difficult. However, I recorded all the bugs in detail, and the developers were later able to resolve them.

  • Users encountered several confusing UX/UI elements. For example, in the version of the game that users tested, part 1 (values selection) originally had a floating “Next” CTA that would allow users to proceed to the next part of the game. The CTA would appear even after they had completed only one of the three values selections, so almost all users clicked it immediately without realizing that there were two other turtle nests with exercises to complete.

    • Resulting UI change: the developers reworked the UI so that the Next CTA would appear only after the entirety of Part 1 was completed.

  • Users were also confused by some of the instructions. For example, during Part 1 (values selection), some users thought at first that they were supposed to rank their values. Another user didn’t realize at first that they could select multiple values.

    • Recommendation: I suggested that the instructions for each section be reiterated on the activity screen itself and not just on the intro screens.

  • Some users felt that the choices of values didn’t really reflect them. They would have preferred more choices, and perhaps even the ability to write their own. I made recommendations accordingly, and changes were considered.

Overall Conclusions and Reflections

  • In a relatively rapid testing context (two weeks to test each game), I was able to uncover a range of issues and insights within the user experience of these games. Without this series of tests, the data from the scientific studies the games were used to conduct would have been far less reliable, if usable at all.

  • I learned a great deal about user testing with people under age 18. It has a unique set of challenges, but it was very rewarding.

  • Users understood up front that the games would be contributing to scientific studies and would not be the traditional digital games they might be used to. Despite the fact that the games were more like digital exercises, users were much more delighted by playing them than I expected. Some users commented that the games allowed them to think and reflect in ways they never had before. So the games had a real-time positive impact on teens’ well-being, in addition to the resulting data being used to study teens’ well-being.

  • The games ended up being very successful as scientific research measures. They were used in a number of different studies, and several peer-reviewed publications were generated. However, I wish the product team (who had their own deadlines) had been able to iterate on the games further to improve the UX/UI.