A large insurance company deployed a new area of its mobile app and wanted to see how effective it was so they could further improve the user experience. During the project, new constraints arose and an unexpected pivot occurred.
The new area of the app used modern interaction patterns that needed validation. The project would be a success if we uncovered new insights into how users interacted with the patterns and the app.
Because the testing was meant to vet the interaction patterns, the participant demographics were intentionally broad: people who download mobile apps to their phones.
Team / Role
I single-handedly developed the research plan, wrote the script, interviewed and took notes, analyzed results, and presented findings to the business stakeholders. I had my team lead and other researchers available to bounce ideas off of.
We had only two weeks before presenting our findings, so formal recruitment and lab-based usability testing were not feasible. Additionally, the most current build was available only on test devices, so testing was limited to the company campus.
To begin, I gathered existing test plans, scenarios, and artifacts from the research team.
The Research Plan
Given the short project timeline, we could neither recruit external participants nor work in a formal usability lab. Our initial research plan had us using a mailing list to recruit stakeholder colleagues, with a conference room as the testing environment, where we would capture information on video with the assistance of a notetaker. We realized participants would have industry knowledge that could bias the results, but this was mitigated by our focus on interaction patterns and ease of use.
With this approach in mind, I wrote a rough test plan and revised it with assistance from my team lead and research colleagues.
Next, I wrote scenarios which ensured participants came into contact with all the new interaction patterns. After a round of revisions with my team lead, I sent the test plan and scenarios to our stakeholders for approval.
At this point, the stakeholders approved the plan but determined that the mailing list recruitment process would take longer than initially anticipated. Given this additional hurdle, we decided to shift our format from informal usability testing to guerrilla usability testing.
My team lead and I met with the client's UX research team to discuss strategies for performing guerrilla testing onsite at the client campus. This meeting gave us excellent information about the best places to find research participants, among other details, and confirmed that we were on the right path.
We decided to recruit participants in the campus cafeteria and lounge area between 11am and 2pm to catch the lunch crowds. Our technique for approaching participants was to identify individuals who had finished lunch and were using their mobile devices. I revised our test plan to include these changes and sent it to the business stakeholders for final approval.
As an incentive to participants, we offered a $5 Starbucks gift card for their time and input. I modified our existing consent form and began writing the script I would use when approaching potential participants. I decided to audio record participants to capture think aloud details and revised the scenarios to make room for note-taking both during and immediately following the testing sessions.
When I arrived on site for the first day of testing, I collected the iOS device, which I would be testing first. I organized my documents and arranged my script, incentive, screener, and consent form to make testing as smooth as possible, and then began approaching potential participants.
My approach to testing – recruit, run the scenario while making notes, review the audio immediately afterward – worked very well. The notes caught most issues and the audio helped me clarify anything I had not fully noted.
I was able to recruit five iOS participants in total – four on the first day and one on the second. I had anticipated recruiting Android participants as well, but the development team was actively working on the Android build, which made the test device unstable.
When Android devices were ready for testing the following week, I recruited all five participants in three hours, and testing went smoothly. Across both builds, we tested four scenarios with five iOS and five Android participants, who completed 14 scenarios in total.
Back at the office, I analyzed both the Android and iOS findings. I collected all instances of issues or praise into a spreadsheet and coded them for themes. Once I had identified themes, I provided each a rating based on technical difficulty / scope and user impact.
From these two ratings I determined an overall priority for each enhancement from the user's perspective, to present to the business stakeholders and for the development team to consider adding to their backlog.
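To make the prioritization step concrete, here is a minimal sketch of how two ratings can be combined into a single priority score. The theme names, rating scales, and the impact-over-difficulty formula are all illustrative assumptions, not details from the study itself.

```python
# Hypothetical sketch: each coded theme gets a user-impact rating and a
# technical-difficulty rating (1-5 scales assumed here). A simple way to
# derive an overall priority is to favor high impact and low difficulty.

def priority(impact: int, difficulty: int) -> float:
    """Higher user impact and lower technical difficulty yield higher priority."""
    return impact / difficulty

# Example themes (names and ratings are invented for illustration).
themes = [
    {"name": "unclear back navigation", "impact": 5, "difficulty": 2},
    {"name": "hidden swipe gesture",    "impact": 4, "difficulty": 4},
    {"name": "small tap targets",       "impact": 3, "difficulty": 1},
]

# Rank themes from highest to lowest priority for the backlog discussion.
ranked = sorted(
    themes,
    key=lambda t: priority(t["impact"], t["difficulty"]),
    reverse=True,
)
for t in ranked:
    print(f'{t["name"]}: {priority(t["impact"], t["difficulty"]):.2f}')
```

In practice the weighting could be adjusted (for example, weighting impact more heavily), but any consistent rule gives the team a shared, defensible ordering to discuss.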
In the analysis phase I identified thirteen enhancements which I illustrated using device screen captures. I presented a full readout of the findings to the business stakeholders along with user quotes which illustrated the problems.
This project was even more successful than I had expected, and the business stakeholders were happy with the findings.
I gained a lot from the experience, both personally and professionally. I learned how to hone an approach script, manage my paperwork for testing, perform research analysis, and present findings to business stakeholders.