- Research
- Strategy
Users were frustrated by the lack of a modern interface for legacy software. We completed 40 hours of software training and a 160-hour heuristic evaluation.
Modernizing Legacy Systems
Challenge
Our client had received complaints from software users that their flagship retail transaction management software was difficult to learn, confusing to use, and (in some cases) causing frustrating system errors during important client meetings. Since the users of this system are employees of a franchise, not the national brand, one contextual challenge shaped this project: users don't have the option to use any other software; they're captive. The product team that commissioned our research anticipated using it to set the stage for improvement prioritization over the subsequent 12-24 months.
Approach
Scenario-Based QA
The software the client asked us to evaluate has thousands of possible permutations of screens, client experiences, and required inputs. As complex as that sounds, the software helps retail agents record answers from customers, answers that are sometimes given directly across the desk and sometimes retrieved through asynchronous remote collaboration with the client. As such, it was imperative that we create a research guide in coordination with the product team, ensuring that our client-side partners agreed on the exact user scenarios we'd replicate for research and evaluation. We used the scenarios to pressure-test the client's system across varying browsers, screen sizes, and operating systems, as sketched below.
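To make that pressure-testing concrete, here is a minimal sketch (in Python) of how a scenario-by-environment test matrix like ours can be enumerated. Every scenario name, browser, viewport, and operating system below is a hypothetical stand-in, not the client's actual research guide.

```python
from itertools import product

# Hypothetical scenarios and environments; the real research guide was
# developed with the client's product team and is not reproduced here.
scenarios = ["new-client-intake", "in-person-transaction", "async-remote-followup"]
browsers = ["chrome", "firefox", "edge", "safari"]
viewports = [(1366, 768), (1920, 1080), (1024, 768)]
operating_systems = ["windows-10", "macos"]

# Every combination becomes one pressure-test pass through the system.
test_matrix = [
    {"scenario": s, "browser": b, "viewport": v, "os": o}
    for s, b, v, o in product(scenarios, browsers, viewports, operating_systems)
]

print(f"{len(test_matrix)} test passes planned")  # 3 * 4 * 3 * 2 = 72
```

A matrix like this keeps coverage auditable: the team can confirm at a glance that every agreed-upon scenario was exercised in every supported environment.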
Heuristic Evaluation
We developed a test plan for two researchers to each independently complete and document at least three end-to-end passes through each phase of the user journey. We used our knowledge of adjacent systems to compare and contrast how heuristic patterns in the client's other systems might inform future changes to this legacy program. Our report included detailed findings on error severity, difficulty of fix, experience upside, and a description of the lift required to remediate each item.
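As an illustration of how findings like these can be structured, here is a hedged sketch of a record schema mirroring the report fields named above. The field names and rating scales are illustrative assumptions, not the actual deliverable format.

```python
from dataclasses import dataclass

# Hypothetical schema for one heuristic finding; scales are assumed, not
# taken from the client deliverable.
@dataclass
class Finding:
    heuristic: str          # e.g. "Visibility of system status"
    description: str        # what the researcher observed
    severity: int           # 1 (cosmetic) to 4 (usability catastrophe)
    fix_difficulty: int     # 1 (trivial) to 5 (major rework)
    experience_upside: int  # expected user-experience gain, 1 to 5
    remediation_lift: str   # plain-language description of effort required

example = Finding(
    heuristic="Error prevention",
    description="Submitting a transaction with an empty date field crashes the screen.",
    severity=4,
    fix_difficulty=2,
    experience_upside=5,
    remediation_lift="Client-side validation on one form; roughly one sprint.",
)
```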
Contextual Inquiry
We physically visited three retail offices where this software was in use and watched our client's employees enter case studies into the system. Since the retail computer systems included security requirements that prohibited us from recording the screen, we used a multi-camera GoPro setup to document the user's screen, keyboard inputs, and mouse movements. We asked questions (when appropriate) as the user navigated through the practice factuals and asked them to narrate out loud as they completed the process, including any stumbling blocks from authentic client experiences.
Results
Our top finding on this project was that newer users lacked sufficient training or sandbox experience to show mastery of the retail system. Seasoned users simply didn't have the same problems as rookie users. Trouble is, retail accounting customers don't care whether it's the employee's first day or not.
The nature of this particular transaction was too important: the system had to work the first time, no hiccups. That's especially true because the software is used right in front of the customer. If customers don't trust the software, they won't want the employee to enter their private financial information into it; they'll get up and walk out. While investigating which UI updates the team might want to prioritize, we also surfaced nearly a dozen areas where the existing legacy software could improve almost overnight.
Ultimately, we delivered a (number)-item heuristic report generated by 30 hours of roleplaying through the system in the client-supervised usability testing facility. Our reporting generated over 600 observations that we coded into custom feedback taxonomies built on the Nielsen Norman Group's usability heuristics. This searchable, sortable usability report helped the team make agile decisions about their product roadmap throughout the next stage of the product lifecycle.
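To show what "searchable, sortable" means in practice, here is a minimal sketch of triaging coded observations by heuristic and severity. The observations, heuristic labels, and severity values below are hypothetical examples, not entries from the actual 600-observation report.

```python
from collections import Counter

# Hypothetical coded observations; the real report contained 600+ entries.
observations = [
    {"id": 1, "heuristic": "Consistency and standards", "severity": 3},
    {"id": 2, "heuristic": "Error prevention", "severity": 4},
    {"id": 3, "heuristic": "Visibility of system status", "severity": 2},
    {"id": 4, "heuristic": "Error prevention", "severity": 3},
]

# Sortable: surface the most severe items first for roadmap triage.
triage_order = sorted(observations, key=lambda o: o["severity"], reverse=True)
print(triage_order[0])  # the single worst observation

# Searchable: tally observations per heuristic to spot problem clusters.
by_heuristic = Counter(o["heuristic"] for o in observations)
print(by_heuristic.most_common())
```

Grouping observations this way lets a product team see which heuristics attract the most (and most severe) findings, which is what makes the report useful for agile roadmap decisions.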