UX QA is a test of whether a product’s end-to-end interaction and experience design is functional, coherent, and contextually appropriate. I developed a UX QA process for Google Assistant Auto that enabled a generalist UX team (i.e., designers without a conversation design background) to understand and validate voice interaction test cases specific to Assistant, to individual vehicle makes, and to combinations of the two.
Using Google Assistant with Android Auto poses a lot of technical challenges. Users expect access to all of their favorite Assistant features, but they also expect Assistant to do anything Android Auto can do and to know about the vehicle itself. To QA these features, a tester needs to know how to invoke them and what’s supposed to happen!
Working cross-functionally, I adapted engineering test cases for every Assistant feature, as well as tests designed to run on a vehicle emulation workbench. My goal was to simplify the process enough that anyone with a strong UX background and a basic understanding of voice experiences could execute our test cases, fully evaluate the interaction design, and qualify it as passed or failed.
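As an illustration only, a voice test case of this kind can be captured as a simple structured record: the invocation phrase, the expected behavior in plain language, and a pass/fail result. This is a minimal hypothetical sketch, not the actual format Google uses; all names, fields, and the sample utterance are assumptions.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Result(Enum):
    PASS = "pass"
    FAIL = "fail"
    BLOCKED = "blocked"  # the test path could not be executed


@dataclass
class VoiceTestCase:
    """Hypothetical record for one voice-interaction UX QA test case."""
    case_id: str
    feature: str                 # e.g., an Assistant, vehicle, or Android Auto feature
    utterance: str               # how the tester invokes the feature
    expected: str                # what is supposed to happen, in plain language
    steps: list = field(default_factory=list)  # optional setup steps
    result: Optional[Result] = None
    notes: str = ""              # defect details or triage context

    def record(self, result: Result, notes: str = "") -> None:
        """Qualify the test case as passed, failed, or blocked."""
        self.result = result
        self.notes = notes


# Usage: a generalist UX tester executes the case and records the outcome.
case = VoiceTestCase(
    case_id="NAV-001",  # hypothetical ID
    feature="navigation",
    utterance="Hey Google, take me home",
    expected="Navigation starts to the saved Home address with a spoken confirmation.",
)
case.record(Result.PASS)
```

A flat record like this keeps the bar low for non-specialist testers: everything needed to run and judge the case fits in one readable unit.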
I increased the quality of the testing and, most importantly, the team’s ability to identify and report defects. I partnered with design, product, and engineering leaders on every team that needed to test voice, vehicle, or Android Auto features, and identified their UX goals, documentation, and tooling sources.
I also ran day-to-day testing, triaged bugs and blocked test paths, and flagged resource and competency gaps to leadership. Finally, I delivered recommendations for scaling and simplifying the process by partnering with engineering and UX to bring QA into their workflows earlier.