It's only a pleasure Lisi, and glad I could be part of you amazing Testing Tour. I learned many aspects today which I will use going forward regarding approach and things to keep in mind. I really enjoyed pairing with you on this 1. #accountabilityBuddy #TestingTour— Toyer M (@tottiLFC) July 17, 2018
What Topic to Pair On?

That's one of the first questions that commonly comes up after scheduling a session. In this case, Toyer said he would love to do something from scratch, gather test ideas together, and align our thinking. We know each other and our viewpoints quite well, but we hadn't yet practiced testing together. He wanted to see how we really go about testing certain things, ask questions, and see each other's thought processes while actually testing.
Based on his input, I thought we could tackle an application neither of us knew, explore it, and come up with a very first test strategy based on the knowledge we gathered. As I knew that Toyer had discovered mind maps for his work some time ago and learned to love them for many purposes, I thought this could be our way to document our very first strategy draft, keeping it lightweight, easily editable, and visual.
Having a list of potential applications at hand, I reached out to Toyer and asked whether he wanted to learn more details about my idea before our session, or rather not. He answered: "I'm tempted to say share more information.. but I would like to be surprised too, as I want to see how I can tackle something without preparing". So he chose the latter option, to which I can really relate. Over the last few years I have continuously been learning to tackle things without over-thinking; and I'm not done learning this yet.
Evolving a Strategy While Exploring

At the beginning of our session I presented my topic idea of coming up with a test strategy for a new product, and Toyer agreed to go for it. Lucky me, otherwise we both would have had to cope with an unprepared situation! ;-) I then offered different options for our system under test, from which Toyer chose Blender, an open source 3D creation tool. I had some rare encounters with this application back at my first company, when we developed an AI middleware for game developers, but had hardly touched it since. Toyer thought it looked really promising, as we normally don't get to test these kinds of applications.
Toyer shared that, first of all, he would ask what kind of need this application wants to fulfill and do related research upfront. For the limited time box of our session, however, we decided to skip this and explore right away. Toyer accepted my suggestion to draft our strategy in a mind map, so we created it and continuously grew it according to our findings. He also agreed to do strong-style pairing while exploring, so I started up my favorite mob timer, set the rotation to four minutes, and off we went. It quickly became clear that we knew each other well: collaboration was easy and communication fluent. We could fully focus on exploring Blender from a high-level point of view, trying to grasp its purpose and main capabilities and identifying limitations and potential risks. We were actually doing a recon session, just as Elisabeth Hendrickson describes in her awesome book Explore It! Reduce Risk and Increase Confidence with Exploratory Testing.
Throughout our session we gathered lots and lots of findings and discoveries, adding more and more important points to our test strategy.
- Learnability. The application is not intuitive at all; it's a real expert tool. Still, everybody is a first-time user once, and even if you know the domain, the user experience this product offers is not that great.
- Functional scope. The more we explored, the more functionality we discovered. The whole tool seems really powerful, but again, is not easy to understand.
- Screen resolution. The GUI is cluttered with many UI elements, sidebars, popups and more. On our laptop screen that was already a challenge, and it will still be one on larger screens.
- Menus, popups and tooltips looked very similar which made it hard to distinguish the purpose of each.
- Feedback on actions was often missing or confusing.
- Some sidebars displayed content related to views we had previously visited instead of updating with information from the current view. This way, they sometimes obscured needed information.
- Some actions worked in one area but not in the same way in another.
- Some sidebars were named x but the header label said y.
- Some delete actions asked for confirmation, others just instantly deleted the item.
- Portability. We tested Blender on macOS. The product is also offered for Windows and Linux. At several points we found strange, unexpected behavior and assumed it might have been due to porting issues in the macOS version. For some points, I could even confirm that assumption while writing this blog post and checking the Windows version of Blender.
- Maintainability and reusability. The GUI offered many sidebars, popups and views that shared similar layouts and menus. We noted to investigate whether they were duplicated or re-used components.
- Robustness. We encountered error messages on invalid input that was not caught or prevented.
- Automatability and testability. The application offers a Python API. We found Python commands in tooltips, the API reference in the help menu, and an integrated Python console. The console behaved differently from the terminals we knew, but it was still very interesting that you could automate operations; this would also increase the product's testability.
- Discoverability and documentation. The help menu offered an operator cheat sheet; clicking on it triggered a temporary message to look at the OperatorList.txt, which we could not find. Only later did I learn that we had not come across the text editor where you could open the named text file. What a hidden support feature! Also, we found the linked release notes page to be empty. We didn't dive deeper into the manual, but all available documentation would have to be tested as well, especially for an expert tool like this.
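To illustrate the automatability point above: Blender's embedded Python API (the `bpy` module, with `bpy.ops` for operators and `bpy.data` for scene data) could be used to script a small smoke test instead of clicking through the GUI. This is just a sketch of the idea, not something we wrote in the session; `bpy` only exists inside Blender's own Python, so the snippet guards for running elsewhere, and the exact operator names may vary between Blender versions.

```python
# Hedged sketch: a scripted smoke test via Blender's Python API.
# `bpy` is only importable inside Blender, so we fall back to None
# when running in a plain Python interpreter.
try:
    import bpy  # Blender's embedded API: bpy.ops (operators), bpy.data (scene data)
except ImportError:
    bpy = None  # running outside Blender


def scene_object_count():
    """Number of objects in the current scene, or None when outside Blender."""
    if bpy is None:
        return None
    return len(bpy.data.objects)


if bpy is not None:
    # Inside Blender (e.g. pasted into the integrated Python console),
    # adding a primitive should create exactly one new object.
    before = scene_object_count()
    bpy.ops.mesh.primitive_cube_add()  # add a default cube
    assert scene_object_count() == before + 1
```

Run from Blender's integrated Python console, a check like this exercises the same operators the GUI triggers, which is exactly why the console struck us as a testability win.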
Time to Reflect

We covered a lot in the limited time. We gathered lots of insights, ideas, and assumptions to verify. We tested a product in a domain neither of us knows much about, a desktop application instead of our usual web applications. We tried to gather information and keep a holistic view on things, not diving deep yet, focusing on uncovering the different aspects to test for such a tool, all the while mapping out our world and the points to tackle in our test strategy. As we learned more, our strategy evolved. We were far from reaching an end. If this were our product to test, we would iterate over the strategy as we learned more.
The unknown domain had its own charm. We approached the product as a black box, not looking under the hood at first. We brought lots of testing knowledge but quickly saw we lacked the domain knowledge. Toyer made an important point here: when hiring a tester for this kind of product, it would be best to look for someone who had already been exposed to these kinds of tools or related areas of expertise. We could still provide lots of value and ask questions that might otherwise go unasked, but we would quickly pair up with a product person or business analyst to model the product from a domain point of view, and also sit with developers to model the product's architecture.
Pairing up once again helped a lot: to see different things at the same time by looking at different parts of the screen, to grow the mind map far faster than either of us would have on our own, and to include different thoughts and viewpoints.